# Information content in continuous attractor neural networks is preserved in the presence of moderate disordered background connectivity

Tobias Kühn, Rémi Monasson

Published 2023-04-26 on arXiv: http://arxiv.org/abs/2304.13334v2
###### Abstract
Continuous attractor neural networks (CANN) form an appealing conceptual model for the storage of information in the brain. However, a drawback of CANN is that they require finely tuned interactions. Here we study the effect of quenched noise in the interactions on the coding of positional information within CANN. Using the replica method, we compute the Fisher information for a network with position-dependent input and recurrent connections composed of a short-range (in space) and a disordered component. We find that the loss in positional information is small for not too large disorder strength, indicating that CANN have a regime in which the advantageous effects of local connectivity on information storage outweigh the detrimental ones. Furthermore, a substantial part of this information can be extracted with a simple linear readout.
## I Introduction
The ring attractor neural network was proposed by Amari in the 70's as a practical way to memorize a collective variable within a noisy neural population [1]. This work opened the way to various theoretical applications of the concept of continuous attractor neural networks (CANN), _e.g._ in the contexts of orientational tuning [2] or hippocampal place cells [3], as well as to extensions, in particular to the case of multiple attractor embeddings [4; 5; 6]. While indirect evidence for the existence of CANN could be found in recordings of activity in the hippocampus [7], in the entorhinal [8] and the prefrontal cortex [9], a direct and beautiful observation of ring attractor coding for head direction was obtained only recently in the ellipsoid body of the fly [10].
From a theoretical point of view, CANN models rely on recurrent excitatory interactions between neurons active for similar values of the encoded variable, _e.g._ the position of the animal in physical space, together with a long-range inhibition preventing all cells from being active together. This combination of local positive interactions and global inhibition creates a localized bump of activity, whose center of mass reliably represents the collective variable. In this regard, a crucial condition is that the bump can be easily moved (under weak external, sensory, inputs) to span the continuous set of values of the variable. This condition imposes that the short-range connections are finely tuned, so that the model is effectively translation invariant.
When the fine-tuning condition breaks down, _e.g._ due to random modulations of the interactions, the bump can get stuck in the absence of neural noise [3]. In practice, quenched noise in the interactions can come from imperfect learning of one environment, or from interferences resulting from other encoded information (maps, objects distorting the map locally, ...). Quantifying the loss in the accuracy of information storage resulting from heterogeneities in the interactions is an important issue.
We address this question here in the framework of decoding of information, based on analytical and numerical calculations. We propose an analytically tractable model of binary (active/silent) neurons receiving position-dependent inputs and connected to each other through spatially coherent, short-range interactions, on top of a disordered and incoherent background. Using the replica method we compute the Fisher information in the high-dimensional neural activity about the encoded position as a function of the intensity of the disordered interactions. This quantity was identified as a measure that is both relatively easy to compute for many systems and objectively quantifies the information contained in the neural activity about the stimulus (an orientation or a point in space) [11]. It is more appropriate for this quantification than, for example, the readout of the center of mass of a bump of activity [12]. Yet, the Fisher information is not an information measure in the sense of Shannon. From this point of view, the mutual information between the stimulus and the neural activity is the quantity we are eventually interested in [13]. However, it is a global quantity, integrated over all possible stimuli, and its computation is generally more difficult than that of the Fisher information, which puts restrictions on the systems it can be calculated for [14]. In the thermodynamic limit, the mutual information can be obtained from the Fisher information under the condition that the correlations are not
too strong [15]. We have explicitly checked that this prerequisite is fulfilled by our model, so that the computation of the Fisher information is actually sufficient. For a recent review discussing both measures of information and their use in neuroscience, see e.g. [16].
The paper is organized as follows. In sec. II, our model is introduced. We establish the Fisher information as a means to quantify the information contained in the neural activity about the stimulus, together with its relation to other information-theoretic measures, compute it in the thermodynamic (mean-field) limit and derive its analytical properties in the limiting case of weak connection strengths. In sec. III, we validate our mean-field results by means of Monte-Carlo calculations, study the dependence of the Fisher information on changes in the recurrent and feed-forward connectivity, and examine how close a linear readout can come to the bound predicted by the Fisher information. In sec. IV, we put our results into context and give an outlook on possible future directions.
## II Model and methods
### Distribution of neural activities
We model neurons as binary units, taking the value 1 when active and 0 when inactive. Each neuron receives a 'sensory' input, whose value depends on the mismatch between the position \(r_{i}\) in physical space it is maximally responding to and the position of the 'animal'. The probability distribution of activities is governed by a Boltzmann law \(P\left(\mathbf{n}|\xi\right)\sim e^{-E\left[\mathbf{n}\right]}\) and the energy
\[E\left[\mathbf{n}\right]=-\frac{1}{2}\sum_{i\neq j}\left(J_{ij}+K_{ij}\right)n _{i}n_{j}-\sum_{i}n_{i}\,\mathrm{U}\left(\xi-r_{i}\right), \tag{1}\]
where \(K_{ij}=K\left(r_{i}-r_{j}\right)\) is the local part of the interaction that we assume to be decaying in space with a typical length scale \(w_{\mathrm{rec}}\) and strength \(K_{\mathrm{rec}}\). \(J\) is the disordered part of the connectivity with the statistics
\[\left\langle J_{ij}\right\rangle=0,\,\,\left\langle J_{ij}J_{i^{\prime}j^{ \prime}}\right\rangle=\frac{g^{2}}{2N}\delta_{ii^{\prime}}\delta_{jj^{\prime}} \tag{2}\]
and U mimics the space-dependent input, which we model by
\[\mathrm{U}\left(\Delta x\right)=\mathrm{U}_{\mathrm{inp}}\exp\left(-\frac{ \Delta x^{2}}{w_{\mathrm{inp}}^{2}}\right). \tag{3}\]
Figure 1: Scheme of the network model employed in this work. The bell-shaped curves represent the space-sensitive single-neuron (feedforward) input and the red circle a symbolic inhibitory neuron that ensures that the summed activity of all neurons is constant.
We are assuming periodic boundary conditions. As a simple way to model global inhibition, we impose the constraint that the summed activity is fixed to \(\sum_{i}n_{i}=f\cdot N=M\), where \(f\in[0,1]\). Our model is closely related to that of [17], where the case of a proper continuous attractor neural network (CANN) is studied, so the connectivity matrix is composed of a sum of local connectivities in different environments. However, in this study we are interested in the effect of the disorder on the information content in the neural activity for a single environment. Also, the presence of other maps is not the only source of disorder as there is always some variability in the connectivity. Therefore, to simplify the setup, we content ourselves with approximating the disordered contribution to the connections as Gaussian, which also corresponds to the high-temperature behaviour of the disorder in [17].
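To make the setup concrete, the following minimal sketch (in Python, with hypothetical helper names; the \(1/N\) scaling of \(K\) and the symmetrization of \(J\) are our own discretization conventions, not spelled out in the text) assembles the ingredients of eqs. (1) to (3): a ring of \(N\) neurons with the rectangular local coupling of appendix A, a Gaussian background with the statistics of eq. (2), and the Gaussian feedforward input.

```python
import numpy as np

def build_model(N=100, K_rec=5.0, w_rec=0.1, g=0.5, seed=0):
    """Couplings of eq. (1): rectangular local part K (appendix A) plus a
    symmetric Gaussian background J with <J_ij^2> = g^2/(2N), cf. eq. (2)."""
    rng = np.random.default_rng(seed)
    r = np.arange(N) / N                        # preferred positions on [0, 1)
    d = np.abs(r[:, None] - r[None, :])
    d = np.minimum(d, 1.0 - d)                  # periodic boundary conditions
    K = (K_rec / N) * (d <= w_rec)              # 1/N scaling: our convention
    np.fill_diagonal(K, 0.0)
    J = np.triu(rng.normal(0.0, g / np.sqrt(2 * N), (N, N)), 1)
    J = J + J.T                                 # symmetric disorder, zero diagonal
    return r, K, J

def input_U(xi, r, U_inp=2.25, w_inp=0.07):
    """Space-dependent input of eq. (3), with periodic distance on the ring."""
    dx = np.abs(xi - r)
    dx = np.minimum(dx, 1.0 - dx)
    return U_inp * np.exp(-dx ** 2 / w_inp ** 2)

def energy(n, K, J, U):
    """Energy of eq. (1) for a binary configuration n (the constraint
    sum(n) = f*N is enforced by the sampler, not here)."""
    W = K + J
    return -0.5 * n @ W @ n - n @ U
```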
### Fisher information and mutual information
We now want to quantify the amount of information contained in the neural activity. One possibility to do so is to compute the Fisher information for a given stimulus \(\xi\),
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)\coloneqq\left\langle-\frac{\partial^{2} }{\partial\xi^{2}}\ln P\left(\mathbf{n}|\xi\right)\right\rangle_{\mathbf{n}}, \tag{4}\]
a standard measure for the quantification of information in neural populations [12; 16]. According to the Cramer-Rao bound, its inverse gives a lower bound on the variance of any unbiased estimator of \(\xi\) [13]. We will discuss this relation in greater depth in section III.3. Furthermore, in the thermodynamic limit that we are interested in, it also determines the mutual information, a connection first established in [15] and later refined in [18]. The mutual information is given by the decrease of the entropy of the neural activity due to the knowledge of the stimulus, concretely
\[\text{MI} \coloneqq-\sum_{\mathbf{n}}P\left(\mathbf{n}\right)\ln P\left(\mathbf{n} \right)+\int d\xi\,p\left(\xi\right)\sum_{\mathbf{n}}P\left(\mathbf{n}|\xi\right)\, \ln P\left(\mathbf{n}|\xi\right)\] \[\text{where }P\left(\mathbf{n}\right) \coloneqq\int d\xi\,p\left(\xi\right)P\left(\mathbf{n}|\xi\right).\]
In [15] (their eq. (13)), the relation
\[\text{MI} =-\int d\xi\,p\left(\xi\right)\ln p\left(\xi\right)+\int d\xi\,p \left(\xi\right)\,\frac{1}{2}\ln\left(\frac{\mathcal{I}\left(\xi\right)}{2\pi e}\right)+ \mathcal{O}\left(\frac{1}{N}\right) \tag{5}\]
was derived for an ensemble of neurons with fixed covariances, without disorder. However, in this study we are limiting ourselves to the saddle-point approximation of the Fisher information, which is valid up to corrections of order \(1/N\) as well. The presence of disorder therefore does not change much, and we obtain
\[\left\langle\text{MI}\right\rangle_{J}=-\int d\xi\,p\left(\xi\right)\ln p \left(\xi\right)+\int d\xi\,p\left(\xi\right)\,\frac{1}{2}\ln\left(\frac{\left\langle \mathcal{I}\left(\xi\right)\right\rangle_{J}}{2\pi e}\right)+\mathcal{O}\left( \frac{1}{N}\right), \tag{6}\]
where \(\left\langle\dots\right\rangle_{J}\) is indicating the average over the disordered connectivity \(J\). In appendix D, we rederive this relation, for unconnected neurons, but more directly than in [15].
Hereafter, we will focus on the Fisher information, which is easier to obtain than the mutual information; the latter then follows for free via eq. (6). Determining the Fisher information for our model, we obtain from eq. (4), after some lines of computation detailed in appendix B,
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)= \sum_{i,j}\mathrm{U}^{\prime}\left(\xi-r_{i}\right)\left\langle \left[\left\langle n_{i}n_{j}\right\rangle_{\mathbf{n}}-\left\langle n_{i}\right \rangle_{\mathbf{n}}\left\langle n_{j}\right\rangle_{\mathbf{n}}\right]\right\rangle_{J }\mathrm{U}^{\prime}\left(\xi-r_{j}\right) \tag{7}\] \[= \left[\mathbf{U}^{\prime}\left(\xi-\mathbf{r}\right)\right]^{\mathrm{T}} C\mathbf{U}^{\prime}\left(\xi-\mathbf{r}\right), \tag{8}\]
where \(C\) denotes the disorder-averaged covariance matrix of \(\mathbf{n}\). Furthermore, conditioned on one realization of the disorder, we have introduced the thermal average
\[\left\langle f\left(\mathbf{n}\right)\right\rangle_{\mathbf{n}}\coloneqq\frac{1}{ \mathcal{Z}_{J}\left(\xi\right)}\sum_{\mathbf{n}}f\left(\mathbf{n}\right)e^{-E\left[ \mathbf{n}\right]}, \tag{9}\]
for some function \(f\), together with the partition function
\[\mathcal{Z}_{J}\left(\xi\right)\coloneqq\sum_{\mathbf{n}}e^{-E\left[\mathbf{n}\right]}. \tag{10}\]
Eq. (7) can be brought into a more familiar form by noting that
\[\frac{\partial}{\partial\xi}\mathrm{T}\left(\xi-r_{i}\right) \coloneqq\frac{\partial}{\partial\xi}\left\langle n_{i}\right\rangle _{J}=\left\langle n_{i}\left[\sum_{j}\mathrm{U}^{\prime}\left(\xi-r_{j} \right)\left(n_{j}-\left\langle n_{j}\right\rangle\right)\right]\right\rangle_{J }=\left(C\mathbf{U}^{\prime}\right)_{i} \tag{11}\] \[\Leftrightarrow\mathrm{U}^{\prime}\left(\xi-r_{i}\right) =\left(C^{-1}\mathbf{T}^{\prime}\right)_{i}, \tag{12}\]
where we have introduced the tuning curve \(\mathrm{T}\) of neuron \(i\) indicating its average activity given the input \(\xi\). With this, the Fisher information can be written as [14]
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)=\left[\mathbf{T}^{\prime}\left(\xi \right)\right]^{\mathrm{T}}C^{-1}\mathbf{T}^{\prime}\left(\xi\right). \tag{13}\]
This form is more convenient when dealing with experimental data because the tuning curve is (in principle) a directly measurable quantity, whereas \(\mathrm{U}^{\prime}\) is not. For our purposes, however, the form of eq. (7) is more practical because there, the only quantity depending on the disorder is the covariance matrix. We therefore only have to compute the disorder average of the covariance matrix, which we will tackle in the following.
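As a small illustration of eq. (7), the sketch below (hypothetical helper, same conventions as the model sketch above) evaluates \(\mathcal{I}_{\mathbf{n}}\left(\xi\right)\) from a given covariance matrix \(C\) and the derivative of the Gaussian input of eq. (3); the equivalent form of eq. (13) would instead combine measured tuning-curve derivatives with \(C^{-1}\).

```python
import numpy as np

def fisher_information(xi, r, C, U_inp=2.25, w_inp=0.07):
    """Eq. (7): I(xi) = U'(xi - r)^T C U'(xi - r), with U the input of eq. (3)."""
    dx = xi - r
    dx -= np.round(dx)                          # wrap differences to (-1/2, 1/2]
    Uprime = U_inp * np.exp(-dx ** 2 / w_inp ** 2) * (-2.0 * dx / w_inp ** 2)
    return Uprime @ C @ Uprime
```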
### Disorder-averaged statistics
As usual for disordered systems [19], we determine the statistics from the logarithm of the partition function, the cumulant-generating functional (or Gibbs free energy)
\[\left\langle W\left(\mathbf{h}\right)\right\rangle_{J}= \int dJ\,P\left(J\right)\ln\left[\sum_{\mathbf{n},\,\sum_{i}n_{i}=M}e^ {\frac{1}{2}\sum_{i\neq j}(J_{ij}+K_{ij})n_{i}n_{j}+\sum_{i}n_{i}[\mathrm{U}( \xi-r_{i})+h_{i}]}\right]. \tag{14}\]
The computation of \(W\) proceeds along the classical lines [20], with the difference that we have a local connectivity. We therefore not only introduce the auxiliary Gaussian field \(q\) to decouple the four-point terms emerging from the disorder average, but also a space-dependent (also Gaussian) order parameter \(\phi_{x}\) to decouple the local term \(\mathbf{n}^{\mathrm{T}}K\mathbf{n}\). As apparent from the saddle-point equations (18) and (19), \(q\) quantifies the population-averaged variance of the activity and \(\phi_{x}\) the population-averaged input to the neuron with place field at position \(x\). Furthermore, due to the restriction on the summed activity, we introduce the Lagrange multiplier \(\lambda\). As derived in appendix B, we obtain the disorder-averaged cumulant-generating function in the thermodynamic limit \(N\to\infty\)
\[\left\langle W\left(\mathbf{h}\right)\right\rangle_{J}= \operatorname*{extr}_{q,\bar{q},\mathbf{\phi},\lambda} \left\{\frac{1}{2}Ng^{2}q^{2}-\frac{1}{2}Ng^{2}\bar{q}^{2}-\frac{1}{2}\mathbf{ \phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}-N\left(\lambda-g^{2}\left(\bar{q}-q\right)\right)f\right. \tag{15}\] \[\left.+\int\mathcal{D}t\,\sum_{x}\ln\left[1+e^{\phi_{x}+t_{x}g\sqrt{2q}+\mathrm{U}\left(\xi-x\right)+ \lambda+h_{x}}\right]\right\}\] (16) \[= G\left(\mathbf{h},\mathbf{\phi},q,\lambda\right), \tag{17}\]
where the "extr." implies a supremum over \(q\), \(\bar{q}\), \(\mathbf{\psi}\) and \(\mathbf{\phi}\) and an infimum over \(\lambda\). We comment on the latter point in appendix A.1. As detailed in appendix A.2, we obtain the saddle-point equations
\[q =\int dx\,\int Dt_{x}\,\frac{1}{\left[1+e^{-\left(\phi_{x}+t_{x} g\sqrt{2q}+\mathrm{U}(\xi-x)+\lambda\right)}\right]^{2}}, \tag{18}\] \[\phi_{x} =\int dy\,K\left(x-y\right)\int Dt_{y}\,\frac{1}{1+e^{-\left(\phi _{y}+t_{y}g\sqrt{2q}+\mathrm{U}(\xi-y)+\lambda\right)}}.\] (19) \[\text{and }f =\int dx\,\int\mathcal{D}t_{x}\,\frac{1}{1+e^{-\left(\phi_{x}+t_{ x}g\sqrt{2q}+\mathrm{U}(\xi-x)+\lambda\right)}} \tag{20}\]
with the \(N\)-fold Gaussian measure
\[\int\mathcal{D}t=\frac{1}{\left(2\pi\right)^{\frac{N}{2}}}\int dt _{x_{1}}e^{-\frac{t_{x_{1}}^{2}}{2}}\cdots\int dt_{x_{N}}e^{-\frac{t_{x_{N}}^{2 }}{2}}. \tag{21}\]
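Numerically, eqs. (18) to (20) can be solved by a damped fixed-point iteration, with the Gaussian averages \(\int\mathcal{D}t\) evaluated by Gauss-Hermite quadrature. The sketch below is one possible such scheme (the damping, the gradient step for \(\lambda\) and the grid conventions are our choices; convergence is not guaranteed for all parameter values):

```python
import numpy as np

def solve_saddle_point(U, Kmat, f, g, n_iter=2000, eta=0.1, n_gh=41):
    """Damped fixed-point iteration for the saddle-point eqs. (18)-(20).
    U: input U(xi - x_i) on N grid points covering [0, 1); Kmat: the matrix
    K(x_i - x_j)/N, so that Kmat @ y approximates the y-integral of eq. (19)."""
    u, w = np.polynomial.hermite.hermgauss(n_gh)
    t = np.sqrt(2.0) * u                  # nodes for the unit-variance Gaussian
    w = w / np.sqrt(np.pi)                # weights normalized so that sum(w) = 1
    N = len(U)
    q, phi, lam = f, np.zeros(N), 0.0
    for _ in range(n_iter):
        h = phi[:, None] + g * np.sqrt(2.0 * q) * t[None, :] + U[:, None] + lam
        m = 1.0 / (1.0 + np.exp(-np.clip(h, -60, 60)))   # m_x(t), cf. eq. (27)
        m_bar = m @ w                                    # int Dt m_x
        q = (1.0 - eta) * q + eta * np.mean((m ** 2) @ w)    # eq. (18)
        phi = (1.0 - eta) * phi + eta * (Kmat @ m_bar)       # eq. (19)
        lam += eta * (f - np.mean(m_bar))                    # enforce eq. (20)
    v = (m * (1.0 - m)) @ w               # single-neuron variances, eq. (26)
    return q, phi, lam, v
```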
The entire statistics of our system can now be determined by taking derivatives of \(G\) with respect to \(\mathbf{h}\), which is set to \(0\) afterwards. We have to calculate the total derivative, also taking into account the \(\mathbf{h}\)-dependence of \(q\), \(\mathbf{\phi}\) and \(\lambda\), which in turn, by the implicit-function theorem, we obtain by taking the total derivative of their saddle-point equations with respect to \(\mathbf{h}\). This yields
\[\frac{1}{N}\frac{d^{2}}{d\mathbf{h}^{2}}\left\langle W_{f}\left(\mathbf{h}\right) \right\rangle_{J}=\frac{\partial^{2}G}{\partial\mathbf{h}^{2}}-\begin{pmatrix}\frac{\partial^{2}G}{\partial\mathbf{h}\,\partial\mathbf{\phi}}\\ \frac{\partial^{2}G}{\partial\mathbf{h}\,\partial q}\\ \frac{\partial^{2}G}{\partial\mathbf{h}\,\partial\lambda}\end{pmatrix}^{\rm T}\begin{pmatrix}\frac{\partial^{2}G}{\partial\mathbf{\phi}^{2}}&\frac{\partial^{2}G}{\partial\mathbf{\phi}\,\partial q}&\frac{\partial^{2}G}{\partial\mathbf{\phi}\,\partial\lambda}\\ \frac{\partial^{2}G}{\partial q\,\partial\mathbf{\phi}}&\frac{\partial^{2}G}{\partial q^{2}}&\frac{\partial^{2}G}{\partial q\,\partial\lambda}\\ \frac{\partial^{2}G}{\partial\lambda\,\partial\mathbf{\phi}}&\frac{\partial^{2}G}{\partial\lambda\,\partial q}&\frac{\partial^{2}G}{\partial\lambda^{2}}\end{pmatrix}^{-1}\begin{pmatrix}\frac{\partial^{2}G}{\partial\mathbf{\phi}\,\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial q\,\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial\lambda\,\partial\mathbf{h}}\end{pmatrix}. \tag{22}\]
Evaluating this expression, we obtain
\[C =V+VK_{\rm eff}V+C^{\rm indirect} \tag{23}\] \[=V\left(\mathbb{1}-KV\right)^{-1}+C^{\rm indirect}, \tag{24}\]
where \(V\) is the diagonal matrix with the disorder-averaged single-neuron variances
\[V_{xy} =\delta_{xy}v_{x}, \tag{25}\] \[\text{where }v_{x} \coloneqq\int\mathcal{D}t\frac{\partial m_{x}}{\partial\phi_{x}}= \int\mathcal{D}t\,m_{x}\left(1-m_{x}\right), \tag{26}\]
\(m_{x}\) is the magnetization conditioned on the auxiliary Gaussian variable \(t\),
\[m_{x}\coloneqq\frac{1}{1+e^{-\left[\phi_{x}+t\,g\sqrt{2q}+\mathrm{U}\left( \xi-x\right)+\lambda\right]}}, \tag{27}\]
the effective local connectivity \(K_{\rm eff}\) is given by \(\left[\left(K^{\rm eff}\right)^{-1}\right]_{xy}\coloneqq-\frac{\partial^{2}G} {\partial\phi_{x}\partial\phi_{y}}\) and fulfills the Dyson equation
\[K_{xy}^{\rm eff}=K_{xy}+\int dz\,K_{xz}v_{z}K_{zy}^{\rm eff}, \tag{28}\]
and \(C^{\rm indirect}\) emerges from the remaining part of the Hessian in eq. (22). It results in a subleading contribution to the Fisher information (see below), so we give its precise form only in appendix B.
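Given the saddle-point variances \(v_{x}\), the leading part of the disorder-averaged covariance is plain linear algebra; a sketch under the same grid conventions as above (the \(1/N\) factor standing in for \(\int dz\) is our discretization choice, and the subleading \(C^{\rm indirect}\) is dropped):

```python
import numpy as np

def mean_field_covariance(v, A):
    """Leading covariance C = V + V K_eff V of eq. (23), with K_eff solving the
    Dyson equation (28); v: variances v_x, A: kernel matrix K(x_i - x_j)."""
    N = len(v)
    K_eff = np.linalg.solve(np.eye(N) - (A * v[None, :]) / N, A)  # eq. (28)
    return np.diag(v) + v[:, None] * K_eff * v[None, :]           # eq. (23)
```

Feeding this \(C\) into the Fisher-information sketch of sec. II.B reproduces the first two terms of eq. (29).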
### Disorder-averaged Fisher information
The Fisher information per neuron averaged over the disorder now reads
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)=\sum_{x}\,\sum_{y}\,\mathrm{U}^{\prime} \left(\xi-x\right)\left[v_{x}\delta_{x,y}+v_{x}K_{xy}^{\rm eff}v_{y}+C_{xy}^{ \rm indirect}\right]\mathrm{U}^{\prime}\left(\xi-y\right) \tag{29}\]
In fig. (6) in the appendix, we show these three contributions separately for the parameters used for fig. (3). The first term stems from the single-neuron variances and is therefore also present without a network (though, if the network is present, the variances are affected by it). The third is always negligible, which intuitively makes sense because it emerges from the indirect \(\mathbf{h}\)-dependence of the free energy via \(q\) and \(\lambda\), which are both spatially unstructured. The second term emerges from the (positive) local interactions and also contributes positively. In order to gain a better intuition for where this term comes from, it is useful to re-derive the expression for the Fisher information using eq. (13), limiting ourselves to the case without disorder. By some lines of rearrangements, shown in appendix B.2, we derive that the vector of tuning curves can be expressed as
\[\mathbf{T}^{\prime}=V\left(1+K_{\rm eff}V\right)\mathbf{U}^{\prime}. \tag{30}\]
Combining this expression with eq. (13) and \(C=V+VK_{\rm eff}V\), we recover eq. (29) (without the contribution of the disorder) after canceling a factor of \(\left(V+VK_{\rm eff}V\right)\), as it should be. However, it is also insightful to write down the expression before this cancellation,
\[\mathcal{I}_{\mathbf{n}}=\overbrace{\left(\mathbf{U}^{\prime}\right)^{\rm T}\left(V+VK_{\rm eff}V\right)}^{=\left(\mathbf{T}^{\prime}\right)^{\rm T}}\,\overbrace{\left[V+VK_{\rm eff}V\right]^{-1}}^{=C^{-1}}\,\overbrace{\left(V+VK_{\rm eff}V\right)\mathbf{U}^{\prime}}^{=\mathbf{T}^{\prime}}, \tag{31}\]
because it gives an intuition about how the local connectivity shapes the Fisher information: first, it modifies the tuning curves, which is captured by the term \(VK_{\text{eff}}V\mathrm{U}^{\prime}\); second, it introduces cross-covariances, which is captured by the term \(VK_{\text{eff}}V\) contributing to the covariance. As apparent from fig. (2), panel a, the tuning curves are sharpened with increasing \(K_{\text{rec}}\), which is reflected by the fact that the direct contribution of the cross-covariances to the Fisher information is positive, see fig. (6). The cross-covariances, in turn, are detrimental in our case: they reduce the Fisher information, as apparent from panel b in fig. (2).
We can study eq. (29) further analytically, which amounts to examining the saddle-point equations (18) to (20). In particular, we can do this in the limiting case \(g\to 0\). In this limit, the Gaussian integrals become trivial. As detailed in appendix A.3, we can use for the study of the derivatives of \(q\), \(\mathbf{\phi}\) and \(\lambda\) that, in eqs. (18) to (20), these quantities are (implicitly) given by
\[0=\frac{\partial}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]}G_{g}\left(q,\mathbf{\phi},\lambda\right), \tag{32}\]
with \(G\) as given in eq. (17) and therefore, by the implicit-function theorem,
\[\frac{\partial}{\partial g}\begin{pmatrix}\left\{\phi_{x}\right\}_{x}\\ \lambda\\ q\end{pmatrix}=-\left(\frac{\partial^{2}}{\partial\left[\left\{\phi_{x} \right\}_{x},\lambda,q\right]^{2}}G_{g}\left[q,\mathbf{\phi},\lambda\right] \right)^{-1}\frac{\partial^{2}}{\partial g\partial\left[\left\{\phi_{x} \right\}_{x},\lambda,q\right]}G_{g}\left[q,\mathbf{\phi},\lambda\right]. \tag{33}\]
For a reasonable choice of the parameters, also guaranteeing the stability of the saddle-point solution, \(\frac{\partial^{2}}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]^{2}}G_{g}\) is invertible (which we also check numerically by computing it explicitly in appendix B). The partial derivative of \(\partial_{\left\{\phi_{x}\right\}_{x},\lambda,q}G\) with respect to \(g\) vanishes in the limit \(g\to 0\), as we show in appendix A.3. Therefore, the derivatives of \(q\), \(\mathbf{\phi}\) and \(\lambda\) with respect to \(g\) go to \(0\) for vanishing \(g\). This result carries over to higher-order cumulants and to the cumulant-generating functional itself. Because the Fisher information depends on \(g\) only via these quantities, its derivative vanishes in the limit of \(g\to 0\):
\[\lim_{g\to 0}\frac{\partial}{\partial g}\mathcal{I}_{\mathbf{n}}\left(\xi\right)=0. \tag{34}\]
The derivatives with respect to the strength of the local interaction, \(K_{\text{rec}}\), however, in general do not vanish for vanishing connectivity. Therefore even small connection strengths will have a (beneficial) effect, as seen before.
Figure 2: Panel a: tuning curves for a network without disorder in the connectivity for different strengths of local interactions. Panel b: corresponding change in the Fisher information; the dotted lines show eq. (31) with the term \(VK_{\text{eff}}V\) in the middle part, constituting \(C^{-1}\), removed. Parameters: \(g=0\), \(w_{\text{rec}}=0.1\), \(\text{U}_{\text{inp}}=0.2\), \(w_{\text{inp}}=0.07\), \(f=0.15\).
## III Numerical validation of mean-field results and applications
### Monte Carlo simulation
To validate our computation derived for the thermodynamic limit, we perform Monte-Carlo simulations for multiple sets of parameters, see fig. (3). We use a standard Metropolis algorithm, taking into account the condition of a fixed total activity by always flipping two spins in opposite directions, as suggested in [17]. The results confirm our mean-field finding that the disorder diminishes the Fisher information, but only quite slowly if the disorder is on a moderate level, as predicted by eq. (34) and visible in fig. (3).
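A minimal sketch of such a sampler (the exchange move deactivates one active neuron and activates one silent neuron simultaneously, so \(\sum_{i}n_{i}\) is conserved; function name, bookkeeping and default parameters are our own, and an initial burn-in portion of the samples should be discarded):

```python
import numpy as np

def sample_fixed_activity(W, U, f, n_samples=200, sweeps_between=5, seed=0):
    """Metropolis sampling of P(n|xi) ~ e^{-E[n]}, eq. (1), at fixed total
    activity. W = K + J must be symmetric with zero diagonal."""
    rng = np.random.default_rng(seed)
    N = len(U)
    n = np.zeros(N)
    n[rng.choice(N, int(f * N), replace=False)] = 1.0
    field = W @ n + U                             # local fields sum_j W_ij n_j + U_i
    samples = []
    for s in range(n_samples * sweeps_between * N):
        i = rng.choice(np.flatnonzero(n == 1.0))  # candidate to switch off
        j = rng.choice(np.flatnonzero(n == 0.0))  # candidate to switch on
        dE = field[i] - field[j] + W[i, j]        # energy change of the exchange
        if dE < 0 or rng.random() < np.exp(-dE):
            n[i], n[j] = 0.0, 1.0
            field += W[:, j] - W[:, i]            # update fields incrementally
        if (s + 1) % (sweeps_between * N) == 0:
            samples.append(n.copy())
    return np.array(samples)
```

The sample mean and the empirical covariance of the returned configurations then estimate the tuning curves and the covariance matrix entering eq. (7).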
### Influence of the network on the Fisher information
In an attractor network, disorder may result from the presence of other maps stored in the same network. Therefore it scales in the same way as the spatially dependent part of the connectivity. It is thus interesting to examine the behavior of the Fisher information when both parts of the connectivity are scaled by the same factor \(r\):
\[K\to r\cdot K,\quad J\to r\cdot J \tag{35}\]
Because the derivative of the expression for the Fisher information with respect to the disorder strength \(g\) vanishes for \(g=0\), the effect of the local part of the connectivity dominates for small synaptic strength: the Fisher information initially increases. This can be understood from what we have derived before: increasing the local connectivity sharpens the tuning curves and therefore increases the signal. This effect is diminished (but not cancelled) by the introduction of covariances between the neurons (see also [12]). For larger scaling factors, however, this overall beneficial effect is wiped out by the disorder, whose detrimental effect eventually dominates, see fig. (4), panel a.
Finally, we ask if we can keep the Fisher information constant by increasing the recurrent weights when the input gets weaker. We have plotted lines of constant Fisher information for varying strength of the input and the recurrent connections in fig. (4), panel b. We can indeed make up for a decrease in the input by strengthening local connections, even though of course only in a limited range.
### Linear readout
To put our results into context and convey a more intuitive understanding, we briefly discuss what one can learn from the Fisher information about the accuracy of a linear readout.
Figure 3: The Fisher information per neuron in dependence of the disorder, comparison with Monte-Carlo simulations. In panel a, the network is strongly input driven, and in panel b only weakly. Parameters, panel a: \(K_{\rm rec}=5\), \(w_{\rm rec}=0.1\), \(\rm U_{\rm inp}=2.25\), \(w_{\rm inp}=0.07\), \(f=0.15\); panel b: \(K_{\rm rec}=20\), \(\rm U_{\rm inp}=0.2\), other parameters as in panel a.

We have fitted a readout vector \(\mathbf{w}\) to the activity as measured after the thermalization process (so the random initial conditions should not play a role) and computed the squared residual error as
\[\left\langle\mathrm{res}\left(\xi\right)\right\rangle_{\xi}\coloneqq \min_{\mathbf{w}}\left\langle\left\langle\left\|\mathbf{w}\cdot\mathbf{n}-\xi\right\|_{ 2}^{2}\right\rangle_{\mathbf{n}}\right\rangle_{\xi} =\left\langle\xi^{2}\right\rangle_{\xi}-2\mathbf{w}_{\mathrm{train} }^{\mathrm{T}}C_{\xi,\mathbf{n}}^{\mathrm{test}}+\mathbf{w}_{\mathrm{train}}^{ \mathrm{T}}C_{\mathbf{n},\mathbf{n}}^{\mathrm{test}}\mathbf{w}_{\mathrm{train}} \tag{36}\] \[\text{where }\mathbf{w}_{\mathrm{train}} =\left[C_{\mathbf{n},\mathbf{n}}^{\mathrm{train}}+\lambda\cdot\mathds{1 }\right]^{-1}C_{\xi,\mathbf{n}}^{\mathrm{train}}, \tag{37}\]
where \(\left\langle\dots\right\rangle_{\mathbf{n}}\) and \(\left\langle\dots\right\rangle_{\xi}\) denote the thermal average over the configurations \(\mathbf{n}\) and the average over the distribution of the stimulus \(\xi\), respectively, and \(\lambda\) denotes the strength of the \(L_{2}\) regularization that we impose. We might expect to obtain an upper bound for the accuracy of the linear estimator by the Cramer-Rao bound. However, due to the periodic boundary conditions, the estimate from the linear readout \(\xi_{\mathrm{est.}}=\mathbf{w}\cdot\mathbf{n}\) will be biased. This is particularly apparent at the borders \(0\) and \(1\), where the estimate will always be \(\frac{1}{2}\), corresponding to random guessing. The farther away from them the stimulus is situated, the less pronounced the effect becomes. We have therefore limited the fitted stimuli to \(\xi\in\left[0.4,0.6\right]\). However, even in this regime, the linear readout is biased in the highly disordered regime, so that there the Cramer-Rao bound only applies in its generalized form [13], their eq. (12.333):
\[\left\langle\left(\mathbf{w}\cdot\mathbf{n}-\xi\right)^{2}\right\rangle_{\mathbf{n}}\geq \frac{\left(1+b^{\prime}\left(\xi\right)\right)^{2}}{\mathcal{I}_{\mathbf{n}}\left( \xi\right)}+b\left(\xi\right)^{2}, \tag{38}\]
Figure 4: Interplay of feedforward, local and disordered recurrent input shaping the Fisher information. For panel a, we scale the synapses according to eq. (35), keeping the other parameters fixed; for panel b, we keep the Fisher information constant, varying \(\mathrm{U}_{\mathrm{inp}}\) and \(K_{\mathrm{rec}}\) concertedly. Parameters, panel a: \(K_{\mathrm{rec}}^{\mathrm{max}}=8\), \(w_{\mathrm{rec}}=0.1\), \(\mathrm{U}_{\mathrm{inp}}=0.2\), \(w_{\mathrm{inp}}=0.07\), \(f=0.15\), \(g^{\mathrm{max}}=0.16\); panel b: \(g=5\), other parameters as in panel a.
Figure 5: Inverse residual of a linear fit of the neural activity to estimate \(\xi\), averaged over \(\xi\in\left[0.4,0.6\right]\). Only for small \(g\) (approximately the first two data points, at \(0.1\) and \(0.5\)) is this estimate approximately unbiased, and therefore only there does the Cramer-Rao bound guarantee that the Fisher information is an upper bound. Note that in this plot, unlike in all the others, we are plotting the total Fisher information for all \(100\) neurons, not the averaged, single-neuron analog. Parameters as in fig. 3, panel a.
where \(b\left(\xi\right)\coloneqq\left\langle\mathbf{w}\cdot\mathbf{n}\right\rangle_{\xi}-\xi\) is the bias of the linear estimator. In case of random guessing, in particular, \(b^{\prime}\left(\xi\right)=-1\), so that the error is solely determined by the square of the bias. Therefore, the bound given by the Fisher information in fig. (5) is only meaningful for small disorder (\(g\sim 0.1,0.5\)), whereas for greater disorder, it is invalidated by the bias, up to the point where the linear readout basically generates a random guess (for \(g\sim 2\)). In the low-disorder regime, however, we observe that the error of the linear readout is not far from the optimal case given by the Cramer-Rao bound.
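For completeness, a sketch of the regularized readout of eqs. (36) and (37) (centering activities and stimuli to handle the intercept is our implementation choice; `X_train` and the other arguments stand for hypothetical activity samples collected at known stimuli \(\xi\in\left[0.4,0.6\right]\)):

```python
import numpy as np

def linear_readout_error(X_train, xi_train, X_test, xi_test, ridge=1e-3):
    """Ridge-regressed linear readout, eqs. (36)-(37): fit w on training
    samples (rows of X: configurations n), evaluate on held-out samples."""
    x_mean, xi_mean = X_train.mean(0), xi_train.mean()
    Xc, yc = X_train - x_mean, xi_train - xi_mean
    C_nn = Xc.T @ Xc / len(xi_train)          # activity covariance (train)
    C_xn = Xc.T @ yc / len(xi_train)          # stimulus-activity cross-covariance
    w = np.linalg.solve(C_nn + ridge * np.eye(X_train.shape[1]), C_xn)  # eq. (37)
    xi_est = (X_test - x_mean) @ w + xi_mean  # linear estimate of the stimulus
    return np.mean((xi_est - xi_test) ** 2)   # squared residual, cf. eq. (36)
```

Comparing the inverse of this residual with the Fisher information mirrors fig. (5); as discussed above, the comparison is only meaningful where the estimator is close to unbiased.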
## IV Discussion and outlook
In this work, we have studied an attractor-inspired neural network with a connectivity consisting of two parts: (1) excitatory couplings between neurons that are similarly tuned to stimuli, and (2) a quenched random background without local tuning. We have studied the influence of both of these contributions on the information about the stimulus contained in the neural activity through the analytical computation of the Fisher information. As expected, the local part of the connectivity enhances the information content, whereas the disordered part degrades it. However, the latter effect is mild. By fitting a linear readout to estimate the driving stimulus, we show that the Fisher information is not only a formal estimate of the information contained in the neural activity, but also gives a useful bound on how much of it can be extracted with simple decoders.
It has been recognized for a long time that the presence of disorder in the interactions could impact the information stored in attractor networks in the form of patterns. In the case of CANNs this translates into a breaking of the translational symmetry of bump-like solutions, see for instance [3] and box 3 in [21]. However, despite the loss of translation invariance, noise in the neural activity (controlled here by the inverse of the coupling strength, playing the role of temperature in statistical mechanics) may be sufficient to move the bump [22]. We here show that the disorder does not wipe out all information in the attractor network. In particular, the Fisher information is robust to the introduction of disorder, staying constant to first order in \(g\). Consequently, globally enhancing the connectivity strength (the local and the disordered part by the same factor, as in eq. (35)) initially always has a beneficial effect, which is overtaken by the effect of the disorder only for larger connectivity. In real biological networks, the connectivity is not fixed, but builds up during development, partially through learning [23]. One might therefore speculate whether this process is optimized for the synaptic strengths to eventually match this sweet spot.
Similarly, we can ask for the information-theoretical implication of the development of tuning curves as observed in the visual cortex. In young animals, recurrent connections between similarly tuned neurons get enhanced, other weakened [24], and the orientation selectivity is sharpened [25]. According to our analysis, these changes in recurrent connectivity could compensate a decrease in strength of sensory inputs [26], see fig. (4)(b).
While our model allows for this kind of qualitative consideration, it is minimal in the sense that it contains only the ingredients needed to study the effects we are interested in, in their simplest form. This of course limits biological plausibility and calls for enhancements. From a technical point of view, we have of course not considered all possible scenarios. We give below an outlook on possible further directions, starting with the technical aspects.
First, we have limited ourselves to a parameter regime far away from phase transitions. In particular we have not considered the low-temperature/high-connectivity regime. The analysis of this case (without the computation of the Fisher information) was carried out by one of us and collaborators in a series of papers [27; 17; 22], in the specific case of background noise due to an extensive number \(\alpha N\) of alternative attractors embedded in the network. Though the quenched noise distribution in this case was not Gaussian, we do not expect the phase diagram to significantly change. Based on these previous works we can thus build educated guesses on what to expect in terms of information theory. We still expect a glassy phase for large disorder and weak local connectivity, a ferromagnetic ("bump") phase for weak disorder and strong local connectivity and a paramagnetic phase in case both contributions are small (large-temperature regime). In the present study, we have basically stayed in the latter regime (however, the activity was still bump-like due to the feed-forward input). This limitation has also allowed us to stick to the replica-symmetric solution of the saddle-point equations - an assumption that might not be satisfied at very low temperature and strong enough disorder; however, the comparison with the results of Monte-Carlo simulations in fig. (3) led to reasonable results. This is expected; for spin-glass models, the effect of replica-symmetry breaking is typically rather limited, in particular close to the Almeida-Thouless line. In order to study the glassy regime, corresponding to a network load \(\alpha\) beyond the critical value, we expect that replica symmetry breaking has to be considered. However, as for the Fisher information, we expect it to be approximately zero in the glassy state because the activity would not show any dependence on space anymore (besides a residual one due to the input), and the disorder-averaged single-neuron variances vanish for large disorder (and all other cumulants as well). As for the paramagnetic to ferromagnetic transition, we expect a qualitative change in the shape of the neural activities, with considerably sharper bumps on the ferromagnetic side of the transition. Due to the mechanisms discussed around eq. (13), we expect a corresponding steep increase in the Fisher information.
On a more biological side, we have made, for convenience, several unrealistic assumptions that could be relaxed. The receptive fields in our setup, for example, all have the same shape and size and are evenly spaced; in reality there is of course variability in shape, and they are scattered in the environment. In particular, place fields may cluster near new objects [28], suggesting the importance of taking into account inhomogeneous densities. These features could rather easily be included in our framework, at least if the probability distributions for the single-neuron properties are independent. The additions suggested above only require another average over them (see also appendix D). Though we have here studied a one-dimensional stimulus, the features determining the Fisher information, such as the sharpness of the tuning curve, can also be defined in higher dimensions, and we expect qualitatively the same results in that case. Last of all, it would be interesting to better understand how much of the information we have estimated can be extracted from the neural population in practice, beyond the linear readout mechanism considered here.
###### Acknowledgements.
TK thanks Ulisse Ferrari and Gabriel Mahuas for many very insightful discussions. This work was partly funded by the Human Frontier Science Program RGP0057/2016 grant and TK by a short-term postdoc fellowship of the German Academic Exchange Service (DAAD).
## Appendix A Computing the cumulant-generating function
As indicated in the main text, computing the Fisher information basically amounts to calculating the covariance matrix of the activity, which we obtain from the disorder-average of the cumulant-generating functional defined in eq. (14), or in other words, of the logarithm of the partition function. In the following, we will explain step by step how to take into account its features, starting with the fixed total activity, then including the disordered part of the connectivity and finally deriving and analysing the saddle-point equations for this case in order to compute the cumulant-generating functional in the thermodynamic limit.
### Effects of the fixed total activity and the space-dependent part of the coupling
As mentioned in the main text, we fix the summed activity of all neurons to a certain number \(M\), mimicking the effect of a global inhibition. This determines the partition function, which reads
\[\mathcal{Z}_{M}\left[\mathbf{h}\right]=\sum_{\mathbf{n},\,\sum_{i}n_{i}=M}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}+\sum_{i=1}^{N}h_{i}n_{i}\right).\]
Explicitly performing the spin sums under this constraint is difficult, so we introduce the Fourier series with \(\mathcal{Z}_{M}\left[\mathbf{h}\right]\) as coefficients:

\[U_{k}\left[\mathbf{h}\right] :=\sum_{M=0}^{N}\,e^{i2\pi kM}\mathcal{Z}_{M}\left[\mathbf{h}\right]\] \[=\sum_{\mathbf{n}}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}+\sum_{i=1}^{N}h_{i}n_{i}\right)\sum_{M=0}^{N}\,e^{i2\pi kM}\delta_{M,\sum_{i}n_{i}}\] \[=\sum_{\mathbf{n}}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}+\sum_{i=1}^{N}\left(h_{i}+i2\pi k\right)n_{i}\right).\]
Applying the transform to obtain the Fourier coefficients from a periodic function, we get
\[\mathcal{Z}_{M}\left[\mathbf{h}\right] =\int_{0}^{1}dk\,e^{-i2\pi kM}U_{k}\left[\mathbf{h}\right]\] \[=\int_{0}^{1}dk\,\sum_{\mathbf{n}}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}-i2\pi k\left(M-\sum_{i=1}^{N}n_{i}\right)+\sum_{i=1}^{N}h_{i}n_{i}\right).\]
For \(N\gg 1\), we can evaluate the \(k\)-integral in saddle-point approximation, so that in this limit, we replace the partition function by its "grand-canonical" counterpart
\[\mathcal{Z}_{f,\mathrm{gc}}\left[\mathbf{h}\right]:=\inf_{\lambda}\left[\sum_{\mathbf{n}}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}+\sum_{i=1}^{N}h_{i}n_{i}+\lambda\left(\sum_{i=1}^{N}n_{i}-Nf\right)\right)\right],\]
where we have introduced \(f:=\frac{M}{N}\) and \(\lambda=i2\pi k\). Note that even though the stationary value of \(\lambda\) is real, we are integrating it along the imaginary axis (using what is known as a Bromwich contour, see e.g. [29], appendix C), as always when the integration variable has been introduced as a Lagrange multiplier. Varying \(\lambda\), we are therefore looking for the infimum, not the supremum. We obtain the "grand-canonical" cumulant-generating function that we will work with:
\[W_{\lambda}\left[\mathbf{h}\right] :=\ln\left[\sum_{\mathbf{n}}\exp\left(\sum_{i<j}K_{ij}n_{i}n_{j}+\sum_{i=1}^{N}\left[h_{i}n_{i}+\lambda\left(n_{i}-f\right)\right]\right)\right]\] \[\lambda_{\mathbf{h},K}\left(f\right)\text{ such that }\frac{\partial W_{\lambda}\left[\mathbf{h}\right]}{\partial\lambda}=Nf.\]
Now we decouple the interaction term by means of an auxiliary Gaussian field
\[\exp\left(\frac{1}{2}\sum_{i\neq j,}K_{ij}n_{i}n_{j}\right)\] \[= \frac{1}{\left(2\pi\right)^{\frac{N}{2}}\sqrt{\det\left(K\right) }}\int d\mathbf{\phi}\,e^{-\frac{1}{2}\mathbf{\phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}+\sum_ {i}\phi_{i}n_{i}}\]
and replace the \(\phi\)-integral by another saddle-point approximation:
\[W\left[\mathbf{h}\right]=\sup_{\mathbf{\phi}}\,\inf_{\lambda}\left[-\frac{1}{2}\mathbf{ \phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}+\sum_{i}\left(\ln\left(1+e^{h_{i}+\lambda+ \phi_{i}}\right)-\lambda f\right)\right]\]
For the examples shown in the figures, we are assuming a rectangular shape for \(K\),
\[K\left(r_{i}-r_{j}\right)=\begin{cases}K_{\mathrm{rec}},&\text{for }\left|r_{i}-r_{j} \right|\leq w_{\mathrm{rec}}\\ 0,&\text{else},\end{cases} \tag{10}\]
but this choice is made only for convenience; the theoretical results extend to general shapes.
### Incorporating disorder
Drawing random connections in addition to the spatially ordered ones modifies the extremizing probability distribution and introduces further contributions to the pairwise covariances. We would like to compute the quenched average of the cumulant-generating functional, so
\[\left\langle W\left(\mathbf{h}\right)\right\rangle_{J} =\int dJ\,P\left(J\right)\ln\left[\sum_{\mathbf{n}}e^{\frac{1}{2}\sum_{i\neq j}(J_{ij}+K_{ij})n_{i}n_{j}+\sum_{i}n_{i}\left(\mathrm{U}(\xi-r_{i})+h_{i}\right)}\right] \tag{11}\] \[=\lim_{n\to 0}\int dJ\,P\left(J\right)\left[\frac{-1+\sum_{\mathbf{n}^{1},\ldots,\mathbf{n}^{n}}e^{\frac{1}{2}\sum_{i\neq j}(J_{ij}+K_{ij})\sum_{a=1}^{n}n_{i}^{a}n_{j}^{a}+\sum_{i}\sum_{a=1}^{n}n_{i}^{a}\left(\mathrm{U}(\xi-r_{i})+h_{i}\right)}}{n}\right], \tag{12}\]
where we have used the replica trick to represent the logarithm [19]. As indicated in the main text, eq. (2), we assume that the couplings are uncorrelated and Gaussian, so that, after the standard procedure of introducing appropriate auxiliary fields, we obtain
\[\int dJ\,P\left(J\right)\,e^{\sum_{i\neq j}J_{ij}\sum_{\alpha=1}^{n}n_{i}^{\alpha}n_{j}^{\alpha}} \tag{10}\] \[= \exp\left(-\frac{1}{2}\frac{g^{2}}{N}\sum_{i}\sum_{\alpha,\beta}n_{i}^{\alpha}n_{i}^{\beta}\right)\prod_{\alpha}\left[\frac{\sqrt{N}}{g\sqrt{2\pi}}\int d\bar{q}_{\alpha}\exp\left(-\frac{1}{2}\frac{N}{g^{2}}\bar{q}_{\alpha}^{2}+\bar{q}_{\alpha}\sum_{i}n_{i}^{\alpha}\right)\right]\] (11) \[\times\prod_{\alpha\neq\beta}\left[\frac{\sqrt{N}}{g\sqrt{2\pi}}\int dq_{\alpha\beta}\,\exp\left(-\frac{1}{2}\frac{N}{g^{2}}q_{\alpha\beta}^{2}+q_{\alpha\beta}\sum_{i}n_{i}^{\alpha}n_{i}^{\beta}\right)\right]. \tag{12}\]
We combine this result with the contribution from the network without disorder, but with local connectivity, and solve the resulting integral, assuming replica symmetry in \(q\) and \(\mathbf{\phi}\). The assumption of replica symmetry is checked numerically by comparing our theoretical results with the outcomes of Monte-Carlo computations and discussed in sec. IV. Dropping subleading terms in \(N\), we solve the integrals in saddle-point approximation and obtain for \(W\):
\[\left\langle W\left(\mathbf{h}\right)\right\rangle_{J}= \lim_{n\to 0}\operatorname*{extr}_{q,\bar{q},\mathbf{\phi}}\left[-\frac{1}{n}+\frac{e^{-\frac{1}{2}Ng^{2}n(n-1)q^{2}-\frac{1}{2}Ng^{2}n\bar{q}^{2}}\,e^{-\frac{1}{2}n\mathbf{\phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}}}{n}\right. \tag{13}\] \[\left.\times\left(\prod_{i,\gamma}\sum_{n_{i}^{\gamma}=0}^{1}\right)e^{g^{2}\bar{q}\sum_{i}\sum_{\alpha}n_{i}^{\alpha}+g^{2}q\sum_{i}\sum_{\alpha\neq\beta}n_{i}^{\alpha}n_{i}^{\beta}+\sum_{i}\sum_{\alpha=1}^{n}n_{i}^{\alpha}\left(\phi_{i}+\mathrm{U}(\xi-r_{i})+h_{i}\right)}\right]\] (14) \[= \frac{1}{2}Ng^{2}q^{2}-\frac{1}{2}Ng^{2}\bar{q}^{2}-\frac{1}{2}\mathbf{\phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}\] (15) \[+\prod_{k}\frac{1}{\sqrt{2\pi}}\int dt_{k}\,e^{-\frac{t_{k}^{2}}{2}}\sum_{i}\ln\left[1+e^{\phi_{i}+t_{i}g\sqrt{2q}+\mathrm{U}(\xi-r_{i})+g^{2}(\bar{q}-q)+h_{i}}\right]. \tag{16}\]
Taking now also into account the restriction on the total activity, the mean-field cumulant-generating functional reads
\[\left\langle W\left(\mathbf{h}\right)\right\rangle_{J}= \operatorname*{extr}_{q,\bar{q},\mathbf{\phi},\lambda}\left\{\frac{1}{2}Ng^{2}q^{2}-\frac{1}{2}Ng^{2}\bar{q}^{2}-\frac{1}{2}\mathbf{\phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}-N\lambda f\right. \tag{17}\] \[\left.+\prod_{k}\frac{1}{\sqrt{2\pi}}\int dt_{k}\,e^{-\frac{t_{k}^{2}}{2}}\sum_{i}\ln\left[1+e^{\phi_{i}+t_{i}g\sqrt{2q}+\mathrm{U}(\xi-r_{i})+g^{2}(\bar{q}-q)+\lambda+h_{i}}\right]\right\}. \tag{18}\]
Because we are evaluating this quantity only at its extremal values, we are free to express it in shifted coordinates, \(\lambda+g^{2}\left(\bar{q}-q\right)\rightarrow\lambda\), in order to simplify our expressions and to get rid of \(\bar{q}\), so that we obtain
\[\left\langle W_{f}\left(\mathbf{h}\right)\right\rangle_{J}= \operatorname*{extr}_{q,\bar{q},\mathbf{\phi},\lambda}\left\{\frac{1}{2}Ng^{2}q^{2}-\frac{1}{2}Ng^{2}\bar{q}^{2}-\frac{1}{2}\mathbf{\phi}^{\mathrm{T}}K^{-1}\mathbf{\phi}-N\left(\lambda-g^{2}\left(\bar{q}-q\right)\right)f\right. \tag{19}\] \[\left.+\prod_{k}\frac{1}{\sqrt{2\pi}}\int dt_{k}\,e^{-\frac{t_{k}^{2}}{2}}\sum_{i}\ln\left[1+e^{\phi_{i}+t_{i}g\sqrt{2q}+\mathrm{U}(\xi-r_{i})+\lambda+h_{i}}\right]\right\}\] (20) \[= G_{g}\left(\mathbf{h},\mathbf{\phi},q,\lambda\right), \tag{21}\]
which leads to the saddle-point equations
\[q =f+\int dx\,\int Dt_{x}\,\left\{\frac{1}{\left[1+e^{-\left(\phi_{x}+t_{x}g\sqrt{2q}+\mathrm{U}(\xi-x)+\lambda+h_{x}\right)}\right]^{2}}-\frac{1}{1+e^{-\left(\phi_{x}+t_{x}g\sqrt{2q}+\mathrm{U}(\xi-x)+\lambda+h_{x}\right)}}\right\} \tag{22}\] \[\bar{q} =f\] (23) \[f =\int dx\,\int Dt_{x}\,\frac{1}{1+e^{-\left(\phi_{x}+t_{x}g\sqrt{2q}+\mathrm{U}(\xi-x)+\lambda+h_{x}\right)}}\] (24) \[\phi\left(x\right) =\int dy\,K\left(x-y\right)\int Dt_{y}\,\frac{1}{1+e^{-\left(\phi_{y}+t_{y}g\sqrt{2q}+\mathrm{U}(\xi-y)+\lambda+h_{y}\right)}}. \tag{25}\]
Taking the limit \(N\to\infty\), we have turned the sums over neuron sites into integrals over space, which we indicate by renaming the indices to \(x\) and \(y\) instead of \(i\) and \(j\). Finally, setting \(\mathbf{h}=0\) and using eq. (24) to simplify eq. (22), we obtain the final saddle-point equations as given in the main text, eqs. (18) to (20). Note that this simplification is valid for the saddle-point values \(q\), \(\left\{\phi_{x}\right\}_{x}\) and \(\lambda\); however, when taking further derivatives of \(G\) with respect to \(\mathbf{h}\) (as necessary to determine covariances), we have to assume general \(q\), \(\left\{\phi_{x}\right\}_{x}\) and \(\lambda\) (not as given in the saddle point) and therefore have to use the right-hand side of eq. (22), and not of eq. (18).
### Analysis of the saddle-point equations in the limit \(g\to 0\)
The quantities \(q\), \(\phi_{x}\) and \(\lambda\) are implicitly given by
\[0=\frac{\partial}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]} G_{g}\left[q,\mathbf{\phi},\lambda\right], \tag{102}\]
For \(g=0\), the integrands in the saddle-point equations (18) to (20) become independent of \(t\) and we can perform the Gaussian integrals, so that we obtain

\[q =\int dx\,\frac{1}{\left[1+e^{-\left(\phi_{x}+\mathrm{U}\left(\xi-x\right)+\lambda\right)}\right]^{2}}, \tag{103}\] \[\phi_{x} =\int dy\,K\left(x-y\right)\frac{1}{1+e^{-\left(\phi_{y}+\mathrm{U}\left(\xi-y\right)+\lambda\right)}},\] (104) \[f =\int dx\,\frac{1}{1+e^{-\left(\phi_{x}+\mathrm{U}\left(\xi-x\right)+\lambda\right)}}, \tag{105}\]
the latter two corresponding to eqs. (12) - (13) in [17]. In particular, all auxiliary fields have a well-behaved limit for \(g\to 0\). Furthermore, from eq. (102), we obtain the derivatives of the auxiliary variables with respect to \(g\) to be given by
\[\frac{\partial}{\partial g}\begin{pmatrix}\left\{\phi_{x}\right\}_{x}\\ \lambda\\ q\end{pmatrix}=-\left(\frac{\partial^{2}}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]^{2}}G_{g}\left[q,\mathbf{\phi},\lambda\right]\right)^{-1}\frac{\partial^{2}}{\partial g\,\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]}G_{g}\left[q,\mathbf{\phi},\lambda\right]. \tag{106}\]
Further differentiating \(\frac{\partial}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]}G _{g}\left[q,\mathbf{\phi},\lambda\right]\) with respect to \(g\) yields
\[\frac{\partial^{2}}{\partial g\,\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q\right]}G_{g}\left[q,\mathbf{\phi},\lambda\right]=\begin{pmatrix}\left\{\int dy\,K_{xy}\int\mathcal{D}t_{y}\,t_{y}\sqrt{2q}\cdot m_{y}\left(1-2m_{y}\right)\right\}_{x}\\ \int dy\,\int\mathcal{D}t_{y}\,t_{y}\sqrt{2q}\cdot m_{y}\left(1-2m_{y}\right)\\ \int dy\,\int\mathcal{D}t_{y}\,t_{y}\sqrt{2q}\cdot m_{y}\left(1-2m_{y}\right)\left(1-m_{y}\right)\end{pmatrix}\stackrel{{ g=0}}{{=}}0, \tag{107}\]
with \(m_{x}\) as introduced in eq. (27). The last equality in eq. (107) holds because for \(g=0\), \(m_{x}\) is independent of \(t_{x}\) and the remaining \(t_{x}\)-integrand is antisymmetric. To obtain the derivatives of the order parameters at \(g=0\), we therefore only have to check that differentiating \(\partial_{\left\{\phi_{x}\right\}_{x},\lambda,q}G\) with respect to \(q,\phi_{x}\) and \(\lambda\) once more yields a regular Hessian. We obtain
\[\frac{\partial^{2}}{\partial\left[\left\{\phi_{x}\right\}_{x},\lambda,q \right]^{2}}G_{g}\left[q,\mathbf{\phi},\lambda\right]=\begin{pmatrix}-\left(K^{-1 }\right)_{xy}+\delta_{xy}v_{y}&v_{x}&g^{2}\kappa_{x}^{3}\\ v_{y}&\int dz\,v_{z}&g^{2}\int dz\,\kappa_{z}^{3}\\ g^{2}\kappa_{y}^{3}&g^{2}\int dz\,\kappa_{z}^{3}&g^{2}+g^{4}\int dz\,\kappa_{z} ^{4}\end{pmatrix}, \tag{108}\]
where we have omitted the \(t_{x}\)-dependence of \(m_{x}\) for brevity and have introduced the higher-order cumulants
\[\kappa_{x}^{3}\coloneqq \int\mathcal{D}t\,\frac{\partial^{2}m_{x}}{\partial\phi_{x}^{2}}=\int\mathcal{D}t\,m_{x}\left(1-2m_{x}\right)\left(1-m_{x}\right) \tag{109}\] \[\kappa_{x}^{4}\coloneqq \int\mathcal{D}t\,\frac{\partial^{3}m_{x}}{\partial\phi_{x}^{3}}=\int\mathcal{D}t\,m_{x}\left(1-m_{x}\right)\left(1-6m_{x}+6m_{x}^{2}\right), \tag{110}\]
(the fourth-order one for later use). It is not apparent why the Hessian should have a zero mode - indeed, this would mean in particular that the saddle-point approximation is not well-defined. So as long as we trust the saddle-point approximation, we also know that the derivatives of the order-parameters with respect to \(g\) vanish for \(g=0\). Also, in appendix B, we numerically confirm that the Hessian does not have zero modes for \(g\to 0\).
## Appendix B Computing the Fisher information
First, we convince ourselves that computing the Fisher information simply amounts to computing the covariance matrix. Consider the probability distribution for the neural network state \(\mathbf{n}\), conditioned on the stimulus \(\xi\), \(P\left(\mathbf{n}|\xi\right)\), given by
\[P\left(\mathbf{n}|\xi\right)=\frac{1}{\mathcal{Z}_{J}\left(\xi\right)}e^{\sum_{i}n _{i}\mathrm{U}\left(\xi-r_{i}\right)+\sum_{i<j}\left(J_{ij}+K_{ij}\right)n_{i}n _{j}},\]
and
\[\mathcal{Z}_{J}\left(\xi\right)=\sum_{\mathbf{n}}e^{\sum_{i}n_{i}\mathrm{U}\left( \xi-r_{i}\right)+\sum_{i<j}\left(J_{ij}+K_{ij}\right)n_{i}n_{j}}.\]
For the Fisher information, we need the second derivative of the logarithm of \(P\left(\mathbf{n}|\xi\right)\) with respect to \(\xi\):
\[-\frac{\partial^{2}}{\partial\xi^{2}}\ln\left(P\left(\mathbf{n}|\xi \right)\right)= -\frac{\partial^{2}}{\partial\xi^{2}}\sum_{i}n_{i}\mathrm{U}\left( \xi-r_{i}\right)+\frac{\partial^{2}}{\partial\xi^{2}}\ln\mathcal{Z}_{J}\left( \xi\right)\] \[= -\frac{\partial}{\partial\xi}\sum_{i}n_{i}\mathrm{U}^{\prime} \left(\xi-r_{i}\right)+\frac{\partial}{\partial\xi}\frac{\frac{\partial}{ \partial\xi}\mathcal{Z}_{J}\left(\xi\right)}{\mathcal{Z}_{J}\left(\xi\right)}\] \[= -\sum_{i}n_{i}\mathrm{U}^{\prime\prime}\left(\xi-r_{i}\right)+ \frac{\frac{\partial^{2}}{\partial\xi^{2}}\mathcal{Z}_{J}\left(\xi\right)}{ \mathcal{Z}_{J}\left(\xi\right)}-\left(\frac{\frac{\partial}{\partial\xi} \mathcal{Z}_{J}\left(\xi\right)}{\mathcal{Z}_{J}\left(\xi\right)}\right)^{2}.\]
Upon averaging over the neural states \(\mathbf{n}\), we obtain
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)= \left\langle-\left\langle\sum_{i}n_{i}\mathrm{U}^{\prime\prime}\left(\xi-r_{i}\right)\right\rangle_{\mathbf{n}}+\left\langle\sum_{i}n_{i}\mathrm{U}^{\prime\prime}\left(\xi-r_{i}\right)+\left(\sum_{i}n_{i}\mathrm{U}^{\prime}\left(\xi-r_{i}\right)\right)^{2}\right\rangle_{\mathbf{n}}-\left\langle\sum_{i}n_{i}\mathrm{U}^{\prime}\left(\xi-r_{i}\right)\right\rangle_{\mathbf{n}}^{2}\right\rangle_{J}\] \[= \sum_{i,j}\mathrm{U}^{\prime}\left(\xi-r_{i}\right)\left\langle\left\langle n_{i}n_{j}\right\rangle_{\mathbf{n}}-\left\langle n_{i}\right\rangle_{\mathbf{n}}\left\langle n_{j}\right\rangle_{\mathbf{n}}\right\rangle_{J}\mathrm{U}^{\prime}\left(\xi-r_{j}\right), \tag{10}\]
where we have used the usual thermal average
\[\left\langle f\left(\mathbf{n}\right)\right\rangle_{\mathbf{n}}:=\frac{1}{\mathcal{Z}_{J}\left(\xi\right)}\sum_{\mathbf{n}}f\left(\mathbf{n}\right)e^{\sum_{i}n_{i}\mathrm{U}\left(\xi-r_{i}\right)+\sum_{i<j}\left(J_{ij}+K_{ij}\right)n_{i}n_{j}}. \tag{11}\]
As indicated before, to determine the Fisher information, we therefore just have to compute the covariance matrix, which we achieve by differentiating the cumulant-generating functional twice with respect to \(\mathbf{h}\), considering all indirect dependencies via the auxiliary fields (evaluated at their respective saddle-point values). Taking into account both the fixed total activity and the disorder, the cumulant-generating functional is given by eq. (115). Formally differentiating this expression yields
\[\frac{d^{2}}{d\mathbf{h}^{2}}\left\langle W_{f}\left(\mathbf{h}\right) \right\rangle_{J}= \frac{\partial^{2}G}{\partial\mathbf{h}^{2}}+2\frac{\partial^{2}G}{ \partial\mathbf{h}\partial\mathbf{\phi}}\frac{\partial\mathbf{\phi}}{\partial\mathbf{h}}+2 \frac{\partial^{2}G}{\partial\mathbf{h}\partial q}\frac{\partial q}{\partial\bm {h}}+2\frac{\partial^{2}G}{\partial\mathbf{h}\partial\lambda}\frac{\partial\lambda }{\partial\mathbf{h}} \tag{12}\] \[+\frac{\partial^{2}G}{\partial q^{2}}\left(\frac{\partial q}{ \partial\mathbf{h}}\right)^{2}+\frac{\partial^{2}G}{\partial\lambda^{2}}\left( \frac{\partial\lambda}{\partial\mathbf{h}}\right)^{2}+\frac{\partial\phi}{ \partial\mathbf{h}}\frac{\partial^{2}G}{\partial\mathbf{\phi}^{2}}\frac{\partial\phi }{\partial\mathbf{h}}\] (13) \[+2\frac{\partial^{2}G}{\partial q\partial\lambda}\frac{\partial q} {\partial\mathbf{h}}\frac{\partial\lambda}{\partial\mathbf{h}}+2\frac{\partial^{2}G}{ \partial q\partial\phi}\frac{\partial q}{\partial\mathbf{h}}\frac{\partial\mathbf{\phi }}{\partial\mathbf{h}}+2\frac{\partial^{2}G}{\partial\lambda\partial\phi}\frac{ \partial\lambda}{\partial\mathbf{h}}\frac{\partial\phi}{\partial\mathbf{h}}. \tag{14}\]
We obtain the derivatives of \(q\), \(\lambda\) and \(\mathbf{\phi}\) by taking the total derivatives of their defining saddle-point equations, which yields
\[0 =\frac{d}{d\mathbf{h}}\frac{\partial}{\partial q}G\left(\mathbf{h},\mathbf{\phi },q,\lambda\right)=\frac{\partial^{2}G}{\partial q^{2}}\frac{\partial q}{ \partial\mathbf{h}}+\frac{\partial^{2}G}{\partial\lambda\partial q}\frac{\partial \lambda}{\partial\mathbf{h}}+\frac{\partial^{2}G}{\partial\mathbf{\phi}\partial q} \frac{\partial\mathbf{\phi}}{\partial\mathbf{h}}+\frac{\partial^{2}G}{\partial q \partial\mathbf{h}} \tag{150}\] \[0 =\frac{d}{d\mathbf{h}}\frac{\partial}{\partial\lambda}G\left(\mathbf{h}, \mathbf{\phi},q,\lambda\right)=\frac{\partial^{2}G}{\partial q\partial\lambda} \frac{\partial q}{\partial\mathbf{h}}+\frac{\partial^{2}G}{\partial\lambda^{2}} \frac{\partial\lambda}{\partial\mathbf{h}}+\frac{\partial^{2}G}{\partial\mathbf{\phi} \partial\lambda}\frac{\partial\mathbf{\phi}}{\partial\mathbf{h}}+\frac{\partial^{2}G }{\partial\lambda\partial\mathbf{h}}\] (151) \[0 =\frac{d}{d\mathbf{h}}\frac{\partial}{\partial\mathbf{\phi}}G\left(\mathbf{h}, \mathbf{\phi},q,\lambda\right)=\frac{\partial^{2}G}{\partial q\partial\mathbf{\phi} }\frac{\partial q}{\partial\mathbf{h}}+\frac{\partial^{2}G}{\partial\lambda \partial\mathbf{\phi}}\frac{\partial\lambda}{\partial\mathbf{h}}+\frac{\partial^{2}G }{\partial\mathbf{\phi}^{2}}\frac{\partial\mathbf{\phi}}{\partial\mathbf{h}}+\frac{ \partial^{2}G}{\partial\mathbf{\phi}\partial\mathbf{h}}, \tag{152}\]
so that we obtain after inserting into (147)
\[\frac{d^{2}}{d\mathbf{h}^{2}}\left\langle W_{f}\left(\mathbf{h}\right) \right\rangle_{J}=\frac{\partial^{2}G}{\partial\mathbf{h}^{2}}-\begin{pmatrix} \frac{\partial^{2}G}{\partial q\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial\lambda\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial\mathbf{\phi}\partial\mathbf{h}}\end{pmatrix}^{\mathrm{T}} \begin{pmatrix}\frac{\partial^{2}G}{\partial q^{2}}&\frac{\partial^{2}G}{ \partial q\partial\lambda}&\frac{\partial^{2}G}{\partial q\partial\mathbf{\phi}}\\ \frac{\partial^{2}G}{\partial\lambda\partial q}&\frac{\partial^{2}G}{\partial \lambda^{2}}&\frac{\partial^{2}G}{\partial\lambda\partial\mathbf{\phi}}\\ \frac{\partial^{2}G}{\partial\mathbf{\phi}\partial q}&\frac{\partial^{2}G}{ \partial\mathbf{\phi}\partial\lambda}&\frac{\partial^{2}G}{\partial\mathbf{\phi}^{2}} \end{pmatrix}^{-1}\begin{pmatrix}\frac{\partial^{2}G}{\partial q\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial\lambda\partial\mathbf{h}}\\ \frac{\partial^{2}G}{\partial\mathbf{\phi}\partial\mathbf{h}}\end{pmatrix}. \tag{153}\]
In order to compactly write down the entries of the matrix and the vectors above, we introduce the effective local connectivity
\[\left(K_{\mathrm{eff}}^{-1}\right)_{xy}\coloneqq-\frac{\partial^{2}G}{ \partial\phi_{x}\partial\phi_{y}}\Leftrightarrow\left[\left(\frac{\partial^{2} G}{\partial\phi\partial\phi}\right)^{-1}\right]_{xy}=-\left(K_{\mathrm{eff}} \right)_{xy}, \tag{154}\]
which fulfills the Dyson equation whose concrete form we obtain by performing the derivatives of \(G\) explicitly:
\[\left(K_{\mathrm{eff}}^{-1}\right)_{xy} =\left(K^{-1}\right)_{xy}-\delta_{xy}v_{x} \tag{155}\] \[\Leftrightarrow K_{xy}^{\mathrm{eff}} =K_{xy}+\int dz\,K_{xz}v_{z}K_{zy}^{\mathrm{eff}}. \tag{156}\]
Using the identities derived in appendix C, in particular eq. (142), we can write down the final form of the covariance matrix:
\[C=V+VK_{\mathrm{eff}}V-\left(\mathbb{1}_{N}+VK_{\mathrm{eff}}\right)\left(g \mathbf{\kappa}^{3},\,\,\mathbf{v}\right)S^{-1}\begin{pmatrix}g\mathbf{\kappa}^{3}\\ \mathbf{v}\end{pmatrix}\left(\mathbb{1}_{N}+K_{\mathrm{eff}}V\right), \tag{157}\]
where \(V\) is the diagonal matrix with the disorder-averaged variances \(v_{i}\) and
\[S=\begin{pmatrix}N+g^{2}\sum_{i}\kappa_{i}^{4}&g\sum_{i}\kappa_{i}^{3}\\ g\sum_{i}\kappa_{i}^{3}&\sum_{i}v_{i}\end{pmatrix}+\begin{pmatrix}g\mathbf{\kappa}^{ 3}\\ \mathbf{v}\end{pmatrix}K_{\mathrm{eff}}\left(g\mathbf{\kappa}^{3},\,\,\mathbf{v}\right). \tag{158}\]
Finally, the Fisher information is given by
\[\mathcal{I}_{\mathbf{n}}\left(\xi\right)=\sum_{x,y}\mathrm{U}^{\prime}\left(\xi-x \right)C_{xy}\mathrm{U}^{\prime}\left(\xi-y\right). \tag{159}\]
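For concreteness, eqs. (155)-(159) translate directly into a short numerical routine. The following is a minimal numpy sketch (names and inputs are our own, not from an accompanying codebase), assuming the disorder-averaged cumulants \(v\), \(\kappa^{3}\), \(\kappa^{4}\), the (invertible) bare kernel \(K\) and the input derivative \(\mathrm{U}^{\prime}\) have already been evaluated on a spatial grid of \(N\) points, and using our reading of the \(2\times 2\) matrix \(S\) in eq. (158):

```python
import numpy as np

def fisher_information(v, kappa3, kappa4, K, g, Uprime):
    # v, kappa3, kappa4, Uprime: length-N arrays on the spatial grid;
    # K: N x N bare interaction kernel (assumed invertible); g: disorder strength.
    N = len(v)
    V = np.diag(v)
    # Dyson equation, eq. (155): K_eff^{-1} = K^{-1} - diag(v)
    K_eff = np.linalg.inv(np.linalg.inv(K) - V)
    # The two "global" directions (g*kappa3, v) entering eq. (157)
    M = np.column_stack([g * kappa3, v])                       # N x 2
    # 2 x 2 matrix S, eq. (158) (as reconstructed above)
    S = np.array([[N + g**2 * kappa4.sum(), g * kappa3.sum()],
                  [g * kappa3.sum(),        v.sum()]])
    S = S + M.T @ K_eff @ M
    # Covariance matrix, eq. (157)
    P = np.eye(N) + V @ K_eff
    C = V + V @ K_eff @ V - P @ M @ np.linalg.solve(S, M.T @ P.T)
    # Fisher information, eq. (159)
    return Uprime @ C @ Uprime
```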
The last term of eq. (157) contributes to the Fisher information with a term containing (twice) the expression
\[\begin{pmatrix}g\mathbf{\kappa}^{3}\\ \mathbf{v}\end{pmatrix}\left(\mathbb{1}_{N}+K_{\mathrm{eff}}V\right)\mathbf{U}^{\prime} \tag{160}\]
The space-dependence of this contribution to the covariance is mostly determined by the shape of \(U\) (multiplications by \(K\) or \(K_{\mathrm{eff}}\) merely smear it out), so that the multiplication with the derivative \(U^{\prime}\) followed by the summation over space is well approximated by the spatial integral of a total derivative and therefore yields a contribution close to \(0\). This part of the covariance therefore only yields subleading contributions to the Fisher information (see fig. (6)) and we can neglect it in the analysis. This makes sense because it emerges from the source-dependence of \(q\) and \(\lambda\), the auxiliary variables representing the disorder and the global inhibition, which are global quantities. It is therefore expected that their contribution to the spatial information is negligible.
### Analysis of the covariance matrix
Having an analytical expression for the covariance matrix at hand, we can investigate its behavior for special cases, in particular around \(g=0\). Because it depends on \(g\) only via the cumulants \(v\), \(\kappa^{3}\) and \(\kappa^{4}\), we primarily have to examine their behavior near \(g=0\). We observe that
\[\frac{d}{dg}v_{x} =\frac{\partial}{\partial g}v_{x}+\frac{\partial v_{x}}{\partial q }\frac{\partial q}{\partial g}+\int dy\,\frac{\partial v_{x}}{\partial\phi_{y }}\frac{\partial\phi_{y}}{\partial g}+\frac{\partial v_{x}}{\partial\lambda} \frac{\partial\lambda}{\partial g} \tag{101}\] \[=\frac{\partial}{\partial g}\int\mathcal{D}t\,m_{x}\left(1-m_{x} \right)=\int\mathcal{D}t\,t\sqrt{2q}\,m_{x}\left(1-3m_{x}+2m_{x}^{2}\right) \overset{g=0}{=}0, \tag{102}\]
where we have used the result from appendix (A.3) that the derivatives of the auxiliary variables with respect to \(g\) vanish as \(g\) goes to \(0\). Again, because \(m_{x}\) does not depend on \(t_{x}\) for \(g=0\) and the remaining integral over \(t_{x}\) is antisymmetric, this expression also yields \(0\). With the same argument, the other cumulants vanish as well. Therefore, the linear orders of all \(g\)-dependent quantities on which the covariance \(C\) depends vanish. Thus, the derivatives of the covariances and of the Fisher information equal \(0\) at \(g=0\) as well, as is apparent from the plots in figure (3).
### Relating inputs and tuning curves by means of \(K_{\rm eff}\)
Without disorder, the tuning curve \(\mathrm{T}\) in the thermodynamic limit is given by
\[\mathrm{T}_{x} =\int dy\,K\left(x-y\right)\frac{1}{1+e^{-\left(\phi_{y}+\mathrm{ U}\left(\xi-y\right)+\lambda\right)}} \tag{103}\] \[\phi_{y} =\int dz\,K\left(y-z\right)\mathrm{T}_{z} \tag{104}\]
and therefore, we can write for its derivative
\[\mathbf{T}^{\prime} =\mathbf{T}\left(1-\mathbf{T}\right)\left(\mathbf{U}^{\prime}+K \mathbf{T}^{\prime}\right) \tag{105}\] \[\Leftrightarrow\left[1-\mathbf{T}\left(1-\mathbf{T}\right)K \right]\mathbf{T}^{\prime} =\mathbf{T}\left(1-\mathbf{T}\right)\mathbf{U}^{\prime}\] (106) \[\mathbf{T}^{\prime} =\left(1-VK\right)^{-1}V\mathbf{U}^{\prime}=\left(V^{-1}-K\right) ^{-1}\mathbf{U}^{\prime}\] (107) \[\Leftrightarrow\mathbf{T}^{\prime} =V\left(1+K\left(V^{-1}-K\right)^{-1}\right)\mathbf{U}^{\prime}, \tag{108}\]
Figure 6: The Fisher information per neuron as a function of the disorder strength, showing the contributions from the different parts of the covariance as defined in eq. (23). Parameters as in fig. (3).
where we have abbreviated \(V_{ij}=\delta_{ij}\mathrm{T}_{i}\left(1-\mathrm{T}_{i}\right)\). We furthermore have
\[K_{\mathrm{eff}} =K+KVK_{\mathrm{eff}} \tag{101}\] \[\Leftrightarrow\left(1-KV\right)K_{\mathrm{eff}} =K\] (102) \[\Leftrightarrow K_{\mathrm{eff}} =\left(1-KV\right)^{-1}K=V^{-1}\left(V^{-1}-K\right)^{-1}K\] (103) \[\Leftrightarrow K_{\mathrm{eff}} =K\left(V^{-1}-K\right)^{-1}V^{-1}\] (104) \[\Leftrightarrow K_{\mathrm{eff}}V =K\left(V^{-1}-K\right)^{-1}, \tag{105}\]
where we obtained the second-to-last equivalence by transposing. Inserting this expression into eq. (100), we arrive at eq. (30).
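The chain of identities above is easy to check numerically. A minimal numpy sketch with randomly drawn symmetric \(K\) and diagonal \(V\) (purely illustrative stand-ins, scaled so that all inverses exist):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
# Random symmetric kernel K and diagonal V with entries in the range of T(1-T)
K = rng.normal(size=(N, N)); K = 0.1 * (K + K.T) / 2
V = np.diag(rng.uniform(0.05, 0.25, size=N))
I = np.eye(N)

# Dyson equation, eq. (103): K_eff = (1 - K V)^{-1} K
K_eff = np.linalg.solve(I - K @ V, K)
# Check eq. (105): K_eff V = K (V^{-1} - K)^{-1}
assert np.allclose(K_eff @ V, K @ np.linalg.inv(np.linalg.inv(V) - K))

# Check that eqs. (107) and (108) give the same tuning-curve derivative
Uprime = rng.normal(size=N)
Tprime_a = np.linalg.solve(np.linalg.inv(V) - K, Uprime)
Tprime_b = V @ (Uprime + K @ np.linalg.solve(np.linalg.inv(V) - K, Uprime))
assert np.allclose(Tprime_a, Tprime_b)
```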
## Appendix C Matrix-vector calculus
### Inversion of a matrix with blocks on the diagonal of the sizes \(N\) and \(M\)
Assume we have a matrix of the form
\[U:=\begin{pmatrix}A&b\\ b^{\mathrm{T}}&a\end{pmatrix}, \tag{106}\]
where
\[A\in\mathbb{R}^{N\times N},\ b\in\mathbb{R}^{N\times M},\ a\in\mathbb{R}^{M \times M} \tag{107}\]
and \(A\) is invertible. To invert it, we make the ansatz
\[V:=\begin{pmatrix}C&d\\ d^{\mathrm{T}}&c\end{pmatrix}. \tag{108}\]
Multiplying \(U\) and \(V\), we obtain the conditions
\[AC+bd^{\mathrm{T}} =\mathbb{1}_{N} \tag{109}\] \[Ad+bc =0\] (110) \[b^{\mathrm{T}}C+ad^{\mathrm{T}} =0\] (111) \[b^{\mathrm{T}}d+ac =\mathbb{1}_{M} \tag{112}\]
Solving (110) for \(d\) and inserting into (112), we obtain
\[c =\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\] \[\text{and }d =-A^{-1}b\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}.\]
Solving (109) for \(C\) and inserting the results gained until here, we obtain
\[C=A^{-1}+A^{-1}b\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(A^{-1}b\right) ^{\mathrm{T}}.\]
Plugging these results into the left-hand side of (111), which we did not use so far, we obtain
\[b^{\mathrm{T}}\left(A^{-1}+A^{-1}b\left(a-b^{\mathrm{T}}A^{-1}b \right)^{-1}\left(A^{-1}b\right)^{\mathrm{T}}\right)-a\left[A^{-1}b\left(a-b^ {\mathrm{T}}A^{-1}b\right)^{-1}\right]^{\mathrm{T}}\] \[= \left(\left(A^{-1}b\right)^{\mathrm{T}}+\left(b^{\mathrm{T}}A^{- 1}b-a+a\right)\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(A^{-1}b\right)^{ \mathrm{T}}\right)-a\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(A^{-1}b \right)^{\mathrm{T}}\] \[= a\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(A^{-1}b\right)^{ \mathrm{T}}-a\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(A^{-1}b\right)^{ \mathrm{T}}=0,\]
therefore our ansatz is consistent. Summarizing, we can write the inverse of \(U\) as
\[U^{-1}=\begin{pmatrix}A^{-1}&0\\ 0&0\end{pmatrix}+\begin{pmatrix}-A^{-1}b\\ \mathbb{1}_{M}\end{pmatrix}\left(a-b^{\mathrm{T}}A^{-1}b\right)^{-1}\left(- \left(A^{-1}b\right)^{\mathrm{T}}\,\mathbb{1}_{M}\right). \tag{113}\]
### Vector-matrix-vectors multiplication
Calculating cross-covariances, we are interested in calculating objects of the type
\[\left(B\ \ b\right)\begin{pmatrix}A&b\\ b^{\mathrm{T}}&a\end{pmatrix}^{-1}\begin{pmatrix}B\\ b^{\mathrm{T}}\end{pmatrix}.\]
Making use of (102), we then obtain
\[\left(B\ \ b\right)\begin{pmatrix}A&b\\ b^{\mathrm{T}}&a\end{pmatrix}^{-1}\begin{pmatrix}B\\ b^{\mathrm{T}}\end{pmatrix}\] \[= BA^{-1}B+\left(\mathbb{1}_{N}-BA^{-1}\right)b\left(a-b^{\mathrm{ T}}A^{-1}b\right)^{-1}b^{\mathrm{T}}\left(\mathbb{1}_{N}-A^{-1}B\right) \tag{103}\]
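Both the block-inversion formula (113) and the sandwich identity (103) can be verified numerically against a direct matrix inversion; a minimal numpy sketch with randomly drawn symmetric blocks (dimensions are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 3
A = rng.normal(size=(N, N)); A = A @ A.T + N * np.eye(N)  # symmetric, invertible
a = rng.normal(size=(M, M)); a = a @ a.T + M * np.eye(M)
b = rng.normal(size=(N, M))

U = np.block([[A, b], [b.T, a]])
Ainv = np.linalg.inv(A)
Sinv = np.linalg.inv(a - b.T @ Ainv @ b)

# Block inverse, eq. (113)
Uinv = np.zeros_like(U)
Uinv[:N, :N] = Ainv
corner = np.vstack([-Ainv @ b, np.eye(M)])                # (N+M) x M
Uinv += corner @ Sinv @ corner.T
assert np.allclose(Uinv, np.linalg.inv(U))

# Sandwich identity, eq. (103), for a generic N x N matrix B
B = rng.normal(size=(N, N))
lhs = np.block([[B, b]]) @ np.linalg.inv(U) @ np.block([[B], [b.T]])
rhs = (B @ Ainv @ B
       + (np.eye(N) - B @ Ainv) @ b @ Sinv @ b.T @ (np.eye(N) - Ainv @ B))
assert np.allclose(lhs, rhs)
```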
## Appendix D Relating Fisher and mutual information for uncoupled neurons with inhomogeneous place fields
Here, we consider independent neurons, but allow variability in the tuning curves \(\mathrm{T}\). The probability distribution of the neural population is then given by
\[P\left(\mathbf{n}\right)=\int d\xi\,P_{\xi}\left(\xi\right)\prod_{i=1}^{N}\left[n_{i}\mathrm{T}_{i}\left(\xi\right)+\left(1-n_{i}\right)\left(1-\mathrm{T}_{i}\left(\xi\right)\right)\right]. \tag{104}\]
To compute the mutual information, we first need to compute the entropy of this distribution, which is given by
\[h_{\mathrm{uncond}}=-\lim_{N\to\infty}\frac{1}{N}\sum_{\mathbf{n}}\left\langle P \left(\mathbf{n}\right)\ln\left(P\left(\mathbf{n}\right)\right)\right\rangle_{\mathrm{ T}},\]
where we denote by \(\left\langle\dots\right\rangle_{\mathrm{T}}\) the average over the variability of the tuning curves. The tricky part here is that we average over a logarithm, a complication that we deal with by introducing replicas (\(k+1\) in this case because of the prefactor \(P\left(\mathbf{n}\right)\), compare [30]), which leads to
\[h_{\mathrm{uncond}}= -\lim_{N\to\infty}\frac{1}{N}\lim_{k\to 0}\frac{1}{k}\left\{ \int\prod_{\alpha=0}^{k}\left(d\xi_{\alpha}P_{\xi}\left(\xi_{\alpha}\right) \right)\left(\left\langle\prod_{\alpha=0}^{k}\mathrm{T}\left(\xi_{\alpha} \right)+\prod_{\alpha=0}^{k}\left(1-\mathrm{T}\left(\xi_{\alpha}\right) \right)\right\rangle_{\mathrm{T}}\right)^{N}-1\right\} \tag{105}\] \[= -\lim_{N\to\infty}\frac{1}{N}\lim_{k\to 0}\frac{1}{k}\left\{ \int\prod_{\alpha=0}^{k}\left(d\xi_{\alpha}P_{\xi}\left(\xi_{\alpha}\right) \right)\left(G_{\mathrm{T}}\left(\mathbf{\xi}\right)\right)^{N}-1\right\}, \tag{106}\]
where we have introduced
\[G_{\mathrm{T}}\left(\mathbf{\xi}\right)\coloneqq\left\langle\prod_{\alpha=0 }^{k}\mathrm{T}\left(\xi_{\alpha}\right)+\prod_{\alpha=0}^{k}\left(1-\mathrm{ T}\left(\xi_{\alpha}\right)\right)\right\rangle_{\mathrm{T}}.\]
An obvious idea is now to evaluate eq. (105) in saddle-point approximation, as also shown in [14; 15]. This is indeed what we will do, but with a small twist because one of the eigenvalues of the Hessian of \(G_{\mathrm{T}}\) vanishes for \(k\to 0\). However, this replicon mode can be identified to be the one corresponding to the replica-symmetric direction. This allows us to transform the \(k+1\)-dimensional integral over the \(\xi_{\alpha}\) such that the first coordinate corresponds to the replica-symmetric direction (\(1,\dots,1\)) and the other \(k\) are orthogonal to it. Like this, we can perform the integral over the first coordinate exactly and only the orthogonal directions are evaluated in saddle-point approximation. Having determined the unconditioned entropy in this way, we obtain the mutual information \(\mathrm{MI}=h_{\mathrm{uncond}}-h_{\mathrm{cond}}\) by subtracting
\[h_{\mathrm{cond}}=-\lim_{N\to\infty}\frac{1}{N}\sum_{\mathbf{n}}\int d\xi\ \left\langle P\left(\mathbf{n}|\xi\right)\ln\left(P\left(\mathbf{n}|\xi\right)\right) \right\rangle_{\mathrm{T}} \tag{107}\]
from the unconditioned entropy. Performing the limit of \(k\to 0\) in eq. (105), we see that, to zeroth order, \(h_{\mathrm{uncond}}\) equals \(h_{\mathrm{cond}}\), so that the mutual information is, to first order, given by the one-loop correction
\[\mathrm{MI}=\int d\xi P_{\xi}\left(\xi\right)\,\left[\frac{1}{2}\ln\left(- \frac{N\lambda_{\mathrm{T}}^{1,k=0}\left(\xi\right)}{2\pi}\right)-\frac{1}{2}- \ln\left(P_{\xi}\left(\xi\right)\right)\right]+\mathcal{O}\left(\frac{1}{N} \right). \tag{108}\]
Note that our computation neither requires the introduction of auxiliary fields to perform the average over the state space of the neural population, as in [15], nor do we assume the latter to be normally distributed, as in [14]. However, in return, we are assuming the neurons to be independent, which limits the applicability of our approach.
What is left to do is the computation of the Hessian of \(G_{\mathrm{T}}\). On the replica-symmetric line, we only have two values for its entries, the diagonal and the off-diagonal. We calculate
\[\frac{\partial^{2}G_{\mathrm{T}}}{\partial\xi_{\alpha}^{2}}\bigg{|} _{\xi_{0}=\cdots=\xi_{k}=\xi} =\sum_{n=0,1}\left\langle\prod_{\gamma=0,\gamma\neq\alpha}^{k} \left[n\mathrm{T}\left(\xi_{\gamma}\right)+\left(1-n\right)\left(1-\mathrm{T} \left(\xi_{\gamma}\right)\right)\right]\left(2n-1\right)\mathrm{T}^{\prime \prime}\left(\xi_{\alpha}\right)\right\rangle_{\mathrm{T}}\bigg{|}_{\xi_{0}=\cdots=\xi _{k}=\xi} \tag{106}\] \[=\sum_{n=0,1}\left(2n-1\right)\left\langle\left[n\mathrm{T} \left(\xi\right)+\left(1-n\right)\left(1-\mathrm{T}\left(\xi\right)\right) \right]^{k}\mathrm{T}^{\prime\prime}\left(\xi\right)\right\rangle_{\mathrm{T}}\] (107) \[\overset{k=0}{=}0. \tag{108}\]
and
\[\frac{\partial^{2}G_{\mathrm{T}}}{\partial\xi_{\alpha}\partial \xi_{\beta}}\bigg{|}_{\xi_{0}=\cdots=\xi_{k}=\xi} \tag{109}\] \[=\sum_{n=0,1}\left\langle\prod_{\gamma=0,\gamma\neq\alpha,\beta }^{k}\left[n\mathrm{T}\left(\xi_{\gamma}\right)+\left(1-n\right)\left(1- \mathrm{T}\left(\xi_{\gamma}\right)\right)\right]\left(2n-1\right)^{2}\mathrm{ T}^{\prime}\left(\xi_{\alpha}\right)\mathrm{T}^{\prime}\left(\xi_{\beta}\right) \right\rangle_{\mathrm{T}}\bigg{|}_{\xi_{0}=\cdots=\xi_{k}=\xi}\] (110) \[\overset{k=0}{=}\left\langle\left[\frac{1}{\mathrm{T}\left(\xi\right)}+\frac{1 }{1-\mathrm{T}\left(\xi\right)}\right]\left[\mathrm{T}^{\prime}\left(\xi \right)\right]^{2}\right\rangle_{\mathrm{T}}=\left\langle\frac{\left[\mathrm{T}^{ \prime}\left(\xi\right)\right]^{2}}{\mathrm{T}\left(\xi\right)\left(1-\mathrm{ T}\left(\xi\right)\right)}\right\rangle_{\mathrm{T}}. \tag{112}\]
The eigenvalues of the Hessian of \(G_{\mathrm{T}}\) are given by
\[\lambda_{0}\left(\xi\right) =\left.\frac{\partial^{2}G}{\partial\xi_{\alpha}^{2}}\right|_{ \xi_{0}=\cdots=\xi_{k}=\xi}+k\left.\frac{\partial^{2}G}{\partial\xi_{\alpha} \partial\xi_{\beta}}\right|_{\xi_{0}=\cdots=\xi_{k}=\xi}\] \[\lambda_{1}\left(\xi\right) =\left.\frac{\partial^{2}G}{\partial\xi_{\alpha}^{2}}\right|_{ \xi_{0}=\cdots=\xi_{k}=\xi}-\left.\frac{\partial^{2}G}{\partial\xi_{\alpha} \partial\xi_{\beta}}\right|_{\xi_{0}=\cdots=\xi_{k}=\xi},\]
where the first one is non-degenerate, whereas the second one is \(k\)-fold degenerate. Inserting eqs. (108) and (112), we obtain that, for \(k=0\),
\[\lambda_{0}\left(\xi\right) \overset{k=0}{=}0 \tag{113}\] \[\lambda_{1}\left(\xi\right) \overset{k=0}{=}-\left\langle\frac{\left[\mathrm{T}^{\prime} \left(\xi\right)\right]^{2}}{\mathrm{T}\left(\xi\right)\left(1-\mathrm{T} \left(\xi\right)\right)}\right\rangle_{\mathrm{T}}, \tag{114}\]
where the latter expression equals minus the Fisher information \(\mathcal{I}_{\boldsymbol{n}}\left(\xi\right)\) for the stimulus \(\xi\). Therefore, inserting this result into eq. (108), we finally obtain
\[\mathrm{MI}=\frac{1}{2}\left\langle\ln\left(\frac{N\mathcal{I}_{\boldsymbol{n} }\left(\xi\right)}{2\pi}\right)\right\rangle_{\xi\sim P_{\xi}}-\frac{1}{2}- \left\langle\ln\left(P_{\xi}\left(\xi\right)\right)\right\rangle_{\xi\sim P_{ \xi}}+\mathcal{O}\left(\frac{1}{N}\right), \tag{116}\]
as expected according to [15].
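As a side note, the eigenvalue structure of the replica-symmetric Hessian used above, namely a non-degenerate eigenvalue \(d+k\,o\) along \((1,\dots,1)\) and a \(k\)-fold degenerate eigenvalue \(d-o\) for a matrix with diagonal entries \(d\) and off-diagonal entries \(o\), can be checked with a few lines of numpy (a purely illustrative sanity check, values chosen by us):

```python
import numpy as np

k, d, o = 5, 0.0, -1.2   # d = 0 and o < 0, as in eqs. (113)-(114)
H = d * np.eye(k + 1) + o * (np.ones((k + 1, k + 1)) - np.eye(k + 1))
eigvals = np.sort(np.linalg.eigvalsh(H))
# Non-degenerate eigenvalue d + k*o along (1,...,1); k-fold degenerate d - o
assert np.isclose(eigvals[0], d + k * o)
assert np.allclose(eigvals[1:], d - o)
```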
|
2305.10500 | Learning Likelihood Ratios with Neural Network Classifiers | The likelihood ratio is a crucial quantity for statistical inference in
science that enables hypothesis testing, construction of confidence intervals,
reweighting of distributions, and more. Many modern scientific applications,
however, make use of data- or simulation-driven models for which computing the
likelihood ratio can be very difficult or even impossible. By applying the
so-called ``likelihood ratio trick,'' approximations of the likelihood ratio
may be computed using clever parametrizations of neural network-based
classifiers. A number of different neural network setups can be defined to
satisfy this procedure, each with varying performance in approximating the
likelihood ratio when using finite training data. We present a series of
empirical studies detailing the performance of several common loss functionals
and parametrizations of the classifier output in approximating the likelihood
ratio of two univariate and multivariate Gaussian distributions as well as
simulated high-energy particle physics datasets. | Shahzar Rizvi, Mariel Pettee, Benjamin Nachman | 2023-05-17T18:11:38Z | http://arxiv.org/abs/2305.10500v2 | # Learning Likelihood Ratios with Neural Network Classifiers
###### Abstract
The likelihood ratio is a crucial quantity for statistical inference in science that enables hypothesis testing, construction of confidence intervals, reweighting of distributions, and more. Many modern scientific applications, however, make use of data- or simulation-driven models for which computing the likelihood ratio can be very difficult or even impossible. By applying the so-called "likelihood ratio trick," approximations of the likelihood ratio may be computed using clever parametrizations of neural network-based classifiers. A number of different neural network setups can be defined to satisfy this procedure, each with varying performance in approximating the likelihood ratio when using finite training data. We present a series of empirical studies detailing the performance of several common loss functionals and parametrizations of the classifier output in approximating the likelihood ratio of two univariate and multivariate Gaussian distributions as well as simulated high-energy particle physics datasets.
## I Introduction
Claiming a scientific discovery requires a hypothesis test, i.e. a statistical threshold for claiming that one's experimental data reject the null hypothesis in favor of an alternative hypothesis. This might involve two probability densities:
* \(H_{0}\) (the null hypothesis)
* \(H_{1}\) (the alternative hypothesis)
By the Neyman-Pearson lemma [1], the strongest ("uniformly most powerful") measure of whether the experimental data \(x\) support \(H_{0}\) vs. \(H_{1}\) is a likelihood ratio test. These tests are particularly widespread in reporting results in High-Energy Physics (HEP), but are also commonly used for statistical analyses across astrophysics, biology, medicine, and other scientific domains concerned with hypothesis testing or confidence intervals. The need for likelihood ratios goes beyond hypothesis testing, too - they can also be used to reweight a distribution to align with a target distribution, such as reweighting simulation samples to match real data [2; 3; 4; 5; 6; 7; 8; 9].
In the simplest form of a likelihood ratio test, where \(H_{0}\) and \(H_{1}\) are fully defined by parameters \(\theta_{0}\) and \(\theta_{1}\), the background-only hypothesis is either rejected (or not) depending on the value of the ratio of the likelihoods \(p(x\mid\theta_{0})\) under \(H_{0}\) and \(p(x\mid\theta_{1})\) under \(H_{1}\) in relation to the desired significance level.
In practice, however, the probability densities \(H_{0}\) and \(H_{1}\) may not be explicitly known. Worse, they might be nearly impossible to compute, such as in instances where they are generated by a complex simulation model. In these cases, we can use machine learning to directly approximate the likelihood ratio itself, bypassing the need to approximate the individual probability densities.
A classifier function \(f(x)\) (for instance, from a neural network) designed to distinguish data sampled from \(H_{0}\) (\(f(x)\to 1\)) vs. \(H_{1}\) (\(f(x)\to 0\)) can be used to approximate the likelihood ratio by minimizing a proper loss functional (defined in Section II):
\[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})}= \mathcal{L}(x). \tag{1}\]
For instance, in the familiar case of training a classifier by minimizing the binary cross-entropy loss (see Table 1), the optimal decision function \(f(x)\) is:
\[f(x)=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta_{1})}. \tag{2}\]
We can then approximate the likelihood ratio with a monotonic transformation of the neural network output \(f(x)\)1:
Footnote 1: This notation assumes balanced training sets for simplicity. With imbalanced classes, one would need to modify the likelihood ratio to include prior factors \(p(\theta_{i})\), though the likelihood ratio trick will still apply [10].
\[\frac{f(x)}{1-f(x)} =\frac{\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta _{1})}}{1-\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta_{1})}} \tag{3}\] \[=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta_{1}) -p(x\mid\theta_{0})}\] (4) \[=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})}=\mathcal{L}(x). \tag{5}\]
This procedure, sometimes called the "likelihood ratio trick", is well-known in statistics (see e.g. [11; 12; 13]) and has been frequently used in particle physics [14; 15; 16; 2; 6; 10; 26].
A number of different loss functionals beyond binary cross-entropy can be defined to satisfy this setup, but in practice, not all such classifiers will perform equally well when approximating the likelihood ratio. In this paper, we perform a series of empirical studies to understand how different choices of loss functional and parametrization of the resulting classifier affect the performance of likelihood ratio approximation for pairs of distributions.
Several recent works have investigated some improved configurations for the likelihood ratio trick in certain scientific contexts. [27] introduces a new likelihood estimation procedure as an extension of [14] using binary cross-entropy loss with SELU [28] activation. [29] notes that for one- and two-dimensional toy simulations of particle physics datasets, the maximum likelihood classifier (MLC) loss performed better than the binary cross-entropy loss when estimating the likelihood ratio - the first application of MLC loss in particle physics. [10] directly compares linear and exponential parameterizations of maximum likelihood classifier loss with binary cross-entropy loss for one-dimensional Gaussians. [14] uses calibrated classifiers to improve likelihood ratio estimation, and [18; 19] define several different approaches to likelihood ratio estimation, including augmenting the likelihood ratio trick with score regression (Rascal, Sally, etc.). [30] introduces modified versions of the cross-entropy loss that show stronger performance under limited training dataset sizes than the typical cross-entropy loss, while [31] compares the estimation of the likelihood ratio via mean square loss with ELU [32] activation, cross-entropy loss with sigmoid activation, and a proposed exponential loss with no activation function on univariate Gaussian distributions. Still other methods use normalizing flows to determine the likelihood ratio by modeling the individual densities [33; 34] or to obviate the need for the likelihood ratio approximation for reweighting distributions [35].
In light of these existing studies, this work serves as a detailed comparison of a wide range of configurations of loss functionals and output parametrizations across datasets including one-dimensional Gaussians, multi-dimensional Gaussians, and simulated high-energy particle physics datasets. We aim to highlight some best practices and serve as a guide for approximating likelihood ratios with neural network classifiers in the wider scientific community, and particularly within the domains of particle physics and astrophysics.
This paper is organized as follows. In Section II, we summarize the theoretical foundation for learning likelihood ratios with neural network classifiers. In Section III, we present a series of studies focused on optimizing likelihood ratio estimation for one-dimensional Gaussian distributions where the true likelihood ratio is exactly known. In Section IV, we extend these studies to multi-dimensional Gaussian distributions. In Section V, we present some more realistic examples using simulated high-energy physics data where the true likelihood ratio is approximated using a Normalizing Flow model [33]. Finally, we summarize our conclusions and recommendations for further studies in Section VI.
## II Learning likelihood ratios
Let the parameters \(\theta_{0}\) and \(\theta_{1}\) define two distributions, \(p(x\mid\theta_{0})\) and \(p(x\mid\theta_{1})\), as described in Section II.A. of [10]. The goal is to determine or approximate the likelihood ratio
\[\mathcal{L}(x)=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})} \tag{6}\]
between the two distributions.
Consider the general loss functional that depends on a learnable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and rescaling functions \(A:\mathbb{R}\rightarrow\mathbb{R}\) and \(B:\mathbb{R}\rightarrow\mathbb{R}\):
\[\begin{split} L[f]=-\int\mathrm{d}x\bigg{(}& p(x\mid\theta_{0})A(f(x))\\ &+p(x\mid\theta_{1})B(f(x))\bigg{)}.\end{split} \tag{7}\]
We can take the functional derivative of the loss functional to show that the extremum can be transformed to obtain the likelihood ratio:
\[\frac{\delta L}{\delta f} =-\frac{\partial}{\partial f}\Big{(}p(x\mid\theta_{0})A(f(x))+p (x\mid\theta_{1})B(f(x))\Big{)} \tag{8}\] \[=-\bigg{(}p(x\mid\theta_{0})A^{\prime}(f(x))+p(x\mid\theta_{1})B^{\prime}(f(x))\bigg{)}\] (9) \[=0\iff-\frac{B^{\prime}(f(x))}{A^{\prime}(f(x))}=\frac{p(x\mid \theta_{0})}{p(x\mid\theta_{1})}=\mathcal{L}(x). \tag{10}\]
Given that \(-B^{\prime}(f)/A^{\prime}(f)\) is a monotonic rescaling of \(f\) and \(L[f]\) is convex, the learned function \(f\) is an optimal classifier.
In this paper, we first consider the four loss functionals defined by the rescaling functions in Table 1. While this is by no means an exhaustive list of all possible loss functionals, it includes a diverse array of different loss configurations. As detailed in Sec. III.3, we also consider generalized forms of two of these four loss functionals.
A neural network parametrizes the learned function \(f\) as \(\phi(z)\), where \(z\) is the pre-activation output of the network and \(\phi\) is the final activation function. For the binary cross entropy (BCE) and mean squared error (MSE) losses,
\[\mathcal{L}(x)=-\frac{B^{\prime}(f)}{A^{\prime}(f)}=\frac{f}{1-f}, \tag{11}\]
so the likelihood ratio is the odds ratio of the learned function. That is, minimizing the BCE and MSE losses defines a classifier that computes
\[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x \mid\theta_{1})}\in(0,1). \tag{12}\]

To parametrize \(f\) such that the likelihood ratio is non-negative, we require that \(\phi:\mathbb{R}\rightarrow(0,1)\).
However, for the maximum likelihood classifier (MLC) and square root (SQR) losses,
\[\mathcal{L}(x)=-\frac{B^{\prime}(f)}{A^{\prime}(f)}=f, \tag{13}\]
so the likelihood ratio is the learned function, without transformation.
\[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})} \tag{14}\]
In this case the loss-minimizing classifier computes the likelihood ratio \(\mathcal{L}(x)\in(0,\infty)\). The requirement on \(\phi\) is that \(\phi:\mathbb{R}\rightarrow(0,\infty)\).
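To make this setup concrete, the four loss functionals of Table 1 can be written as classifier losses in a few lines. The following is a minimal TensorFlow sketch (our own illustration, not the authors' code), using the convention that label \(y=1\) marks samples from \(p(x\mid\theta_{0})\) and \(y=0\) marks samples from \(p(x\mid\theta_{1})\):

```python
import tensorflow as tf

EPS = 1e-7  # numerical guard on logs/roots; an implementation choice of ours

def make_loss(A, B):
    # L[f] = -E_0[A(f)] - E_1[B(f)], estimated with labels y, cf. eq. (7)
    def loss(y_true, f):
        return -tf.reduce_mean(y_true * A(f) + (1.0 - y_true) * B(f))
    return loss

bce_loss = make_loss(lambda f: tf.math.log(f + EPS),
                     lambda f: tf.math.log(1.0 - f + EPS))
mse_loss = make_loss(lambda f: -(1.0 - f) ** 2,
                     lambda f: -f ** 2)
mlc_loss = make_loss(lambda f: tf.math.log(f + EPS),
                     lambda f: 1.0 - f)
sqr_loss = make_loss(lambda f: -1.0 / tf.sqrt(f + EPS),
                     lambda f: -tf.sqrt(f + EPS))

# Recovering the likelihood ratio from the trained classifier output f:
odds_ratio = lambda f: f / (1.0 - f)   # BCE, MSE: eq. (11)
identity = lambda f: f                 # MLC, SQR: eq. (13)
```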
## III Univariate Gaussians
In our first case study, we consider two Gaussian distributions with slightly different means and unit variances: \(X_{0}\sim\text{Normal}(+0.1,1)\) and \(X_{1}\sim\text{Normal}(-0.1,1)\). We also considered univariate Beta and Gamma distributions - these results can be found in Appendix A.
While one could in principle use Boosted Decision Trees (BDTs) instead of neural networks for the classifiers, we found that neural networks outperformed BDTs across a variety of test cases, as shown in Appendix C. All of our classifiers are therefore implemented as neural networks using Keras [36] with a Tensorflow [37] backend and Adam [38] optimizer. Each classifier consists of three hidden layers with 64, 128, and 64 nodes, sequentially. Rectified Linear Unit (ReLU) activation functions are used for the intermediate layers, with the activation for the output layer depending on the loss used to train the neural network and the parametrization being tested. Each of the three hidden layers is followed by a dropout layer with a dropout probability of 10%.
Unless otherwise stated, the networks were trained with 1,000,000 samples (750,000 used for training and 250,000 used for validation). 100,000 separate samples were used to evaluate the networks' performances (in particular, to calculate their mean absolute errors). Each network was trained for up to 100 epochs with a batch size equal to 10% of the training set, as in [10]. If the validation loss did not decrease for 10 consecutive epochs, the training was stopped (early stopping with a patience of 10). No detailed hyperparameter optimization was done.
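A minimal Keras sketch of the architecture described above (the helper name and defaults are ours; any of the loss sketches from Section II can be swapped in):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(final_activation="sigmoid", loss="binary_crossentropy",
                     input_dim=1):
    # Three hidden layers (64, 128, 64) with ReLU, each followed by 10%
    # dropout, as described in the text; the final activation and loss
    # vary with the parametrization under study.
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.1),
        layers.Dense(1, activation=final_activation),
    ])
    model.compile(optimizer="adam", loss=loss)
    return model

# Early stopping with patience 10, as in the text:
early_stop = keras.callbacks.EarlyStopping(patience=10)
```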
### Naive Implementation
#### iii.1.1 Motivation
The naive parametrization for \(\phi(z)\) in the case of the BCE and MSE losses is \(\phi=\sigma\), the logistic function commonly used as the activation for classification tasks. In the case of the MLC and SQR losses, the most common parametrization would be \(\phi=\text{ReLU}\), the rectified linear unit activation. We chose these parametrizations for our naive implementation.
To better understand how these common parametrizations of the classifiers affect their ability to learn the likelihood ratio, we implemented a neural network classifier with each of the four losses and trained them to classify between the two Gaussian distributions. Since the true likelihood ratio is known, we can compare how well each of the four classifiers learns the likelihood ratio function.
#### iii.1.2 Methods
We implemented each classifier using an identical neural network architecture, differing only in the final activation, which acted as either the logistic (for the BCE and MSE classifiers) or ReLU (for the MLC and SQR classifiers) parametrization of the learned function.
We then trained each of the four classifier architectures on the dataset 100 times, using the classifier's corresponding loss functional. Each classifier was evaluated on the interval \((-6,6)\) and transformed into the likelihood ratio over that same interval using the appropriate transformation from equations 11 and 13. We averaged the resulting 100 predictions for the likelihood ratio.
To numerically compare the performances of different classifiers in learning the likelihood ratio, we computed their empirical mean absolute errors over 100,000 samples. For \(\hat{\mathcal{L}}\) the estimated likelihood ratio, the mean absolute error is defined as
\[\text{MAE}[\hat{\mathcal{L}}]=\mathbb{E}\left[\big{|}\mathcal{L}(X)-\hat{ \mathcal{L}}(X)\big{|}\right]. \tag{15}\]
We computed this for each classifier as an empirical average over the 100 different likelihood ratio predictors to
\begin{table}
\begin{tabular}{l c c} Loss Name & \(A(f)\) & \(B(f)\) \\ \hline \hline Binary Cross-Entropy & \(\ln(f)\) & \(\ln(1-f)\) \\ Mean Squared Error & \(-(1-f)^{2}\) & \(-f^{2}\) \\ Maximum Likelihood Classifier & \(\ln(f)\) & \(1-f\) \\ Square Root & \(-\frac{1}{\sqrt{f}}\) & \(-\sqrt{f}\) \\ \end{tabular}
\end{table}
Table 1: The rescaling functions \(A\) and \(B\) used to assemble the four different loss functionals considered.
get a numerical measure of how well each predictor approximated the likelihood ratio.
Next, we examined how varying the amount of data upon which the classifiers were trained affected their performance. In particular, for each loss, we trained 100 classifiers for each \(N\in\{10^{2},10^{3},10^{4},10^{5},10^{6},10^{7}\}\). For each value of \(N\), \(0.75N\) observations were used for training and \(0.25N\) observations were used for validation. The value of \(N=10^{6}\) corresponds to our default sample size. As before, 100,000 samples were used to estimate the MAE for each value of \(N\).
#### iii.1.3 Results
Figure 1 displays the likelihood ratio fits averaged over 100 models for each of the four classifiers, compared against the true likelihood ratio. The largest deviations here are in regions far outside the bulk of the training data, where the models will largely be extrapolating. We are primarily concerned with evaluating the likelihood ratio approximation where the data has good coverage: approximately \(x\in[-3,3]\).
In Fig. 2, we show how the expected error for classifiers trained with each choice of loss functional decreases as the sample size increases.
#### iii.1.4 Discussion
The four losses result in similarly performing fits near \(x=0\); however, the MLC and SQR losses rapidly diverge from the true likelihood ratio in regions for which there is little data coverage. By comparison, the BCE and MSE perform much better, staying within 3% of the true likelihood ratio even in regions far outside the bulk of the data (\(|x|>4\)).
The performance of these classifiers varies with the size of the training dataset \(N\). For relatively small training sample sizes (\(N<1000\)), the scale of the mean absolute error is dominated by the inductive bias present in each activation function: BCE and MSE losses (both using \(\sigma(z)\) activation) are nearly identical in size, while MLC and SQR losses (both using \(\mathrm{ReLU}(z)\) activation) are similarly clustered. As \(N\) increases, the MLC and SQR classifier performances approach those of the BCE and MSE classifiers. However, even for values of \(N\) larger than \(10^{5}\), the SQR classifier's MAE remains at least 0.015 above the average performance of the BCE/MSE classifiers.
### Parametrizing \(f\)
The parametrization of the learned function can be adjusted. In the naive implementation, the BCE and MSE neural networks use a logistic activation function, while the MLC and SQR neural networks use a ReLU activation function.
Let \(z(x)\) be the function that the neural network represents. Then \(f=\phi(z)\) is our classifier, where \(\phi\) is some parametrization of the learned function. In the cases described before, we have either \(f=\sigma(z)\) (for BCE and MSE) or \(f=\mathrm{ReLU}(z)\) (for MLC and SQR).
However, for BCE and MSE, any function \(\phi:\mathbb{R}\rightarrow(0,1)\) will suffice. Two readily available such functions
Figure 1: Average likelihood ratio fits for the four different losses. The MAEs are 0.0083, 0.0081, 0.0150, and 0.0254, for the BCE, MSE, MLC, and SQR likelihood ratio models, respectively.
Figure 2: Mean absolute errors computed for the four different losses trained with increasingly larger sample sizes \(N\).
are hyperbolic tangent and arctangent, adjusted to the appropriate range:
\[f(z) =\frac{1}{2}\left(\tanh z+1\right), \tag{16}\] \[f(z) =\frac{1}{\pi}\left(\arctan z+\frac{\pi}{2}\right). \tag{17}\]
In Fig. 3, we show the likelihood ratio fits, averaged over 100 models, for the logistic, hyperbolic tangent, and arctangent parametrizations of the BCE and MSE classifiers. In both cases, the default logistic parametrization performs the best, followed closely by the hyperbolic tangent parametrization, and followed distantly by the arctangent parametrization. This result is not surprising, as the logistic function is known to be well-suited for classification.
For the MLC and SQR losses, we instead require any function \(\phi:\mathbb{R}\rightarrow(0,\infty)\). While the ReLU function is the default, there are other functions with such ranges, including:
\[f(z) =z^{2}, \tag{18}\] \[f(z) =\exp z. \tag{19}\]
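All of the alternative parametrizations, eqs. (16)-(19), amount to one-line final activations; a minimal TensorFlow sketch (function names are ours), usable as the final_activation in the build_classifier sketch above:

```python
import numpy as np
import tensorflow as tf

# Final activations mapping R -> (0, 1), for the BCE/MSE losses (eqs. 16-17):
def tanh_param(z):
    return 0.5 * (tf.tanh(z) + 1.0)

def arctan_param(z):
    return (tf.atan(z) + np.pi / 2.0) / np.pi

# Final activations mapping R -> (0, inf), for the MLC/SQR losses (eqs. 18-19):
def square_param(z):
    return tf.square(z)

def exp_param(z):
    return tf.exp(z)

# e.g. build_classifier(final_activation=exp_param, loss=mlc_loss)
```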
Figure 4 displays the results of comparing the performances of the MLC and SQR losses in training classifiers with these parametrizations. The relative performances of the three parametrizations are the same for both losses: the exponential parametrization performs remarkably better than the ReLU parametrization, and the square parametrization performs the worst amongst all three.
### Generalized Loss Families
#### iii.3.1 Motivation
The MSE and SQR loss functionals are easily generalizable to a parametric family of loss functionals. While there are several possible parametrizations2 to choose from, we select the following for simplicity: for the MSE loss, we consider a power parameter \(p\in\mathbb{R}\), where \(p=2\) is the default value, and for the SQR loss, we consider a root parameter \(r\in\mathbb{R}\), where \(r=1\) is the default value. This yields the two families of losses presented in Table 2.
Footnote 2: For example, to enforce non-singular behavior at \(r=0\) for SQR, one could consider \(A(f)=(1-f^{-\frac{\pi}{2}})/|r|\) and \(B(f)=(1-f^{\frac{\pi}{2}})/|r|\). Another interesting parametrization is \(A(f)=(f^{q}-1)/q\) and \(B(f)=1-f^{(q+1)}/(q+1)\), which is minimized at \(q=1\).
Since the rescaling functions \(A\) and \(B\) have changed, the likelihood ratio recovered from \(f\) changes as well.
For the \(p\)-MSE losses, for \(p\notin(0,1)\),
\[\mathcal{L}(x) =-\frac{B^{\prime}(f)}{A^{\prime}(f)}=-\frac{-pf^{p-1}\cdot f^{ \prime}}{p(1-f)^{p-1}\cdot f^{\prime}} \tag{20}\] \[=\left(\frac{f}{1-f}\right)^{p-1}. \tag{21}\]
We exclude the case where \(p\in(0,1)\) since the corresponding loss functional is not convex, and as such the likelihood ratio trick no longer works.
And for the \(r\)-SQR losses,
\[\mathcal{L}(x) =-\frac{B^{\prime}(f)}{A^{\prime}(f)}=\frac{\frac{r}{2}f^{\frac {r}{2}-1}\cdot f^{\prime}}{\frac{r}{2}f^{-\frac{r}{2}-1}\cdot f^{\prime}} \tag{22}\] \[=f^{r}. \tag{23}\]
A whole family of losses thus arises from each of the two original losses, with each member still maintaining the property that the function minimizing the corresponding functional recovers the likelihood ratio. In addition to comparing how the four original losses performed against one another, we can compare among the losses within each of these two loss families.
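The two loss families of Table 2 are again a few lines on top of the classifier output; a minimal TensorFlow sketch (our own illustration), together with the likelihood ratio transformations of eqs. (21) and (23):

```python
import tensorflow as tf

EPS = 1e-7  # numerical guard; an implementation choice of ours

def p_mse_loss(p):
    # p-MSE family (Table 2): A(f) = -(1-f)^p, B(f) = -f^p
    def loss(y_true, f):
        return tf.reduce_mean(y_true * (1.0 - f) ** p + (1.0 - y_true) * f ** p)
    return loss

def r_sqr_loss(r):
    # r-SQR family (Table 2): A(f) = -f^(-r/2), B(f) = -f^(r/2)
    def loss(y_true, f):
        return tf.reduce_mean(y_true * (f + EPS) ** (-r / 2.0)
                              + (1.0 - y_true) * (f + EPS) ** (r / 2.0))
    return loss

def lr_from_p_mse(f, p):   # eq. (21)
    return (f / (1.0 - f)) ** (p - 1.0)

def lr_from_r_sqr(f, r):   # eq. (23)
    return f ** r
```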
#### iii.3.2 Methods
Since we were working over an uncountably infinite set of loss functionals, we constrained our investigation to the interval \([-2,2]\). For each value of \(p\) in this scan, we trained 20 logistically-parametrized models on the corresponding \(p\)-MSE loss functional. We then averaged the mean absolute errors of the 20 models together.
We did the same for values of \(r\) in the interval \([-2,2]\) as well; in that case, the models were parametrized with the exponential activation function instead.
We expect that near \(p^{*}=1\) and \(r^{*}=0\), where the generalized loss functionals will resemble the MAE loss, the figure-of-merit of MAE will likely be minimized, too. Due to this intrinsic relationship between the choice of loss functional and figure-of-merit, we also considered two additional figures-of-merit for evaluating these scans: the Mean Ratio and the Null Statistic, defined as:
\begin{table}
\begin{tabular}{c c c} Loss Name & \(A(f)\) & \(B(f)\) \\ \hline \hline \(p\)-MSE & \(-(1-f)^{p}\) & \(-f^{p}\) \\ \(r\)-SQR & \(-f^{-\frac{r}{2}}\) & \(-f^{\frac{r}{2}}\) \\ \end{tabular}
\end{table}
Table 2: The generalization of the MSE and SQR loss functionals to entire families of losses. Values of \(p=2\) and \(r=1\) correspond to the original definitions of the loss functionals.
Figure 4: Parametrizations of \(f\) for the MLC and SQR losses. (a) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the MLC loss, with mean absolute errors 0.0148, 0.0684, and 0.0083, respectively. (b) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the SQR loss, with mean absolute errors 0.0367, 0.6756, and 0.0075, respectively.
Figure 3: Parametrizations of \(f\) for the BCE and MSE losses. (a) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the BCE loss, with mean absolute errors 0.0080, 0.01240, and 0.0092, respectively. (b) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the MSE loss, with mean absolute errors 0.0084, 0.0127, and 0.0094, respectively.
\[\text{Mean Ratio}[\hat{\mathcal{L}}]=E[\hat{\mathcal{L}}(X)/\mathcal{L}(X)] \tag{24}\]
\[\text{Null Statistic}[\hat{\mathcal{L}}]=|E_{0}[\mathcal{L}(X)]-E_{0}[\hat{ \mathcal{L}}(X)]| \tag{25}\]
We found that the overall trends reported here using MAE were similar across these alternative figures-of-merit, though the trends were less dramatic when we used the Mean Ratio figure-of-merit.
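All three figures-of-merit are simple sample averages; a minimal numpy sketch (function names are ours), where lr_hat and lr_true are callables and X is a sample from the evaluation distribution (drawn from \(p(x\mid\theta_{0})\) for the null statistic):

```python
import numpy as np

def mae(lr_hat, lr_true, X):              # eq. (15)
    return np.mean(np.abs(lr_true(X) - lr_hat(X)))

def mean_ratio(lr_hat, lr_true, X):       # eq. (24)
    return np.mean(lr_hat(X) / lr_true(X))

def null_statistic(lr_hat, lr_true, X0):  # eq. (25); X0 ~ p(x | theta_0)
    return np.abs(np.mean(lr_true(X0)) - np.mean(lr_hat(X0)))
```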
#### iii.3.3 Results
In Figure 5, we show the performance of the classifiers trained by these losses when modifying their power and root parameters. The values \(p^{*}\) and \(r^{*}\) minimizing the MAE were \(p^{*}=1.08,1.24\) (with \(p^{*}=1.24\) having a similar performance to that of \(p^{*}=1.08\) while being more numerically stable) and \(r^{*}=0.018\).
#### iii.3.4 Discussion
In 5(a), we observe vertical features for \(p\in(0,1)\). This is to be expected, as the likelihood ratio trick does not apply in the range where the corresponding loss functional is non-convex. Similarly, the vertical feature in 5(b) is due to the fact that for \(r=0\), our loss functional is constant (\(L[f]=1\)), and thus it is not strictly convex; therefore the likelihood ratio trick again fails.
Values of \(p\) slightly less than \(0\) or slightly greater than \(1\) resulted in the smallest mean absolute errors, while values of \(r\) close to \(0\) resulted in the smallest mean absolute errors.
This result was further investigated in Section III.5 in a simple, two-dimensional classifier model.
### Optimized Implementation
Altering the parametrization of the learned function \(f\) or using a more generalized loss functional yielded considerable increases in performance from the initial parametrizations and loss functionals.
In Figs. 6 and 7, we chose the best-performing parametrization for each loss (logistic for BCE and MSE; exponential for MLC and SQR), and, for the MSE and SQR, chose the best-performing loss functional from each loss family (\(p^{*}=1.24\) for MSE and \(r^{*}=0.018\) for SQR), and trained classifiers with each "optimized" parametrization and loss. This was done \(100\) times for each parametrization/loss, and the resulting likelihood ratio models were averaged.
In the naive implementation, the BCE and MSE models performed the best, while the SQR model had an
Figure 5: (a) The mean absolute errors averaged over models trained on the generalized MSE loss family for the logistic parametrization. The mean absolute error is minimized at \(p^{*}=1.08\), but we choose the second-lowest value (\(p^{*}=1.24\)) for stability, i.e. avoiding the steep increase in MAE near \(p=1\). The arrow indicates the typical choice of \(p=2\) for MSE loss. (b) The mean absolute errors averaged over models trained on the generalized SQR loss family for the exponential parametrization. The mean absolute error was smallest at \(r^{*}=0.018\). The arrow indicates the typical choice of \(r=1\) for SQR loss.
average error at least 0.015 larger than the other losses, even for large \(N\). In the optimized implementation with \(N=10^{6}\), all four loss functionals perform approximately equally, as shown in Figure 6. Figure 7 shows that for \(N>10^{5}\), the four optimized loss functionals continue to perform approximately equally well, but the new loss functionals \(p^{*}\)-MSE and \(r^{*}\)-SQR perform significantly better, reaching mean absolute errors about 2 to 4 times smaller than the other losses. The strong influence of the inductive bias of the activation function is also mitigated in the optimized implementation, as the losses are no longer grouped by activation function.
### Simple Classifiers
#### iii.5.1 Motivation
To better understand the behavior of the generalized loss models in Sec. III.3, we examined a much simpler classifier than the multi-layer fully-connected network used to train the models in this paper. This allows us to visualize the dynamics of each model, using numerical integration to compute the loss. The model is
\[f(x)=\phi(ax+b), \tag{26}\]
where \(a\) and \(b\) are the two weights of the model and \(\phi\) is its activation.
In the case of the \(p\)-MSE model with the logistic parametrization,
\[f_{\text{MSE}}(x) =\sigma(ax+b) \tag{27}\] \[\hat{\mathcal{L}}_{\text{MSE}}(x) =\left(\frac{\sigma(ax+b)}{1-\sigma(ax+b)}\right)^{p-1}=\left(e^{ ax+b}\right)^{p-1}. \tag{28}\]
The exponentially-parametrized \(r\)-SQR model is
\[f_{\text{SQR}}(x) =e^{ax+b}\] \[\hat{\mathcal{L}}_{\text{SQR}}(x) =\left(e^{ax+b}\right)^{r}.\]
As a result, only the analysis of one of the two models is necessary, since the resulting likelihood ratio model is the same for \(r=p-1\). In particular, we will analyze the \(r\)-SQR model, keeping in mind that for the model MAEs, the results will be identical for \(p=r+1\).
We continue working with \(X_{0}\sim\text{Normal}(+0.1,1)\) and \(X_{1}\sim\text{Normal}(-0.1,1)\). The exact likelihood ratio is given by
\[\mathcal{L}(x)=e^{0.2x}, \tag{29}\]
so the two-dimensional classifier will yield an exact solution at
\[a^{*}=\frac{0.2}{r},\qquad b^{*}=0. \tag{30}\]
#### iii.5.2 Methods
To better understand how the parameter \(r\) affects the optimization landscape, we first created a grid with fineness 0.005 of \((a,b)\) pairs in the box \([-1,1]^{2}\):
\[B=\frac{1}{200}\mathbb{Z}^{2}\cap[-1,1]^{2}. \tag{31}\]
Figure 6: Average likelihood ratio fits for the different loss categories. The MAEs are 0.0079, 0.0045, 0.0077, 0.0034, 0.0046, and 0.0034, for the BCE, MSE, MLC, SQR, \(p^{*}\)-MSE, and \(r^{*}\)-SQR likelihood ratio models, respectively.
Figure 7: Mean absolute errors computed for the different loss categories trained with increasingly larger samples.
The loss functional for a particular value of \(r\), \(L_{r}\), is given by

\[L_{r}[f]=\int\mathrm{d}x\bigg{(}p(x\mid\theta_{0})f(x)^{-\frac{r}{2}}+p(x\mid \theta_{1})f(x)^{\frac{r}{2}}\bigg{)} \tag{32}\]
Then, we visualized the loss landscape as the contour plot of \(L_{r}\) over the set of classifiers \(F=\{(e^{ax+b})^{r}:(a,b)\in[-1,1]^{2}\}\) for different values of \(r\). The loss functional \(L_{r}\) was computed via numerical integration.
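A minimal numpy/scipy sketch of this numerical integration for the two-parameter classifier \(f(x)=e^{ax+b}\) (the quadrature grid and integration range are our own choices):

```python
import numpy as np
from scipy import stats

# Quadrature grid for the integral in eq. (32)
x = np.linspace(-10.0, 10.0, 4001)
p0 = stats.norm.pdf(x, +0.1, 1.0)
p1 = stats.norm.pdf(x, -0.1, 1.0)

def L_r(a, b, r):
    # Loss of the classifier f(x) = exp(a*x + b), for which
    # f^(+-r/2) = exp(+-(r/2)(a*x + b)); trapezoid-rule integration.
    z = a * x + b
    return np.trapz(p0 * np.exp(-0.5 * r * z) + p1 * np.exp(+0.5 * r * z), x)

# The exact optimum sits at (a*, b*) = (0.2/r, 0), cf. eq. (30):
r = 1.0
print(L_r(0.2 / r, 0.0, r))   # minimum of the landscape
print(L_r(1.0, 0.5, r))       # larger loss away from the optimum
```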
#### iv.2.3 Results
Figure 8 displays in the first row the resulting contour plots for \(r\in\{0.1,0.25,0.5,1\}\). Drawn over each plot, in white, are level sets of the loss at increments of \(0.02\).
#### iv.2.4 Discussion
While the actual values of the losses are not comparable between different values of \(r\), since each value of \(r\) corresponds to a different loss functional, it is clear that the loss functional becomes increasingly steep as \(r\) increases. As expected, as \(r\to\infty\), \(a^{*}\to 0\), and as \(r\to 0\), \(a^{*}\to\infty\). In particular, the loss landscape of \(r=0.1\) is shaped like an extremely shallow pool, indicating that there is a large space of classifiers with close to optimal performance. The minimum value \(a^{*}=2\) is not visible in the box, since small values of \(r\) correspond to large values of \(a^{*}\). On the other hand, the loss landscape of \(r=1\) is much steeper, with a minimum at \(a^{*}=0.2\) around which the landscape quickly increases to high loss values.
However, since loss values between values of \(r\) are incomparable, it is unclear how the loss reflects the actual performance of the likelihood ratio model. In particular, given two classifiers \(f\) and \(g\) with \(L_{r}[f]<L_{s}[g]\), \(r\neq s\), we cannot be sure that \(f\) will yield a better likelihood ratio model than \(g\), since \(L_{r}\) and \(L_{s}\) are different loss functionals.
To this end, we visualized the error landscape as the contour plot of MAE over the same set of classifiers \(F=\{(e^{ax+b})^{r}:(a,b)\in[-1,1]^{2}\}\). Since the MAE is computed only from the expected absolute difference between a predicted likelihood ratio \(\hat{\mathcal{L}}\) and the true likelihood ratio \(\mathcal{L}\), we can compare across different values of \(r\) to see which values of \(r\) result in easily obtainable well-performing classifiers.
The second row of Figure 8 displays these error contour plots; indicated in white are the level sets of the error at increments of \(0.05\). For each of these, we can see that the error is zero at \((a^{*},0)\) and increases radially outwards from the minimum. The shapes of the error landscapes reflect the true nature of the performance of the classifiers; for \(r=0.1\), we still have a shallow pool of many well-performing classifiers, whereas for \(r=1\), there is a small set of well-performing classifiers around which the classifiers begin to perform much worse. That is to say, for small values of \(r\), there are many classifiers that perform well at modeling the likelihood ratio. It may be harder to find the true minimum, but most classifiers have comparable performance. On the other hand, for large values of \(r\), the loss landscape is steep with few
Figure 8: Contour plots of the losses and MAEs of the two-dimensional SQR classifier \(f(x)=\exp{(ax+b)}\) over \([-1,1]^{2}\). The first row plots the value of the loss functional \(L[f]\), obtained through numerical integration, on a grid of \((a,b)\) pairs over \([-1,1]^{2}\) for various values of \(r\), with contours curves at increments of \(0.02\). The second row plots an empirically computed absolute error, \(\mathrm{MAE}[f]\), over the same grid of points, for the same values of \(r\), with contour curves at increments of \(0.05\).
classifiers with decent performance. Slight perturbations around the minimum correspond to large errors.
## IV Multivariate Gaussians
### Parametrizing \(f\)
#### iv.1.1 Motivation
A natural extension from the univariate Gaussians analysis in the previous section would be to multivariate Gaussians, wherein the setting is complicated by the higher dimensions, but we still have knowledge of the true likelihood ratio. To this end, we first established five different case studies of different Gaussian arrangements to examine in our multivariate analysis.
The first case study, labeled "Vertical," corresponds to independent Gaussians with variance 1, and means at a distance of 0.2, as in the univariate case.
In this case, the background distribution is more likely over the right half-plane, whereas the signal distribution is more likely over the left half-plane.
\[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{33}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{34}\]
The next case study, "Slant," simply rotates the vertical case study by \(45^{\circ}\). This results in the same likelihood ratio as the vertical case, except rotated by \(45^{\circ}\).
\[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+\frac{0.1}{\sqrt{2}}\\ -\frac{0.1}{\sqrt{2}}\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{35}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-\frac{0.1}{\sqrt{2}}\\ +\frac{0.1}{\sqrt{2}}\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{36}\]
In "Circle," we consider the case where the background distribution has low variance in comparison to the signal distribution. As a result, values close to the origin are more likely to be from the background, whereas values far from the origin are more likely to be from the signal. This likelihood structure is visualized in Figure 9.
\[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{37}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-0.1\\ 0\end{bmatrix},\begin{bmatrix}2&0\\ 0&2\end{bmatrix}\right) \tag{38}\]
The "Hyperbola" case study looks at the case when both the background and the signal have different variances in each coordinate. This results in a hyperbola-like likelihood structure, as visualized in Figure 9.
#### iv.1.2 Methods
The methodology was similar to that of Sec. III. For each case study, we implemented all four classifiers with each of the three parametrizations. Each resulting classifier architecture was trained 100 times to minimize the corresponding loss functional. We evaluated each classifier on the box \([-2,2]^{2}\), and we averaged the resulting 100 predictions for the likelihood ratio over that
Figure 9: Two of the five multivariate Gaussian cases we examined, as well as some of the likelihood ratio model fits. The first row corresponds to the circle case, while the second row corresponds to the hyperbola case. The first column plots the likelihood structure of each case; red regions are regions where \(\mathcal{L}(x)\leq 1\), and blue regions are regions where \(\mathcal{L}(x)>1\). The second and third columns display contour plots of the mean absolute error for some models trained with the various losses to learn the likelihood ratios. The plot is suggestively colored to show how the structure of the data corresponds to the structure in the likelihood ratio models.
box. We used the MAE as the performance metric, again as an empirical average over 100,000 samples.
#### iv.1.3 Results
The resulting MAEs are shown in Figure 10. Some contour plots of the mean absolute errors of some of the different parametrizations are presented in Figure 9; the remaining contour plots are provided in Appendix B.
#### iv.1.4 Discussion
In the univariate case, we found that the logistic and exponential parametrizations were uniformly the best parametrizations for the BCE/MSE and MLC/SQR losses, respectively. This trend bears out for the most part in this higher-dimensional case. In almost all of the case studies, the logistic and exponential parametrizations perform the best; in the cases where they don't have the smallest MAE, the difference between their MAE and the best MAE is not large.
Unlike in Section III, once the optimal parametrizations are chosen for each of the four loss functionals, some differences persist in the performance of each loss. Across all five cases, the SQR loss yields the largest errors. For the Vertical and Slant cases, all four optimized loss functionals perform equally well, overlapping within one standard deviation. For the remaining cases (Checker, Circle, and Hyperbola), the optimized MLC loss with exponential parametrization performs significantly better than the other three optimized losses.
It is striking to note that the MLC loss with exponential parametrization emerges as the best-performing loss configuration in some of the more complex datasets considered for these studies. The typical choice for a neural network classifier loss is arguably BCE. For the purposes of the likelihood ratio trick, however, we are interested in reinterpreting the classifier output to approximate the likelihood ratio, so it is possible that optimizing for raw classification performance alone is misguided. The MLC loss has the advantage of explicitly relating the signal and background probability distributions; in particular, the MLC loss can be intuitively understood to maximize the likelihood of \(\hat{\mathcal{L}}(x)\) with respect to \(p(x\mid\theta_{0})\) subject to the constraint that \(\hat{\mathcal{L}}(x)p(x\mid\theta_{1})\) is a probability distribution [10]. Therefore, it may be a more natural choice for this particular application than the default BCE loss.
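This interpretation can be made precise with a short variational argument (a standard derivation we add for clarity; it is not taken verbatim from [10]). Maximizing \(\int\mathrm{d}x\,p(x\mid\theta_{0})\log\hat{\mathcal{L}}(x)\) subject to \(\int\mathrm{d}x\,\hat{\mathcal{L}}(x)\,p(x\mid\theta_{1})=1\) with a Lagrange multiplier \(\lambda\) gives the stationarity condition

\[\frac{p(x\mid\theta_{0})}{\hat{\mathcal{L}}(x)}-\lambda\,p(x\mid\theta_{1})=0\quad\Rightarrow\quad\hat{\mathcal{L}}(x)=\frac{1}{\lambda}\,\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})},\]

and the normalization constraint then forces \(\lambda=1\), so the constrained optimum is exactly \(\hat{\mathcal{L}}(x)=\mathcal{L}(x)\).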
### Generalized Loss Families
#### iv.2.1 Motivation
By treating the square in the MSE loss and the root in the SQR loss as parameters \(p\) and \(r\), respectively, we were able to generalize those loss functionals to entire continuous parametric families of losses. We saw in the univariate case that we can optimize over \(p\) and \(r\), and were even able to see, through examining the landscapes of the different loss functionals in a simple case, which values of \(p\) and \(r\) correspond to "better" loss functionals.
We now continue this investigation in the situation of multivariate Gaussians to get a sense of how much the trend we observe continues into more complex situations.
#### iv.2.2 Methods
We used the same methods as in Section III.3; in this case, however, we worked with the five different multivariate Gaussian cases rather than the single univariate Gaussian case.
#### iv.2.3 Results
In Table 3, we list the optimal values \(p^{*}\) and \(r^{*}\) in each of the five cases. An overall comparison of the four loss functionals with optimized parameterizations alongside \(p^{*}\)-MSE and \(r^{*}\)-SQR losses is shown in Figure 11. The plots of the MAE for the various values of \(p\) and \(r\) are presented in Appendix B.
#### iv.2.4 Discussion
The simpler multivariate cases considered (Vertical, Slant) result in very similar values to those found in the univariate Gaussian case: \(p^{*}\) and \(r^{*}\) are close to \(1.24\) and \(0.018\), respectively.
In the more complex multivariate cases (Circle, Hyperbola, and Checker), the optimal values of \(p^{*}\) are also between \(1\) and \(2\), with the exception of Hyperbola, for which \(p^{*}=-0.44\). It is possible that an equally-performing
\begin{table}
\begin{tabular}{l c c} Case & \(p^{*}\) & \(r^{*}\) \\ \hline \hline Vertical & \(1.12\) & \(0.018\) \\ Slant & \(1.16\) & \(0.018\) \\ Circle & \(1.28\) & \(-0.1\) \\ Hyperbola & \(-0.44\) & \(-0.2\) \\ Checker & \(1.6\) & \(-0.1\) \\ \end{tabular}
\end{table}
Table 3: The optimal values for \(p^{*}\) and \(r^{*}\) for the five different multivariate Gaussian cases. Note that in the univariate Gaussian case, the optimal values chosen were \(p^{*}=1.24\) and \(r^{*}=0.018\).
value of \(p\) larger than 2 could also exist, but our studies did not scan far enough to probe the asymptotic behavior in that direction. The optimal values \(r^{*}\) for these cases are all negative. However, it is worth noting that the MAE landscapes for the \(r\)-SQR loss are symmetric, and the corresponding MAEs for \(|r^{*}|\) are small, so the signs of these values are likely due to random chance. For these cases, the optimal values of \(|r^{*}|\) are less than 1, as in the univariate case, but very small values of \(r^{*}\) (\(|r^{*}|<0.01\)) are too numerically unstable to consistently yield useful outputs.
Overall, as shown in Figure 11, if one chooses only from the four loss functionals as defined in Table 1, but with optimized parametrizations, all four show equally good performance for the simpler cases (Vertical, Slant), but the MLC loss is significantly better than the other three choices in the more complex cases (Circle, Hyperbola, Checker). However, if one chooses \(p^{*}\) and \(r^{*}\) by scanning along these generalized loss families, the improvements are immense: for all cases except Hyperbola, the optimized \(r^{*}\)-SQR MAE is between 30% and 50% smaller than the optimized MLC MAE.
## V Physics data
### Parametrizing \(f\)
#### v.1.1 Motivation
In our final case study, we extended our comparison of classifier parametrizations and loss functionals to simulated high-energy particle physics datasets [39]. While there are a number of observables present in the datasets, in our analysis we considered only the leading jet transverse momentum (\(p_{T}\)), rapidity (\(y\)), azimuthal angle (\(\phi\)), and invariant mass (\(m\)).
The datasets consist of particle-level and detector-level simulated QCD jets originating from \(Z\) + jets events. \(Z\) + jets events from proton-proton collisions generated at \(\sqrt{s}=14\) TeV were simulated using Herwig 7.1.5 [40, 41, 42] with the default tune and Pythia 8.243 [43, 44, 45] tune 21 [46] (ATLAS A14 central tune with NNPDF2.3LO). We call the Pythia simulation "Monte Carlo" and Herwig "data". For the generated events, the \(p_{T}\) of the \(Z\) boson is required to be larger than 150 GeV. Events then
Figure 10: Mean absolute errors are compared for the four different losses considered (binary cross-entropy, mean squared error, maximum likelihood classifier, and square root), each with 3 different activation functions. For each loss, five different multivariate normal cases are studied: “Vertical”, “Slant”, “Circle”, “Checker”, and “Hyperbola”. For each case study, the best performing parametrization for each loss is shown in either red or blue. Errors represent the standard deviation across 100 independent model trainings.
Figure 11: Mean absolute errors are compared for the four different losses considered (binary cross-entropy, mean squared error, maximum likelihood classifier, and square root), each with their respective optimal parametrizations. For each loss, five different multivariate normal cases are studied: “Vertical”, “Slant”, “Circle”, “Checker”, and “Hyperbola”. Errors represent the standard deviation across 100 independent model trainings.
Figure 12: Histograms of the four jet features, transverse momentum \(p_{T}\) [GeV], rapidity \(y\), azimuthal angle \(\phi\), and mass \(m\) [GeV], for the Monte Carlo and data \(Z\) + jet events. Here, we treat the Monte Carlo as the signal and the data as the background.
are passed through the Delphes 3.4.2 fast detector simulation [47] of the CMS detector. The datasets consist of the highest-momentum jet from \(Z\) boson events with \(p_{T}\geq 200\) GeV. This process ultimately yields about 1.6 million jets for each simulation. Figure 12 displays histograms of the four observables for both the "Monte Carlo" and the "data", and Figure 13 shows the corresponding pairwise correlations in a corner plot.
In this more complex setting, we no longer have access to the true likelihood ratio, as we do not know the underlying distributions generating these datasets. To allow for a more complete comparison of the different parametrizations' ability to model the "true" likelihood ratio, we therefore fit Normalizing Flows [48] to each sample. These flows estimate the generating distribution of the samples, and thus allow us to compute "true" likelihood ratios for these datasets.
#### v.1.2 Methods
We first trained a FFJORD Normalizing Flow [48] for each of the "Monte Carlo" and "data" simulated samples. The models were tuned by comparing the performance of a classifier on distinguishing between data generated from
Figure 13: A corner plot of the four jet features. Blue contours correspond to the particle-level data and red contours correspond to the detector-level data.
the flow and the true data. We then used the flows as proxies for the underlying distributions of these datasets, creating new proxy datasets by sampling from the flows.
The methodology following this point was again similar to what has been established before in Sec. III. We implemented all four classifiers with each of the three parametrizations on the dataset, training 100 independent copies of each classifier architecture to minimize the corresponding loss functional. We used the MAE as the performance metric, computed as in Equation 15. In particular, it was computed with the true likelihood ratio \(\mathcal{L}(X)\) from the flows and the model likelihood ratio \(\hat{\mathcal{L}}(X)\) averaged over the 100 copies of each classifier. As before, the MAE was computed as an empirical average over 100,000 samples.
#### v.1.3 Results
The distributions of the "data" and "Monte Carlo" learned by the flows are plotted alongside the empirical distributions in Figure 12. To quantify the quality of the flows' learned distributions, we trained classifiers to try to distinguish between proxy datasets sampled from the flows and the original datasets; for the "Monte Carlo", the AUC was 0.54, and for the "data", the AUC was 0.56. These AUCs close to 0.5 indicate that the classifier has difficulty distinguishing between these two distributions, and therefore that the flows have performed reasonably well at reflecting the target distributions.
To visualize the performance of these classifiers, we performed a scan of the likelihood ratio along \(\phi\). We fixed the three other observables at values near their medians (\(p_{T}=221.8\) GeV, \(y=0.0\), \(m=16.0\) GeV) and compared the flow likelihood ratio to the likelihood ratio modeled from the variously parametrized classifiers trained on the different loss functionals.
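A sketch of this scan is given below; `flow_mc` and `flow_data` stand for the two fitted flows, and the `log_prob` density interface, the feature ordering \((p_{T},y,\phi,m)\), and the helper names are our assumptions rather than the original code.

```python
import numpy as np

phi_grid = np.linspace(-np.pi, np.pi, 200)
pt, y, m = 221.8, 0.0, 16.0  # other observables fixed near their medians

# Feature matrix with only phi varying (assumed column order: pT, y, phi, m)
x_scan = np.column_stack([np.full_like(phi_grid, pt),
                          np.full_like(phi_grid, y),
                          phi_grid,
                          np.full_like(phi_grid, m)])

# "True" likelihood ratio from the flow densities
flow_lr = np.exp(flow_mc.log_prob(x_scan) - flow_data.log_prob(x_scan))
# model_lr = averaged_classifier_lr(x_scan)  # averaged over the 100 trainings
```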
The scans for the logistic, arctangent, and hyperbolic tangent parametrizations of the likelihood ratio, trained with the BCE and MSE losses, are displayed in Figure 14. The analogous scans for the ReLU, square, and exponential parametrizations with the MLC and SQR losses are displayed in Figure 15.
The resulting MAEs are shown in Figure 16.
#### v.1.4 Discussion
The trend observed in the Gaussian studies, that the logistic and exponential parametrizations were the best for the BCE/MSE and MLC/SQR losses, respectively, also holds in the physics case. Of the four optimized loss functionals, the MLC loss with exponential parametrization performs distinctly better than the other three loss configurations, as shown in Figure 16.
### Generalized Loss Families
#### v.2.1 Motivation
In the previous studies with univariate and multivariate Gaussians, we found that the performances of likelihood ratio models trained with losses from the generalized families of \(p\)-MSE and \(r\)-SQR losses followed a similar structure across various cases. In order to examine the robustness of this observed structure, we repeated the same study with the high energy particle physics dataset.
#### v.2.2 Methods
The methodology for this study was similar to that of the previous studies. We scanned over values of \(p\) and \(r\) in the interval \([-2,2]\). For each increment of \(p\), we trained 20 models with the \(p\)-MSE loss functional defined by that value of \(p\) and averaged together their mean absolute errors. Likewise, for each increment of \(r\), we trained 20 models with the \(r\)-SQR loss functional defined by that value of \(r\) and averaged together their mean absolute errors. We parametrized the \(p\)-MSE classifiers with logistic activation functions and the \(r\)-SQR classifiers with exponential activation functions. All models were trained on the same set of one million samples from the flows fit to the distributions of the physics data.
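A compact sketch of this scan loop is shown below; `train_model` and `evaluate_mae` are hypothetical stand-ins for the training and evaluation code described above, and the scan increment is our assumption.

```python
import numpy as np

scan_values = np.round(np.arange(-2.0, 2.01, 0.1), 2)  # assumed step size
results = {"p-MSE": {}, "r-SQR": {}}

for p in scan_values:
    maes = [evaluate_mae(train_model(loss="p-MSE", exponent=p,
                                     activation="logistic"))
            for _ in range(20)]  # 20 trainings per increment
    results["p-MSE"][p] = np.mean(maes)

for r in scan_values:
    maes = [evaluate_mae(train_model(loss="r-SQR", exponent=r,
                                     activation="exponential"))
            for _ in range(20)]
    results["r-SQR"][r] = np.mean(maes)

p_star = min(results["p-MSE"], key=results["p-MSE"].get)
r_star = min(results["r-SQR"], key=results["r-SQR"].get)
```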
#### v.2.3 Results
The plots of the MAEs of the likelihood ratio models for the loss functionals are provided in Figure 17. As before, we observe vertical features in the plots when the loss functional is no longer strictly convex (\(p\in(0,1)\) and \(r=0\)). The MAE was minimized at \(p^{*}=-0.28\) and \(r^{*}=-1.3\). A comparison of the \(p^{*}\)-MSE and \(r^{*}\)-SQR losses with these chosen values alongside the other four losses with optimized parametrizations is shown in Figure 16.
#### v.2.4 Discussion
The shapes of the \(p\) and \(r\) scans are broadly similar to those observed in the previous case studies (e.g. Figure 5); however, since the MAE landscape is flat away from the non-convex regions (\(p\in(0,1)\) for \(p\)-MSE and \(r=0\) for \(r\)-SQR), the best choices \(p^{*}=-0.28\) and \(r^{*}=-1.3\) perform about the same as the unoptimized choices of \(p=2\) and \(r=1\). In this particular case, the evidence does not suggest that changing \(p^{*}\) or \(r^{*}\) from their default values of 2 and 1, respectively, would yield a significant benefit in reducing mean absolute error. It is possible that better values exist beyond the ranges \(r,p\in[-2,2]\) considered here. Overall, as shown
in Figure 16, the best-performing loss is MLC with exponential parameterization.
## VI Conclusions
The likelihood ratio \(\mathcal{L}(x)=\frac{p(x|\theta_{0})}{p(x|\theta_{1})}\) is a statistical quantity essential for characterizing whether an experimental dataset \(x\) better supports one of two hypotheses defined by sets of parameters \(\theta_{0}\) and \(\theta_{1}\). It is used beyond hypothesis testing, too, for applications such as reweighting
Figure 14: The performance of different parametrizations of \(f\) for the BCE and MSE losses for the azimuthal angle \(\phi\). (a) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the BCE loss. (b) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the MSE loss.
Figure 15: The performance of different parametrizations of \(f\) for the MLC and SQR losses for the azimuthal angle \(\phi\). (a) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the MLC loss. (b) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the SQR loss.
Figure 16: The MAEs are compared for the Pythia/Herwig + Delphes particle physics jet datasets [39] for the four different losses considered. Errors represent the standard deviation across 100 independent model trainings. In **(a)**, each loss is shown with 3 different parametrizations. In **(b)**, the best-performing parametrization is chosen for each loss, and these optimized losses are then directly compared.
Figure 17: (a) The mean absolute errors averaged over logistically-parametrized models trained with the generalized MSE loss family. The mean absolute error is minimized at \(p^{*}=-0.28\); however, the MAE landscape here is rather flat. The arrow indicates the typical choice of \(p=2\) for the standard MSE loss. (b) The mean absolute errors averaged over exponentially-parametrized models trained with the generalized SQR loss family. The mean absolute error was smallest at \(r^{*}=-1.3\). The arrow indicates the typical choice of \(r=1\) for the standard SQR loss.
high-dimensional distributions for background estimation and more. In contexts where calculating the likelihood ratio is impossible or very tedious, researchers can use the "likelihood ratio trick", leveraging a neural network classifier to approximate the likelihood ratio.
Often, the likelihood ratio trick is implemented by minimizing a typical choice of loss functional for a classifier: the binary cross-entropy loss. However, many loss functionals satisfy the likelihood ratio trick setup.
In this paper, we presented detailed studies comparing four choices of loss functionals: binary cross-entropy (BCE), mean squared error (MSE), maximum likelihood classifier (MLC), and square root (SQR). For each of these four loss functionals, we also explored a suite of choices of final activation functions for parametrizing the neural network output. For the MSE and SQR losses, we performed a scan along the exponential parameter (replacing \(2\to p\) for MSE and replacing \(\frac{1}{2}\rightarrow\frac{r}{2}\) for SQR) to understand the behavior of these generalized families of loss functionals.
As a result of these studies, we present the following recommendations for optimized implementations of each of these loss functionals in the likelihood ratio trick:
\begin{table}
\begin{tabular}{l c} Loss & Activation \\ \hline \hline Binary Cross-Entropy (BCE) & \(\sigma(z)\) \\ Mean Squared Error (MSE) & \(\sigma(z)\) \\ Maximum Likelihood Classifier (MLC) & \(\exp(z)\) \\ Square Root (SQR) & \(\exp(z)\) \\ \end{tabular}
\end{table}
For MLC and SQR losses, we find that choosing small, nonzero values of \(r\) (and, correspondingly, \(p=r+1\)) tend to result in smaller mean absolute errors than the default choices (\(r=1\) and \(p=2\)) for these loss functionals. As we illustrate by mapping the loss landscape of a simple neural network, this is because smaller values of \(r\) can yield shallower loss landscapes where many values are nearly optimal, while larger values of \(r\) have steeper landscapes for models to traverse, with a much smaller proportion of the phase space corresponding to optimum values of the loss.
The loss landscape will vary with each new application, so we recommend that future researchers perform a scan along \(p\) or \(r\) to find an optimum value as part of hyperparameter optimization. If a scan over \(p\) or \(r\) is not feasible, we recommend comparing the default selections (i.e. \(p=2\) and \(r=1\)) with our alternative recommendations derived from the average optimum values across our various trials \(p^{*}=1.15\) and \(r^{*}=0.1\), or:
\[L_{\text{MSE}^{*}}[f]=-\int\mathrm{d}x\,\Big((1-f)^{1.15}\,p(x\mid\theta_{0})+f^{1.15}\,p(x\mid\theta_{1})\Big)\] \[L_{\text{SQR}^{*}}[f]=-\int\mathrm{d}x\,\Big(f^{-0.05}\,p(x\mid\theta_{0})+f^{0.05}\,p(x\mid\theta_{1})\Big).\]
Across the majority of the various datasets we considered, these choices tend to have significantly smaller mean absolute errors than the default selections while maintaining good numerical stability across multiple trainings. An interesting future investigation would be to consider how to dynamically optimize \(p\) and \(r\) as learned parameters during training.
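For concreteness, a direct Monte-Carlo transcription of the two displayed functionals is sketched below (our own rendering, not the paper's code; the overall sign follows the convention of the expressions above, and `f_x0`, `f_x1` denote the network output evaluated on batches drawn from \(\theta_{0}\) and \(\theta_{1}\), with the recommended activations already applied).

```python
import torch

def p_mse_star_loss(f_x0, f_x1, p=1.15):
    """Empirical estimate of L_MSE* above via sample means over
    batches from p(x|theta_0) and p(x|theta_1)."""
    return -(((1.0 - f_x0) ** p).mean() + (f_x1 ** p).mean())

def r_sqr_star_loss(f_x0, f_x1, r=0.1):
    """Empirical estimate of L_SQR* above (exponents -r/2 and +r/2)."""
    return -((f_x0 ** (-r / 2.0)).mean() + (f_x1 ** (r / 2.0)).mean())
```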
When tested on univariate Gaussians and simple multivariate Gaussians (Vertical and Slant cases), all four loss implementations with optimized parametrizations perform similarly when approximating the desired likelihood ratio. For larger datasets (\(N>10^{5}\)), choosing different exponents in the definitions of MSE and SQR loss functionals results in an additional \(\geq 50\%\) reduction in errors for these cases.
On more complex datasets, including multidimensional Gaussians (Checker, Hyperbola, Circle) as well as simulated high-energy physics data, the Maximum Likelihood Classifier (MLC) loss with exponential parametrization performs the best out of the four default losses considered. Choosing different exponents in the definitions of MSE and SQR loss functionals additionally results in between \(30\%\) and \(50\%\) smaller errors for the Checker and Circle cases. For the Hyperbola and simulated high-energy physics case, choosing alternate \(p^{*}\) and \(r^{*}\) values in the range \([-2,2]\) does not yield a significant performance improvement, though it is possible that better values could exist outside of this range.
While these configurations performed well in our chosen case studies, these results should not be read as a guarantee that these choices will result in optimal performance for any dataset. We therefore recommend that other researchers compare the results of several of the optimized losses described in this work to yield the most effective setup for a given dataset.
There remain several open questions in this line of inquiry. For instance, can an analytical analysis of these loss functionals explain some of the performance differences observed? How much can we further characterize the uncountably many possible loss functionals that satisfy this setup? How else can we generalize certain loss functionals? Pursuing these answers could help us achieve even better scientific measurements enabled by machine learning in the near future.
## Acknowledgements
We are grateful to Jesse Thaler for very helpful feedback about figures-of-merit and ways to generalize our loss functions. We thank Dag Gillberg for the suggestion to compare NNs with BDTs and Vinicius Mikuni for the idea of using normalizing flows to model the LR of the physics datasets. M.P. thanks Shirley Ho and the Flatiron Institute for their hospitality while preparing this paper. S.R., M.P., and B.N. are supported by the Department of Energy, Office of Science under contract number DE-AC02-05CH11231. |
2306.06632 | The role of all-optical neural networks | In light of recent achievements in optical computing and machine learning, we
consider the conditions under which all-optical computing may surpass
electronic and optoelectronic computing in terms of energy efficiency and
scalability. When considering the performance of a system as a whole, the cost
of memory access and data acquisition is likely to be one of the main
efficiency bottlenecks not only for electronic, but also for optoelectronic and
all-optical devices. However, we predict that all-optical devices will be at an
advantage in the case of inference in large neural network models, and the
advantage will be particularly large in the case of generative models. We also
consider the limitations of all-optical neural networks including footprint,
strength of nonlinearity, optical signal degradation, limited precision of
computations, and quantum noise. | Michał Matuszewski, Adam Prystupiuk, Andrzej Opala | 2023-06-11T09:26:08Z | http://arxiv.org/abs/2306.06632v2 | # The role of all-optical neural networks
###### Abstract
In light of recent achievements in optical computing and machine learning, we consider the conditions under which all-optical computing may surpass electronic and optoelectronic computing in terms of energy efficiency and scalability. When considering the performance of a system as a whole, the cost of memory access and data acquisition is likely to be one of the main efficiency bottlenecks not only for electronic, but also for optoelectronic and all-optical devices. However, we predict that all-optical devices will be at an advantage in the case of inference in large neural network models, and the advantage will be particularly large in the case of generative models. We also consider the limitations of all-optical neural networks including footprint, strength of nonlinearity, optical signal degradation, limited precision of computations, and quantum noise.
## I Introduction
In recent years, remarkable strides have been made in the field of machine learning and artificial intelligence, heralding a new era of practical applications that are swiftly permeating various industries and our daily lives. However, this progress comes at the price of rising energy consumption, driven primarily by the exponential growth in the volume of data being processed [1] and by the apparent flattening of improvements in computing performance. For many decades, progress was governed by the remarkable principles of Moore's law and Dennard scaling. However, it seems that we are now entering a phase where these principles are gradually approaching a plateau [2; 3]. It has become apparent that the cost of data movement through electronic wires, which requires charging their capacitance for each bit of information, dominates the energy budget for data-intensive applications such as large-scale machine learning [4; 5].
To circumvent this limitation and make computations more efficient, a natural strategy is to reduce the physical distance between memory and processing units. This motivates interest in computing systems that go beyond the von Neumann architecture, such as in-memory computing [4]. Many physical implementations of machine learning have been realized with emulators of neural networks on specialized hardware, where the structure of the network is physically replicated [6; 7; 8; 9]. In these cases, computing is often analog rather than digital. Since neural network models are themselves analog, this approach appears better suited to neural networks than to traditional algorithm-based computing.
Another way to increase the efficiency of computations is to realize them on a non-standard physical platform [10; 11; 12]. A particularly promising approach is to use photons instead of electrons [13; 14; 15; 16; 17; 18; 19]. The advantage of optical systems is that they do not require charging the capacitance of communication channels, so data movement can be almost lossless. For this reason, optical systems are used for communications over large distances and at high data rates, when the energy cost of data movement is particularly important [20]. However, computation with photons, while researched for many decades [21], has not yet found mainstream applications. Optical computing has been hampered by many factors, including the weakness of optical nonlinearity, the bulkiness of optical elements, and the difficulty of regenerating optical signals and of integrating optical sources. It has been difficult to realize a device capable of performing general digital operations with appropriate fidelity at a low energy cost [22]. However, the recent resurgence of interest in optical computing has led to breakthroughs that alleviate many of these limitations [14; 15], and it seems that taking advantage of optical computing in practical devices is within reach.
A natural solution to overcome the disadvantages of electronic and optical computing is to combine them in a single system, using the advantages of both light and matter. This approach typically assumes constructing an optoelectronic device where communication or linear operations are realized optically, while other operations, including nonlinear transformations, signal regeneration, fan-in, and fan-out, are taken care of by electronics. While this approach is very promising, it has its own limitations. One of them is the limited compatibility of electronic and optical systems and the difficulty of integrating the two. For example, typical length scales for state-of-the-art electronics are in the range of a few nanometers. In the case of optics, it is difficult to squeeze light below the micrometer length scale without incurring significant losses. On the other hand, conversion between electronic and optical signals, and between analog and digital signals, creates additional energy costs and technological difficulties.
Here, we consider the viability of all-optical computing as an alternative to electronic and optoelectronic approaches, and attempt to identify the applications where it may excel. This topic has been considered previously, and it was pointed out that an all-optical approach may
not be well suited for digital computations [22]. We look at this problem from a new perspective, taking into account recent advancements both in the optical technology and in the field of machine learning. We assume that the main technological bottleneck of high-intensity computing is, as it appears, the efficiency of data movement via electronic wires. This is justified both by fundamental physical limitations, and by the observation that electronic computation efficiency is apparently saturating after decades of exponential progress. On the other hand, there is still room for improvement before the fundamental limits are reached for optics. Consequently, we consider the energy cost of operations requiring data movement by electronic channels, such as memory access, to be the most stringent limitation. Importantly, to provide a fair comparison, we consider the efficiency of the complete computing system, and not only a certain part of it. In particular, we take into account the cost of data acquisition, and the electronic memory access cost necessary to provide data, which is often overlooked in estimates of energy efficiency. It is important to mention that our considerations do not apply to the case where input data can be supplied in the form of optical signals, without accessing external electronic memory [23; 24].
Based on these assumptions, we try to answer the question of the viability and practicality of all-optical neural networks. In other words, we consider whether there are applications in which all-optical neural networks can outperform their electronic and optoelectronic counterparts, and to what extent. We conclude that electronic memory access cost will likely be the main limitation not only for electronic and optoelectronic, but also for all-optical networks. However, we find that one application where all-optical networks will be at an advantage is inference in large-scale neural networks, where the number of neurons in the hidden layer is much larger than the dimensionality of inputs and outputs. The advantage will be particularly large in the case of generative models, where input data can be reused in many subsequent inferences, reducing data acquisition costs. These conditions are fulfilled in many machine learning models used in practice.
In addition, we consider the limitations of all-optical neural networks that must be overcome before they become practical. We analyze optical neural networks taking into account the specifics of information processing with light, such as quantization of light. By performing numerical simulations, we show that all-optical neural networks can be accurate even if the precision of optical transformations is reduced by noise and fabrication errors. We discuss the issues of signal regeneration, network depth, and scalability of optical networks.
## II Can all-optical neural networks be efficient?
In this section, we show the main motivation of our paper, that is the advantage of using all-optical systems as an efficient platform for analog neural networks, as opposed to electronic or optoelectronic devices. We consider the energy cost of calculations, which is currently the most important limitation of computing systems [2; 4]. We leave the considerations of footprint, speed, precision, and other limitations of optical systems to the next section.
The main assumption of our considerations is that the data movement cost in electronic wires will be difficult to improve in the future. This assumption can be justified by two arguments. One is the physical lower limit of the energy required to charge an electronic wire to send a single bit of information. The cost of charging wire capacitance per unit length is approximately independent of the wire cross section, and can be estimated as 100 fJ/bit per 1 mm of connection length [5]. Another argument for considering data movement cost as a physical lower limit is the apparent saturation of energy efficiency of computations and memory access costs [2; 3; 4], despite decades-long developments and huge investments in the complementary metal-oxide (CMOS) technology. In fact, it appears that state-of-the-art efficiencies are reaching the fundamental estimates. In machine learning applications, which require a great number of memory access operations to perform multiplications of large tensors, the cost of data movement is at least comparable to the cost of logic operations [4]. Therefore, it seems that the room for improvement for the efficiency of current CMOS technology is limited, unless a significant technological breakthrough is achieved.
How can optics be advantageous from the perspective of hardware-implemented neural networks? As mentioned above, optical links do not require charging of communication lines. Optical energy dissipation corresponds to effects such as optical absorption, light leakage in waveguides, optical dispersion, and spatial or temporal mode misalignment; however, these effects typically lead to much lower dissipation than in the case of electronics. This is the reason why optical connections are used for long-haul communications and in data centers, and can be applied for communications even at short length scales [25].
In this work, we focus on the aspects specific to neural networks. The structure of neural networks and the specifics of the required computations make optics a much better match than in the case of algorithmic digital computations. In this context, one can point out several advantages that we list below.
### Optical fan-out and fan-in
In a typical artificial neural network, neurons perform two kinds of operations. Summation of neuron inputs \(x_{i}\) multiplied by weights \(w_{ij}\) is a linear operation (i.e. linear as a function of neuron inputs), which is followed by a nonlinear activation given by a function \(f_{j}\) such as the sigmoid or rectified linear unit (ReLU)
\[y_{j}=f_{j}\left(\sum_{i=1}^{N}w_{ij}x_{i}\right). \tag{1}\]
Note that \(f_{j}\) can act on vectors rather than scalars as is the case in the softmax function. In the case of electronic systems, the efficiency is strongly tied to the cost of a single multiply and accumulate operation (MAC). This operation occurs once for every multiplication of a neuron input \(x_{i}\) with the corresponding input (synaptic) weight \(w_{ij}\). Performing such an operation in the von Neumann machine requires accessing memory for all the inputs and all the corresponding weights. If the number of neuron inputs is large, so is the required data movement. On the other hand, the nonlinear activation function is applied only once per neuron activation, and its energy cost can be much lower. In practice, input-weight multiplications are performed in batches, so the weights can be to a large extent reused if stored in a local memory. However, due to the scale of large machine learning models, the memory access cost is still a major source of energy dissipation.
On the other hand, in the case of optical neural networks, the summation of optical signals can be performed at almost no cost of data movement by directing or focusing optical pulses or beams to certain regions in space, which is the optical fan-in. This can take the form of either simple intensity addition in the case of mutually incoherent light pulses, or optical interference in the case of mutually coherent pulses. In turn, fan-out of output optical signals to a very large number of copies can be realized with linear optical elements such as diffractive optical elements [26], spatial light modulators, microlens arrays [24], or in integrated circuits [27]. There is no fundamental lower limit for the energy cost of these operations. Therefore, it is the nonlinear activation function, rather than the weighted linear summation, that creates a bottleneck for the highest possible efficiency of all-optical devices. This limitation is particularly important because nonlinear interactions between photons are much weaker than interactions between electrons in semiconductors.
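The distinction between the two summation modes can be illustrated with a few lines (an illustrative sketch only; the amplitudes are arbitrary complex numbers standing in for the optical fields of the individual inputs).

```python
import numpy as np

rng = np.random.default_rng(0)
amps = rng.normal(size=8) + 1j * rng.normal(size=8)  # input field amplitudes

incoherent_sum = np.sum(np.abs(amps) ** 2)   # intensities simply add
coherent_sum = np.abs(np.sum(amps)) ** 2     # fields interfere before detection
```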
### Static weights in neural network inference
In machine learning, the inference phase follows the training phase. From the point of view of energy consumption, the inference phase is often more important than the training phase, since the trained model can be used for inference arbitrarily many times [28]. Specialized CMOS inference systems include Google TPUv1 and the nVidia inference platform. In the inference phase, weights of neurons do not change. Therefore, if weights can be implemented in optical hardware without the need for external memory access, the cost of data movement can be greatly reduced. In the case of electronics, this approach is the basis of in-memory computing [4]. However, electronic chips with in-memory computing can require complicated wiring to connect computing units with each other [7]. In the case of optics, hardware-encoded static weights can be combined with almost dissipationless transport to drastically reduce the cost of weighted summation in Eq. (1). The multiplication of inputs by the corresponding weights can be implemented with linear optical elements which apply a certain amplitude or phase modulation to the optical signals. One widely used method is to implement Mach-Zehnder interferometers, or a mesh of such interferometers performing an arbitrary linear operation represented by a matrix [27]. Weights encoded in phase change materials [29] do not require power to sustain their state. In the case of free-space propagating beams, spatial light modulators can be used for applying weights, with millions of independently tunable parameters [26]. While some of these methods require an energy supply to keep the state of optical weights, this cost can often be reduced to a level that is much below the energy dissipation cost of data movement in electronics. In some other cases, it can be completely eliminated.
### Structure of large neural network models
One of the reasons for the recent progress in machine learning is that hardware, such as specialized parallel computing units, allowed the implementation of models with increasing complexity, that could accommodate and process large datasets. Usually, to obtain high accuracy of predictions, it is necessary to use models with a large number of parameters and neurons. It was noticed that optics could be particularly effective in comparison to electronics in applications that require large scale computations [30]. This advantage is particularly large in the case of all-optical networks. Let us, for the moment, focus on the memory access cost as the main bottleneck and compare the potential efficiency of all-optical and optoelectronic devices (we will justify this assumption later). In the case of a typical optoelectronic device, the data needs to be converted to a digital signal and stored in memory after each layer of neural network computation, while in the all-optical system this is not necessary, see Fig. 1. We assume that the input to an all-optical device is provided in an electronic form since most of the data in our world is encoded electronically. It needs to be read from a digital memory both in the case of an optoelectronic and an all-optical neural network. However, the advantage of an all-optical network is that once
converted to the optical domain, it does not need to be re-converted to the electronic, digital domain until the entire computation on the data sample is finished. In Fig. 1, memory access cost occurs only at the input and the output of the all-optical network, and at each hidden neuron of an optoelectronic network. Moreover, in the cases where data is provided in an optical form, it does not occur at the input layer [23; 24].
### Comparison of optoelectronic and all-optical network efficiencies
How does this lower memory access cost affect the efficiency of optical networks? This largely depends on the particular structure of the neural network model, and we analyze some examples here. Generally, in the case of large scale neural networks, the size of hidden layers in Fig. 1, measured as the number of neurons or the number of parameters, is much larger than the size of input and output layers [31; 32]. The energy cost of computations per sample in the inference stage can be very roughly estimated for an optoelectronic network as
\[E_{\text{OE}}\geq E_{\text{electronic}}(N_{\text{input}}+N_{\text{output}}+N_{ \text{hidden}}), \tag{2}\]
and for an all-optical network as
\[E_{\text{AO}}\geq E_{\text{electronic}}(N_{\text{input}}+N_{\text{output}})+E_{ \text{optical}}N_{\text{hidden}}, \tag{3}\]
where \(E_{\text{electronic}}\) is the average cost of electronic operations per neuron per inference, including memory access cost, and \(E_{\text{optical}}\) is the average cost of optical operations per neuron per inference, including the energy of light pulses, optical losses, and electronics necessary for an optical neuron to operate. Under the assumption that the memory access cost is the main bottleneck of computation efficiency, and considering the case where the majority of neurons are in the hidden layer, \(N_{\text{hidden}}\gg N_{\text{input}}+N_{\text{output}}\), the ratio of energy costs can be estimated as
\[\frac{E_{\text{AO}}}{E_{\text{OE}}}=\frac{N_{\text{input}}+N_{\text{output}}}{N_{\text{input}}+N_{\text{output}}+N_{\text{hidden}}}+\frac{E_{\text{optical}}}{E_{\text{electronic}}}\,\frac{N_{\text{hidden}}}{N_{\text{input}}+N_{\text{output}}+N_{\text{hidden}}}\approx\frac{E_{\text{optical}}}{E_{\text{electronic}}}.\]
If \(E_{\text{electronic}}\gg E_{\text{optical}}\), i.e., if the energy cost of computation per inference per neuron in the optoelectronic network is much larger than the cost of the same operation performed all-optically, the total energy cost will be much lower in the case of an all-optical device.
To justify the above reasoning, we consider whether the above conditions, \(N_{\text{hidden}}\gg N_{\text{input}}+N_{\text{output}}\) and \(E_{\text{electronic}}\gg E_{\text{optical}}\), can be fulfilled in practice. To estimate the average cost of electronic operations \(E_{\text{electronic}}\) in an optoelectronic device, one needs to take into account the costs of conversion from an optical to an electronic signal, analog-to-digital conversion, the reverse processes, and memory access. In certain optoelectronic devices, some of these costs may be absent, for example, if the electronic part of the computation is analog as well. However, it appears to be difficult to reduce the cost of all these operations below the level of picojoules per data unit such as a byte. In particular, the cost of memory access appears to be the main bottleneck. For an 8-bit input, it ranges from several picojoules to several nanojoules depending on the technology used and the size of memory. For example, a single access to a 100 MB memory requires around 10 pJ of energy [3; 4]. Even if the system is designed in such a way that access to memory is not required for each neuron operation, the cost of analog-to-digital conversion and electronic-to-optical conversion results in a similar bottleneck [33; 34; 35].
On the other hand, all-optical neural networks in the inference mode do not require optoelectronic conversion and the energy \(E_{\text{optical}}\) is mainly bounded by the required power of the light source. This bound depends on the optical nonlinearity of the system, optical losses, and the sensitivity of detectors at the output layer. While weak nonlinear response is one of the main disadvantages of optical systems, the use of materials characterized by strong and ultrafast nonlinearities, such as semiconductor quantum wells, organic materials or two-dimensional materials can result in high efficiency of nonlinear operations at high data rates. In particular, exciton-polaritons are
Figure 1: All-optical and optoelectronic neural networks. (a) In an all-optical network, input data is transformed to optical form at the input layer, and all subsequent operations up to the output layer are realized all-optically. (b) In an optoelectronic network, the signal is transformed from optical to electronic and back at each layer of the network. If the size of the hidden (middle) layers is much larger than input and output layers, this results in a bottleneck of system efficiency.
quasiparticles of light and matter that can exhibit optical nonlinearity that is orders of magnitude stronger than in other materials. Using exciton-polaritons, the energy cost of a single nonlinear operation can be as low as a few attojoules per neuron [36]. Taking into account the possibility of optical fan-in with thousands of linear operations per neuron [18], optical efficiency at the level of zeptojoules per operation is foreseeable. At the same time, the required sensitivity of detectors scales proportionally to \(N_{\mathrm{hidden}}/N_{\mathrm{output}}\), since at the output layer the light is collected from all of the hidden neurons. Accordingly, in the limit of large \(N_{\mathrm{hidden}}/N_{\mathrm{output}}\) considered here, detector sensitivity will not be the main bottleneck.
A large neural network model translates into a large hidden-layer size \(N_{\mathrm{hidden}}\). As a result, large-scale neural networks used in practice are usually characterized by very high ratios \(N_{\mathrm{hidden}}/N_{\mathrm{input}}\). For example, one of the leading models in the ImageNet competition, Amoeba-Net, has \(10^{9}\) hidden nodes and performs \(10^{11}\) operations per inference, while the ImageNet input size is \(165\times 165\times 3\approx 10^{5}\), resulting in \(N_{\mathrm{hidden}}/N_{\mathrm{input}}\approx 10^{4}\). In recent language models, this ratio is even higher, with the large BERT model consisting of approximately \(24\times 2\times 512\times 1024\) nonlinear nodes for a 512-long input token sequence, resulting in the ratio \(N_{\mathrm{hidden}}/N_{\mathrm{input}}\approx 10^{5}\). Therefore, the condition \(N_{\mathrm{hidden}}\gg N_{\mathrm{input}}+N_{\mathrm{output}}\) is fulfilled in many practical large neural network models. It is interesting that the same relation appears to hold for the network of neurons in the human brain. The number of neurons in the brain is of the order of \(10^{11}\), which is likely to be much greater than the dimensionality of the input information from all stimuli.
To give a concrete example of potential efficiency, we estimate the energy cost per operation for a hypothetical large-scale neural network with \(N_{\mathrm{input}}+N_{\mathrm{output}}=10^{3}\) and \(N_{\mathrm{hidden}}=10^{8}\), assuming \(E_{\mathrm{electronic}}=1\,\mathrm{pJ}\) and \(E_{\mathrm{optical}}=100\,\mathrm{aJ}\). In both cases, we assume a fan-in of 1000 inputs per neuron in the hidden layer. For a fair comparison, the energy cost per operation is calculated from the total energy cost of the complete network, including memory access for each electronic neuron operation. The number of operations is calculated as two operations (multiply and accumulate) per neuron input plus one for the nonlinear activation in each neuron. According to Eqs. (2) and (3), the lower bounds for the energy cost are estimated to be 500 aJ per operation for an optoelectronic network and 55 zJ for an all-optical network, almost four orders of magnitude lower.
These estimations are not complete unless we consider the cost of acquiring data. This includes the cost of access to external memory, such as DRAM memory, from where input data has to be retrieved. In the case of data that needs to be transmitted over a distance, as is usually the case in cloud computing, the cost of accessing input data may be further increased. The costs of both reading from DRAM memory and fiber link data transmission are in the range of 1-100 pJ per bit, or up to 1 nJ per byte [37; 4]. These costs may be significantly higher in the case of wireless communication or less efficient data transmission channels. The cost of acquiring input data may also be high in the case of edge computing, for example, if input information is gathered by a camera with a high energy cost per pixel. In all of these cases, the input data costs may dominate over all other costs of computations.
However, there is an important class of practical machine learning tasks where input data costs can be drastically reduced by "recycling" input data acquisition. All generative tasks in which most of the input information used at one step of computation can be used for the next step belong to this category. These include applications such as text completion, language translation, question answering, chatbots, and image and sound synthesis. In these cases, input information known as "context" can often be reused to a great extent across inferences, for example, \(4\times 10^{3}\) times in the case of large language models. As a result, the cost of data acquisition may be orders of magnitude smaller than the cost of local memory access for input neurons, which is already included in our estimations.
In Table 1 we present examples of complete estimates for electronic, optoelectronic, and all-optical neural networks in the case of various machine learning model sizes. We take into account all contributions to energy usage, including optical, optoelectronic, memory, and data acquisition costs. To this end, in the calculation of the energy cost per inference we include terms in addition to the ones present in Eqs. (2) and (3)
\[E_{\mathrm{OE,AO}}^{\mathrm{total}}=E_{\mathrm{OE,AO}}+N_{\mathrm{input}} \left(\frac{E_{\mathrm{acquisition}}}{M}+E_{\mathrm{memory}}\right) \tag{4}\]
where \(E_{\mathrm{acquisition}}\) is the cost of acquiring input data, which is divided by the number of inferences \(M\) where it is reused, and \(E_{\mathrm{memory}}\) is the cost of accessing local memory that stores the input data. It is clear from Table 1 that all-optical neural networks will have the advantage in the case of large models, and in particular in the case of generative models which require less input data.
\begin{table}
\begin{tabular}{l l l l} \hline Network type & Small & Large & Large \\ & & & generative \\ \hline \hline \(N_{\mathrm{input}}+N_{\mathrm{output}}\) & 100 & 1000 & 1000 \\ \(N_{\mathrm{hidden}}\) & 1000 & 10\({}^{8}\) & 10\({}^{8}\) \\ \hline Electronic & 1 pJ & 1 pJ & 1 pJ \\ Optoelectronic & 7 fJ & 1 fJ & 1 fJ \\ All-optical & 6 fJ & 600 zJ & 100 zJ \\ \hline \end{tabular}
\end{table}
Table 1: Estimates of average energy cost per operation in inference for the system as a whole, including data acquisition cost. Parameters are \(E_{\mathrm{electronic}}=2\,\mathrm{pJ}\), \(E_{\mathrm{optical}}=100\,\mathrm{aJ}\), \(E_{\mathrm{memory}}=10\,\mathrm{pJ}\), \(E_{\mathrm{acquisition}}=100\,\mathrm{pJ}\). We assume \(10^{3}\) inputs per neuron on average and the possibility to reuse input data \(10^{3}\) times in a generative network.
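The estimates in the text and in Table 1 are straightforward to reproduce; a sketch implementing Eqs. (2)-(4) is given below. The operation count (two operations per synaptic input plus one per activation, with a fan-in of \(10^{3}\)) and the use of \(N_{\mathrm{input}}+N_{\mathrm{output}}\) in the data-cost term are our reading of the accounting, chosen to be consistent with the quoted numbers rather than specified explicitly in the text.

```python
def energy_per_op(n_io, n_hidden, e_electronic, e_optical,
                  e_memory=0.0, e_acquisition=0.0, reuse=1,
                  all_optical=True, fan_in=1_000):
    """System-level energy per operation (in joules), Eqs. (2)-(4)."""
    n_ops = n_hidden * (2 * fan_in + 1)  # 2 ops per input + 1 activation
    if all_optical:
        e_net = e_electronic * n_io + e_optical * n_hidden  # Eq. (3)
    else:
        e_net = e_electronic * (n_io + n_hidden)            # Eq. (2)
    e_data = n_io * (e_acquisition / reuse + e_memory)      # Eq. (4) terms
    return (e_net + e_data) / n_ops

pJ, aJ = 1e-12, 1e-18
# In-text example (no data terms): ~500 aJ vs ~55 zJ per operation
print(energy_per_op(1e3, 1e8, 1 * pJ, 100 * aJ, all_optical=False))
print(energy_per_op(1e3, 1e8, 1 * pJ, 100 * aJ, all_optical=True))
# "Large generative" column of Table 1: ~100 zJ per operation
print(energy_per_op(1e3, 1e8, 2 * pJ, 100 * aJ, e_memory=10 * pJ,
                    e_acquisition=100 * pJ, reuse=1_000))
```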
## III Limitations of all-optical computing
In this section, we discuss the possible limitations of all-optical computing. We consider the footprint and speed of operations, cascadability and signal degradation, implementing useful nonlinear transformations, precision of computations, fabrication errors, and quantum noise.
### Footprint
One of the arguments commonly raised against using optics for computing is the footprint of optical systems. The rationale of this argument is that the wavelength of visible light is of the order of a single micrometer, while electronic systems can be integrated in chips with nanometer-sized transistors. While the footprint is certainly a limitation for optics, it is actually not as severe as it may seem. In the case of neural network implementations in electronics, the nanometer size of a transistor does not directly translate into nanometer-sized neurons. For example, in the IBM TrueNorth neuromorphic chip [7], fabricated in 28-nm process technology, the footprint is approximately 200 \(\mu\)m\({}^{2}\) per neuron and 1 \(\mu\)m\({}^{2}\) per synaptic weight. Such length scales result from the complicated circuitry that must be implemented in an electronic chip to emulate a neuron.
Moreover, it is important to realize that there exists a direct relation between the energy efficiency of a chip and its footprint, which results from the need to dissipate heat generated by the computation. Heat removal is space consuming. In CMOS chips, circuit structure is usually two-dimensional, and the third dimension is sacrificed for a heat sink. One exception to this rule is memory chips, which often have a multilayer stacked structure with more than 100 layers. This is possible due to their reduced heat generation compared to information-processing chips. As a result, the reduction of energy dissipation leads to the reduction of footprint.
Moreover, there have been great advancements in the miniaturization of optical systems. Integrated silicon photonics chips can be fabricated and processed in large quantities by specialized foundries. A typical size of an element of a photonic chip, such as a Mach-Zehnder interferometer, is of the order of micrometers [27, 29, 38]. If energy dissipation in such optical chips is lower than the dissipation in electronic chips, stacking of optical chip layers should be possible, thus reducing the footprint. On the other hand, the free-space approach to computing, while requiring the third dimension for light propagation, also permits a very small footprint. For example, commercially available spatial light modulators with a few \(\mu\)m pixel pitches are able to encode synaptic weight information with a density comparable to the density in electronic chips. It could be further increased if some form of holographic encoding were used. Assuming a conservative estimate of encoding a single weight parameter on 10 \(\mu\)m\({}^{2}\) of surface, a weight bank encoding the full BERT language model with 110 million parameters would require a surface of only 11 cm\({}^{2}\).
### Speed
The speed of neural network inference can be measured in several ways. While the number of operations per second is a valid measure of computational power, probably more important from the practical point of view are latency and performance density, i.e., the number of operations per second per unit area [3]. In terms of latency, all-optical networks can certainly outperform electronic and optoelectronic networks in most applications, since apart from input generation and output detection, they require only propagation of light across the network layers at the speed of light. For a centimeter-sized system, this results in latency of the order of picoseconds, which is many orders of magnitude lower than the millisecond latency typical for electronics [28]. The performance density of all-optical networks can be estimated by considering the number of synaptic weight multiplications per second, taking into account the 10 \(\mu\)m\({}^{2}\) weight footprint estimated above and a 10 GHz inference rate corresponding to commercial optical modulators. The resulting performance density of the order of 10\({}^{6}\) GOP s\({}^{-1}\) mm\({}^{-2}\) is three orders of magnitude higher than in state-of-the-art electronics [3].
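The performance-density figure follows from a one-line estimate (a sketch with the numbers quoted above; counting two operations, a multiply and an accumulate, per weight is our assumption).

```python
weight_area_mm2 = 10.0e-6   # 10 um^2 per synaptic weight, in mm^2
inference_rate = 10e9       # 10 GHz, commercial optical modulators
ops_per_weight = 2          # multiply + accumulate

density = ops_per_weight * inference_rate / weight_area_mm2  # ops / s / mm^2
print(density / 1e9)  # ~2e6 GOP s^-1 mm^-2
```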
### Strength of nonlinearity
An efficient all-optical neural network requires strong optical nonlinearity. This nonlinearity has to be characterized by a fast response time, ideally in the GHz to THz range, since the optical pulse energy required for realizing an operation scales linearly with its duration at the same light intensity. A range of optical materials have been considered for this purpose [29, 40, 41]. In particular, semiconductor materials possess characteristics that make them good candidates for nonlinear elements of artificial neurons [42, 40, 39].
In this context, microcavity exciton-polaritons [16, 17, 36, 43] are a particularly promising alternative. These quasiparticles are half-light half-matter excitations existing in semiconductors, which induce optical nonlinearity orders of magnitude stronger than in standard semiconductor materials [36]. They can operate at room temperature [44, 45, 46] and have response times of hundreds of femtoseconds to nanoseconds [47]. Moreover, the nonlinearity of polaritons is significantly enhanced in two-dimensional materials [48], in the case of trion-polaritons [49], or Rydberg polaritons [50].
### Activation functions
Electronic implementations of neural networks make it possible to realize virtually any nonlinear activation function at a very low energy cost. In all-optical networks, one does not have such flexibility and usually has to deal with a nonlinear response of the system that is either fixed or exhibits some limited tunability. Moreover, it is usually not possible to realize the activation function that is optimal for a particular network model. However, as is well known in the field of machine learning, the particular form of the activation function often has only limited impact on system performance. On the other hand, the use of real and imaginary parts of the complex optical field amplitude may lead to certain improvements in accuracy [51]. We checked the impact of the type of activation function and of the complex nature of the light field by considering a simple example of a feedforward neural network with one hidden layer performing the MNIST handwritten digit recognition task. We consider the following real and complex activation functions
\[f_{j}(x) =\begin{cases}\text{ReLU}(x)\\ \frac{1}{1+\text{e}^{-x}}\end{cases} \tag{5}\] \[f_{j}(z) =\begin{cases}|z|\\ \frac{1}{1+\text{e}^{-|z|}}\end{cases} \tag{6}\]
The first two functions are standard activation functions used in machine learning. Note that in the case of the functions in Eq. (6), which take complex arguments, both the inputs \(x_{i}\) and the weights \(w_{ij}\) in Eq. (1) can be complex-valued, which reflects the amplitude and phase of the optical field. The first function in Eq. (6) is one of the simplest nonlinear functions that takes advantage of the complex nature of the variables. To justify the form of the second function in Eq. (6), we consider a simple optical setup shown in Fig. 2. This setup is based on the nonlinear refractive index change induced by both the optical control beams and the signal beam, which is transmitted through the nonlinear medium. The optical nonlinearity can be enhanced by enclosing the medium in a microcavity and achieving strong light-matter coupling [36]. At the optical bistability threshold (red line in Fig. 2(b)), the dependence of the transmitted signal intensity on the incident light amplitude is strongly nonlinear. It exhibits an S shape analogous to the sigmoid activation function known from machine learning models. We assume that the control beams and the input beam are not coherent (e.g. formed by different lasers), which allows us to discard the effects of interference. On the other hand, control beams are assumed to be coherent with each other. This assumption is natural if coherent light is used in the linear vector-matrix multiplication setup, which precedes the nonlinear activation stage [27, 21]. The phase of the transmitted beam is therefore not related to the phases of the control beams, but its intensity is strongly modulated by the intensity of the superposition of control beams. Here, we assume a simplified, complex sigmoid dependence of the transmitted light intensity at the threshold
\[I_{\text{out}}\sim I_{\text{pump}}\frac{1}{1+\text{e}^{-|z|}} \tag{7}\]
where \(z=\sum_{i}A_{i}\) is the sum of the complex amplitudes of all the control beams corresponding to this nonlinear node. These beams can be treated as synaptic inputs to the nonlinear node. We therefore consider the situation where the intensity of the pump beam is tuned to the middle of the sigmoidal dependence near the optical bistability threshold; see the red line in Fig. 2(b).
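To make this concrete, the short NumPy sketch below implements the activation functions of Eqs. (5)-(6), including the complex sigmoid of Eq. (7) applied to a weighted sum of complex control-beam amplitudes. The weights and inputs here are random placeholders, not values from our experiments.

```python
import numpy as np

def relu(x):
    # Real-valued ReLU from Eq. (5).
    return np.maximum(0.0, x)

def sigmoid(x):
    # Real-valued logistic sigmoid from Eq. (5).
    return 1.0 / (1.0 + np.exp(-x))

def complex_modulus(z):
    # First complex activation of Eq. (6): the modulus |z|.
    return np.abs(z)

def complex_sigmoid(z):
    # Second complex activation of Eq. (6): a sigmoid of the modulus,
    # modeling transmission near the bistability threshold, Eq. (7).
    return 1.0 / (1.0 + np.exp(-np.abs(z)))

# Weighted summation of complex control-beam amplitudes as in Eq. (1).
rng = np.random.default_rng(0)
w = rng.normal(size=4) + 1j * rng.normal(size=4)  # complex weights w_ij
x = rng.normal(size=4) + 1j * rng.normal(size=4)  # complex inputs x_i
z = np.sum(w * x)                # superposition of control beams
activation = complex_sigmoid(z)  # transmitted intensity up to the pump factor
```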
Figure 2: A possible realization of an optical neuron (a). A nonlinear system is tuned close to the bistability threshold, which results in a sigmoid-like response to the total intensity of the control beams. The transmittance of the strong pump beam depends on the nonlinear index change induced by the weak control beams. As a result, both nonlinear activation and signal amplification can be realized. (b) Schematic examples of the output intensity \(I\) as a function of the total incident amplitude \(F\) in the bistable, threshold, and single-valued cases.

In Fig. 3 we present the estimated accuracy of fully connected complex-valued and real-valued neural network models with different nonlinear activation functions. Two conclusions can be drawn from these results. First, complex-valued networks can perform slightly better than real-valued networks with the corresponding activation functions and the same number of parameters. This is the case even if biases are not used in the complex-valued networks, which simplifies the implementation in optics. Second, the particular form of the activation function can have some influence on the accuracy, but there is no substantial difference between "optimal" functions such as the ReLU and the sigmoid. In particular, the physically relevant complex sigmoid activation controlled by complex-valued inputs gives optimal results.
### Precision
Limited precision of analog systems is a potential obstacle for applications. In the context of neural networks, it is known that low precision can be sufficient to perform machine learning tasks with very high accuracy, as long as it is kept above a certain, task-dependent level. Examples include quantized and binarized neural networks [52, 53]. In the context of analog computing, there are examples where 2-6 bit precision is sufficient to achieve accuracy close to optimal [53, 54, 7]. The required precision of computations thus strongly depends on the task to be solved and must be considered on a case-by-case basis.
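As an illustration of how such a precision requirement can be probed numerically, the sketch below applies uniform post-training quantization to a weight matrix; the clipping range and bit width are assumptions for the example, not values used in the works cited above.

```python
import numpy as np

def quantize_uniform(x, bits, x_min, x_max):
    # Uniform quantization of x to 2**bits levels on [x_min, x_max].
    levels = 2 ** bits - 1
    step = (x_max - x_min) / levels
    x = np.clip(x, x_min, x_max)
    return np.round((x - x_min) / step) * step + x_min

# Example: re-evaluate a trained layer at 4-bit weight precision.
w = np.random.randn(100, 784) * 0.1
w_4bit = quantize_uniform(w, bits=4, x_min=w.min(), x_max=w.max())
```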
### Fabrication errors
Fabrication variability can strongly impact system performance. Ideally, robustness would mean that a neural network model trained _in silico_ can perform equally well in a physical system where the parameters are not fully controllable. This can be achieved by reducing device variability, but it is not always possible. However, additional post-processing correction methods or tunable "control knobs" may be used to adjust the system. For example, in the scheme shown in Fig. 2, such knobs are the phase of the pump beam and the weights of the linear vector-matrix multiplication, which can correct for the variability of the nonlinear response of the sample. Another method is to fine-tune a pretrained model taking into account the imperfections existing in a particular device, or to use specific training methods that directly take into account the response of the physical system [55, 56]. If the system is to be used many times in the inference phase, performing such procedures once per device may be reasonable, even if they are lengthy or expensive.
Apart from these correction methods, neural networks are characterized by an intrinsic robustness to imperfections. To investigate the robustness of optical networks, we analyze the accuracy of the neural network used for the MNIST dataset classification in the case when an additional disorder is introduced to individual neurons.
Figure 3: Accuracy of handwritten digit recognition for fully connected complex- and real-valued networks with 1 hidden layer as a function of the number of trainable parameters. Different activation functions correspond to electronic, optoelectronic and all-optical neurons. Results after 150 epochs of training are shown. Shaded regions correspond to the estimated uncertainty based on multiple training iterations. Neurons in the hidden layer with complex activation functions were modeled without biases due to the possible difficulty of their optical implementation.
Figure 4: Influence of imperfections on the performance of optical neural networks. A static disorder of relative amplitude \(\sigma\) is applied to each neuron as described in Eq. (8). Networks with one and two hidden layers (HL) are considered, with 100 or 1000 neurons (N) in each hidden layer. Error rate is defined here as \(1-\eta\), where \(\eta\) is the accuracy of the model.
In Fig. 4, we show the accuracy as a function of the disorder strength, where the disorder perturbs the response of the hidden neurons during the inference phase only, according to
\[f_{j}(z)=\frac{a_{j}}{b_{j}+c_{j}e^{-|z|}} \tag{8}\]
where \(a_{j},b_{j},c_{j}\) are parameters chosen independently for each neuron from a Gaussian distribution centered at unity, i.e. \(p(x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{1}{2}\left(\frac{x-1}{\sigma}\right)^{2}}\), where \(p(x)\) is the probability density of \(a_{j},b_{j},c_{j}\) taking the value \(x\) and \(\sigma\) describes the width of the distribution and thus the strength of the disorder. The results shown in Fig. 4 indicate that up to a certain disorder strength, the accuracy of network inference does not suffer significantly. Robustness increases with the number of neurons, but decreases with the number of layers, which can be interpreted as propagation of errors. This shows that even when correction of device imperfections is not possible, reducing the disorder below a certain level may be sufficient.
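A minimal sketch of this disorder model follows: per-neuron parameters \(a_{j},b_{j},c_{j}\) are drawn from Gaussians centered at unity with width \(\sigma\) and inserted into Eq. (8) at inference time. The network size and the value of \(\sigma\) are illustrative.

```python
import numpy as np

def disordered_sigmoid(z, a, b, c):
    # Perturbed activation of Eq. (8) with per-neuron parameters a, b, c.
    return a / (b + c * np.exp(-np.abs(z)))

def sample_disorder(n_neurons, sigma, rng):
    # Draw a_j, b_j, c_j from Gaussians centered at unity with width sigma.
    return [rng.normal(loc=1.0, scale=sigma, size=n_neurons) for _ in range(3)]

rng = np.random.default_rng(0)
a, b, c = sample_disorder(n_neurons=100, sigma=0.1, rng=rng)
z = rng.normal(size=100) + 1j * rng.normal(size=100)  # hidden-layer pre-activations
out = disordered_sigmoid(z, a, b, c)  # inference with static disorder
```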
### Quantum Noise
One of the benefits of all-optical computing is the absence of thermal noise at the intermediate layers of computation, which is inevitably present in electronic elements whenever optoelectronic conversion occurs. The fundamental limit of energy efficiency of all-optical computing is related to quantum noise, or shot noise, which becomes significant near the single photon level. In the context of optoelectronic networks, it was shown both theoretically [18] and experimentally [57] that vector-matrix multiplication operations in neural networks can be performed, even below the single photon per operation level. The reason for this surprising result is that when many such operations contribute to the result of the weighted summation in a single neuron as in Eq. (1), the signal to noise ratio of the sum is approximately a factor of \(\sqrt{N}\) higher than the ratio for individual elements of the sum. This property can greatly increase the energy efficiency of neural networks if the number of neuron inputs is large, which is the case in many practical neural network models.
We analyze to what extent all-optical neural networks, where information is encoded in coherent light amplitudes, can benefit from a similar quantum noise reduction. We assume that the weighted input of an optical neuron \(j\), \(w_{ij}x_{i}\) in Eq. (1), is encoded by a coherent optical laser beam of amplitude proportional to \(w_{ij}x_{i}\). Recall that both the weights \(w_{ij}\) and the inputs \(x_{i}\) are complex-valued. A superposition of such beams results in an optical amplitude proportional to the weighted sum in Eq. (1) for a given neuron \(j\). In the following, we focus on a particular neuron and drop the index \(j\) for convenience.
In our quantum treatment, the weighted inputs are optical laser pulses represented by coherent photon states \(|\alpha_{i}\rangle\) such that \(\alpha_{i}=\beta w_{i}x_{i}\), where \(\beta\) is a factor that relates the coherent state amplitude to the amplitude of the neuron input. For a given neural network model, it can be chosen arbitrarily, with higher values of \(\beta\) resulting in stronger light intensities and a higher signal-to-noise ratio. The approximation of treating inputs as coherent states may not be correct when the input states are themselves quantum, for example, when they have been affected by a strong single-photon nonlinearity in the previous computation layer. In the following, we exclude this possibility, which is consistent with the fact that the nonlinearity of optical materials does not allow one to achieve such a strong single-photon nonlinearity except in very specific configurations [58; 59].
For convenience, we denote the weighted inputs by \(a_{i}=w_{i}x_{i}\). Thus, we take the input states of the neuron as coherent states \(|\alpha_{i}\rangle\) with \(\alpha_{i}=\beta a_{i}\), where \(i=1\ldots N\), and assume that the output state is approximately a coherent state. The weighted sum of inputs, i.e., the state of light in the spatial and temporal mode corresponding to the neuron, is a superposition of the \(N\) input states, and it is also a coherent state with amplitude \(\alpha=\sum_{i}\alpha_{i}\). We neglect phase factors such as \(\mathrm{e}^{i(kr-\omega t)}\), since we can select the basis of the input coherent states in such a way that they are eliminated.
We can now determine all the quantum properties of the light, in particular its intensity and fluctuations. To determine the fluctuations it is convenient to use the quadratures \(\hat{X}_{1}\) and \(\hat{X}_{2}\), with \(\alpha=\langle\alpha|\hat{X}_{1}|\alpha\rangle+i\langle\alpha|\hat{X}_{2}|\alpha\rangle\). In our neural network model, we simulate quantum noise by introducing \(a^{\prime}_{i}=a_{i}+\delta a_{i}\), with \(\delta a_{i}\) being random variables with appropriate statistics that reproduce quantum shot noise. The expectation value of \(a^{\prime}_{i}\) should be equal to \(\overline{a^{\prime}_{i}}=a_{i}=\alpha_{i}/\beta=(\langle\alpha_{i}|\hat{X}_{1}|\alpha_{i}\rangle+i\langle\alpha_{i}|\hat{X}_{2}|\alpha_{i}\rangle)/\beta\). On the same basis, the fluctuations \(\delta a_{i}\) also scale proportionally to \(1/\beta\), so finally we get \(a^{\prime}_{i}=a_{i}+\delta a_{i}\), where \(\delta a_{i}\) is a complex Gaussian noise with variance
\[(\Delta\Re(\delta a_{i}))^{2} =((\Delta\hat{X}_{1})^{2})/\beta^{2}=1/4\beta^{2}, \tag{9}\] \[(\Delta\Im(\delta a_{i}))^{2} =((\Delta\hat{X}_{2})^{2})/\beta^{2}=1/4\beta^{2}. \tag{10}\]
This defines the statistical properties of \(a^{\prime}_{i}\) which we use in numerical simulations. At the same time, we can determine the average energy of input light pulses from the formula \(E_{i}=\hbar\omega|\alpha_{i}|^{2}\), which scales proportionally to \(|\beta|^{2}\).
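The following sketch shows how such shot noise can be injected into the weighted inputs according to Eqs. (9)-(10): each \(a_{i}\) receives complex Gaussian noise whose real and imaginary parts have variance \(1/4\beta^{2}\). The array sizes and the value of \(\beta\) are arbitrary placeholders.

```python
import numpy as np

def add_shot_noise(a, beta, rng):
    # Complex Gaussian shot noise on weighted inputs a_i = w_i * x_i:
    # real and imaginary parts each have variance 1/(4 beta^2),
    # following Eqs. (9)-(10), i.e., standard deviation 1/(2 beta).
    sigma = 1.0 / (2.0 * beta)
    noise = rng.normal(0.0, sigma, a.shape) + 1j * rng.normal(0.0, sigma, a.shape)
    return a + noise

rng = np.random.default_rng(0)
a = rng.normal(size=256) + 1j * rng.normal(size=256)  # weighted inputs
beta = 0.5  # sets the light level: |alpha_i|^2 = beta^2 |a_i|^2 photons on average
alpha = np.sum(add_shot_noise(a, beta, rng))  # noisy weighted sum
```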
We present the results of simulations of optical networks with quantum noise included in Fig. 5. As in the optoelectronic case [18], we find that the error rate of predictions can remain low even when the number of photons per operation is below unity. In the optical range, this corresponds to hundreds of zeptojoules per operation. As a result, we may expect that quantum noise will not be a limiting factor up to this level of energy efficiency.
### Signal degradation and network depth
While optical signal regeneration and amplification are possible [39; 40], signal decay and degradation are among the most important challenges for all-optical systems, especially in the case of multi-layer networks. Although cascadability of optical neurons has been achieved [38], full cascadability in a large-scale system may be difficult to realize in practice. Moreover, elements of the optical setup necessary for light beam steering may lead to significant losses [60]. In cases where full regeneration is not viable, or signal distortion at each layer is significant, these factors will limit the possible number of network layers. Since the most successful applications of machine learning are based on deep networks, with up to thousands of layers in models such as ResNet, this is an obstacle that could limit the practical use of all-optical networks.
Recent results in the field of machine learning suggest that the number of neural networks layers can often be greatly reduced without the loss of accuracy, if model designs are appropriately modified. Examples include shallow networks for speech and image recognition [61; 62], non-deep networks achieving state-of-the-art results with a reduced number of layers [63] and shallow transformer networks for language models, that successfully compete with recurrent neural networks [64]. In many of these cases, one or two hidden layers are enough to obtain high accuracy of predictions. Some authors suggested that in the case of fully connected or convolutional networks, making networks deeper beyond a relatively shallow level does not improve accuracy [65; 66]. These observations are aligned with the arguments considering models of physical systems, which are usually described by Hamiltonians that are low-order polynomials [67].
### Neural network architectures
All-optical networks have limited flexibility of possible architectures. This concerns also the structure of computations. Some neural network architectures are easier to implement than others. For example, it may be straightforward to design an optical feed-forward neural network with scalar nonlinear activation functions, but more complex models require vectorial nonlinear operations. It is known that vector-matrix multiplications can be implemented optically as long as the vector component is encoded optically, while the matrix component is encoded in the material part of the device [21]. The same is true for convolutions [68; 69; 70]. However, it is not known how to implement all-optically some other nonlinear transformations that are important for neural network models. These include vector-matrix multiplications and softmax activations, which are key components of attention layers [71; 31], where both the vector and the matrix are to be encoded optically. It is important to either find a way to realize these functions all-optically, or to determine alternative models that do not require these functions but are able to perform the same tasks with comparable accuracy.
### Constant radiance theorem
The constant radiance theorem imposes a fundamental limitation on the geometry and optical energy required for performing computations with light [72]. The theorem states that in the case of linear propagation of light the generalized etendue, which measures the spread of light in real and momentum space, remains constant. For neural networks, this condition imposes a limit on the optical energy per neuron, which scales inversely with the number of neurons. In particular, if the number of neurons in a hidden layer is much higher than in the input layer, the energy per neuron in the hidden layer is allowed to be much lower than the average energy of the optical inputs. As a result, if the condition stated in Sec. II is fulfilled, that is, \(N_{\rm hidden}\gg N_{\rm input}+N_{\rm output}\), high energy efficiency of operations in the hidden layer is not excluded by the constant radiance theorem. On the other hand, for small neural networks or networks in which this condition is not fulfilled, the constant radiance theorem imposes a limit on the achievable energy efficiency. For all-optical networks, this has to be considered as an important factor in system design.
Figure 5: Error rate of neural networks as a function of the number of photons per operation, with quantum noise included. Results are shown for networks with a single hidden layer (1HL) and two hidden layers (2HL). It was assumed that the \(\beta\) factor relating coherent state amplitude to the amplitude of the neuron input is the same in all layers.
## Conclusions
In conclusion, under certain plausible assumptions about the limitations of electronics, we showed that all-optical neural networks can play an important role in machine learning applications. We estimate that all-optical devices could outperform both electronic and optoelectronic devices by orders of magnitude in energy efficiency in the case of inference in large neural network models. This estimate takes into account all the components of the complete system, including the cost of memory access and data acquisition from remote resources. All-optical networks are predicted to give the biggest advantage in the case of generative models, where the cost of data acquisition and memory access is reduced due to the possibility of reusing input data.
On the other hand, it is clear that there are still important issues that need to be solved before all-optical networks become practical. These include scalability of optical neurons, signal decay and distortion, strength of nonlinearity, and non-universal character of optical computing. To overcome these obstacles, studies on both the physical implementations of optical networks and on accommodating neural network models to the capabilities of optical systems may be necessary. It is likely that an interdisciplinary approach will be the key to successful implementations.
###### Acknowledgements.
We acknowledge support from the National Science Center, Poland grants 2019/35/N/ST3/01379, 2020/37/B/ST3/01657 and 2021/43/B/ST3/00752.
|
2301.00582 | Sparse neural networks with skip-connections for identification of
aluminum electrolysis cell | Neural networks are rapidly gaining interest in nonlinear system
identification due to the model's ability to capture complex input-output
relations directly from data. However, despite the flexibility of the approach,
there are still concerns about the safety of these models in this context, as
well as the need for large amounts of potentially expensive data. Aluminum
electrolysis is a highly nonlinear production process, and most of the data
must be sampled manually, making the sampling process expensive and infrequent.
In the case of infrequent measurements of state variables, the accuracy and
open-loop stability of the long-term predictions become highly important.
Standard neural networks struggle to provide stable long-term predictions with
limited training data. In this work, we investigate the effect of combining
concatenated skip-connections and the sparsity-promoting $\ell_1$
regularization on the open-loop stability and accuracy of forecasts with short,
medium, and long prediction horizons. The case study is conducted on a
high-dimensional and nonlinear simulator representing an aluminum electrolysis
cell's mass and energy balance. The proposed model structure contains
concatenated skip connections from the input layer and all intermittent layers
to the output layer, referred to as InputSkip. $\ell_1$ regularized InputSkip
is called sparse InputSkip. The results show that sparse InputSkip outperforms
dense and sparse standard feedforward neural networks and dense InputSkip
regarding open-loop stability and long-term predictive accuracy. The results
are significant when models are trained on datasets of all sizes (small,
medium, and large training sets) and for all prediction horizons (short,
medium, and long prediction horizons). | Erlend Torje Berg Lundby, Haakon Robinsson, Adil Rasheed, Ivar Johan Halvorsen, Jan Tommy Gravdahl | 2023-01-02T10:13:33Z | http://arxiv.org/abs/2301.00582v2 | # Sparse neural networks with skip-connections for nonlinear system identification
###### Abstract
Data-driven models such as neural networks are being applied more and more to safety-critical applications, such as the modeling and control of cyber-physical systems. Despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as the need for large amounts of potentially expensive data. In particular, when long-term predictions are needed or frequent measurements are not available, the open-loop stability of the model becomes important. However, it is difficult to make such guarantees for complex black-box models such as neural networks, and prior work has shown that model stability is indeed an issue. In this work, we consider an aluminum extraction process where measurements of the internal state of the reactor are time-consuming and expensive. We model the process using neural networks and investigate the role of including skip connections in the network architecture as well as using \(\ell_{1}\) regularization to induce sparse connection weights. We demonstrate that these measures can greatly improve both the accuracy and the stability of the models for datasets of varying sizes.
Skip-connections were originally proposed by He et al. (2016) as a way to circumvent this, by introducing a shorter path between the early layers and the output. They were not only found to enable the training of significantly deeper networks, but Li et al. (2017) also demonstrated that they may help improve training convergence.
In the field of dynamical systems and control, we often design a model with a purpose in mind, such as the design of a control system or state observer. Crucially, we are interested in the behavior and performance of the controlled system in terms of objectives such as energy efficiency or yield. This implies that the model does not need to be perfectly accurate for the entire state space, so long as the resulting closed-loop performance is sufficient (known as _identification for control_ (I4C)). If high-frequency measurements from the system are available, only the short-term behavior of the model is important, since any drift out of the operational space is quickly corrected. However, if measurements are rarely available, such as in the aluminum electrolysis process that we consider, the long-term model behavior and open-loop stability become much more important. Stable long-term predictions can be important for decision-making, meaning that a model with good long-term stability and accuracy is inherently important.
In this work, we investigate the effects of adding skip connections and \(\ell_{1}\) regularization on the accuracy and stability of these models for short, medium, and long horizons. We address the following questions:
* How do skip connections affect the stability and generalization error of neural networks trained on high-dimensional nonlinear dynamical systems?
* How does sparsity affect stability and generalization error for neural networks with skip connections that model nonlinear dynamics?
* How does the amount of training data affect neural networks with skip connections compared to neural networks without skip connections?
We make the following contributions:
* We perform a black box system identification of an aluminum electrolysis cell using different NN architectures.
* We demonstrate that the accuracy and open-loop stability of the resulting models is greatly improved by using \(\ell_{1}\) weight regularization and incorporating skip connections into the architecture.
* This advantage is consistent across datasets of varying sizes.
## 2 Theory
### Physics-based model for aluminum extraction
We evaluate NNs for nonlinear system identification by first training them on synthetic data generated from a known physics-based model (PBM). The model used in this work describes the internal dynamics of an aluminum electrolysis cell based on the Hall-Héroult process. Figure 1 shows a diagram of the electrolysis cell. Traditional PBMs of such systems are generally constructed by studying the mass/energy balance of the chemical reactions. Lundby et al. (2022) presents a more detailed exposition of the model that we use in this work. The system is described by a set of ordinary differential equations (ODEs):
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u}), \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{8}\) and \(\mathbf{u}\in\mathbb{R}^{5}\) represent the time-varying states and inputs of the system respectively. The full set of equations are:
\[\dot{x}_{1} =\frac{k_{1}(g_{1}-x_{7})}{x_{1}k_{0}}-k_{2}(x_{6}-g_{1}) \tag{2a}\] \[\dot{x}_{2} =\,u_{1}-k_{3}u_{2}\] (2b) \[\dot{x}_{3} =\,u_{3}-k_{4}u_{1}\] (2c) \[\dot{x}_{4} =\,-\frac{k_{1}(g_{1}-x_{7})}{x_{1}k_{0}}+k_{2}(x_{6}-g_{1})+k_{ 5}u_{1}\] (2d) \[\dot{x}_{5} =\,k_{6}u_{2}-u_{4}\] (2e) \[\dot{x}_{6} =\,\frac{\alpha}{x_{2}+x_{3}+x_{4}}\Bigg{[}u_{2}g_{5}+\frac{u_{ 2}^{2}u_{5}}{2620g_{2}}-k_{7}(x_{6}-g_{1})^{2}\] (2f) \[+k_{8}\frac{(x_{6}-g_{1})(g_{1}-x_{7})}{k_{0}x_{1}}-k_{9}\frac{x_ {6}-x_{7}}{k_{10}+k_{11}k_{0}x_{1}}\Bigg{]}\] \[\dot{x}_{7} =\,\frac{\beta}{x_{1}}\Bigg{[}\frac{k_{9}(g_{1}-x_{7})}{k_{15}k_{ 0}x_{1}}-k_{12}(x_{6}-g_{1})(g_{1}-x_{7})\] (2g) \[+\,\frac{k_{13}(g_{1}-x_{7})^{2}}{k_{0}x_{1}}\,-\frac{x_{7}-x_{8}} {k_{14}+k_{15}k_{0}x_{1}}\Bigg{]}\] \[\dot{x}_{8} =\,k_{17}k_{9}\left(\frac{x_{7}-x_{8}}{k_{14}+k_{15}k_{0}\cdot x _{1}}-\,\frac{x_{8}-k_{16}}{k_{14}+k_{18}}\right), \tag{2h}\]
Figure 1: Schematic of the setup

where the intrinsic properties \(g_{i}\) of the bath mixture are given as:
\[g_{1} =991.2+112c_{x_{3}}+61c_{x_{3}}^{1.5}-3265.5c_{x_{3}}^{2.2} \tag{3a}\] \[-\frac{793c_{x_{2}}}{-23c_{x_{2}}c_{x_{3}}-17c_{x_{3}}^{2}+9.36c_{x_ {3}}+1}\] \[g_{2} =\exp\,\left(2.496-\frac{2068.4}{273+x_{6}}-2.07c_{x_{2}}\right)\] (3b) \[g_{3} =0.531+3.06\cdot 10^{-18}u_{1}^{3}-2.51\cdot 10^{-12}u_{1}^{2}\] (3c) \[+6.96\cdot 10^{-7}u_{1}-\frac{14.37(c_{x_{2}}-c_{x_{2},crit})-0.431 }{735.3(c_{x_{2}}-c_{x_{2},crit})+1}\] \[g_{4} =\frac{0.5517+3.8168\cdot 10^{-6}u_{2}}{1+8.271\cdot 10^{-6}u_{2}}\] (3d) \[g_{5} =\frac{3.8168\cdot 10^{-6}g_{3}g_{4}u_{2}}{g_{2}(1-g_{3})}. \tag{3e}\]
See Table 1 for a description of these quantities.
The values of these constants can be found in Lundby et al. (2022). The dynamics of the system are relatively slow. The control inputs \(u_{1},\ u_{3}\) and \(u_{4}\) are therefore well modeled as impulses that represent discrete events involving the addition or removal of substances. This results in step changes in the linear states \(x_{2},x_{3},x_{5}\), which act as accumulator states for the mass of the corresponding substance (see Table 1). The control inputs \(u_{2}\) and \(u_{5}\) are piecewise constant, and always nonzero. The inputs \(\mathbf{u}\) are determined by a simple proportional controller \(\boldsymbol{\pi}(\mathbf{x})\). The simulation model is derived in Lundby et al. (2022), and we refer to that article for further details.
### Deep neural network with skip connections
A NN with \(L\) layers can be compactly written as an alternating composition of affine transformations \(\mathbf{Wz}+\mathbf{b}\) and nonlinear activation functions \(\boldsymbol{\sigma}:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\):
\[\hat{\mathbf{f}}(\mathbf{z})=\hat{\mathbf{f}}_{L}\circ\cdots\circ\hat{ \mathbf{f}}_{2}\circ\hat{\mathbf{f}}_{1} \tag{4}\]
where the activation function \(\boldsymbol{\sigma}_{i}\), weight matrix \(\mathbf{W}_{i}\), and bias vector \(\mathbf{b}_{i}\) correspond to the \(i\)th layer of the network. The universal approximation property of NNs makes them very attractive as a flexible model class when a lot of data is available. The representation capacity is generally understood to increase with both the depth and the width (the number of neurons in each layer), although early attempts to train very deep networks found them challenging to optimize using backpropagation due to the vanishing gradients problem. One of the major developments that enabled researchers to train deep NNs with many layers is the _skip connection_. A skip connection is simply an additional inter-layer connection that bypasses some of the layers of the network. This provides alternate pathways through which the loss can be backpropagated to the early layers of the NN, which helps mitigate the issues of vanishing and exploding gradients, which were major hurdles to training deeper models. In this work, we utilize a modified DenseNet architecture as proposed by Huang et al. (2017), where the outputs of earlier layers are concatenated to all the consecutive layers. We simplify the structure such that the model only contains skip connections from the input layer to all consecutive layers. We call this architecture InputSkip, which has reduced complexity compared to DenseNet. This design is motivated by the fact that the output of each layer (including the final output) becomes a sum of both a linear and a nonlinear transformation of the initial input \(\mathbf{x}\). Hence, the skip connections from the input layer to consecutive layers facilitate the reuse of the input features for modeling different linear and nonlinear relationships more independently of each other.
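A minimal PyTorch sketch of the InputSkip architecture described above is given below; the layer widths, depth, and ReLU activation are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class InputSkip(nn.Module):
    # Feedforward network with skip connections from the input layer to
    # every consecutive layer: each layer sees [hidden, x] concatenated.
    def __init__(self, n_in, n_hidden, n_out, n_layers=3):
        super().__init__()
        self.first = nn.Linear(n_in, n_hidden)
        self.hidden = nn.ModuleList(
            [nn.Linear(n_hidden + n_in, n_hidden) for _ in range(n_layers - 1)]
        )
        self.out = nn.Linear(n_hidden + n_in, n_out)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.first(x))
        for layer in self.hidden:
            h = self.act(layer(torch.cat([h, x], dim=-1)))
        return self.out(torch.cat([h, x], dim=-1))

# 8 states + 5 inputs concatenated -> 8 state derivatives.
model = InputSkip(n_in=13, n_hidden=32, n_out=8)
```

With this design, the output of every layer is a function of both a nonlinear transformation of the previous layer and the raw input \(\mathbf{x}\), which facilitates the reuse of input features for the linear and nonlinear parts of the dynamics.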
## 3 Method and setup
In this section, we present all the details of data generation and its preprocessing, and the methods that are required to reproduce the work. The steps can be briefly summarized as follows:
* Use Equation (2) with random initial conditions to generate 140 trajectories with 5000 timesteps each. Set aside 40 for training and 100 for testing. Construct 3 training datasets by selecting 10, 20, and 40 trajectories, respectively.
* For each model class and dataset, train 10 instances on the training data.
* Repeat all experiments with \(\ell_{1}\) regularization, see loss function in Equation (5).
* Use trained models to generate predicted trajectories along the test set and compare them to the 100 test trajectories.
### Data generation
Equation (2) was discretized using the RK4 scheme with a fixed timestep \(h=10\,\mathrm{s}\) and numerically integrated on the interval \([0,5000h]\). We used uniformly randomly sampled initial conditions from the intervals shown in Table 2 to generate 140 unique trajectories. We set aside 40 trajectories for training and 100 of the trajectories as a test set. The 40 training trajectories were used to create 3 datasets of varying sizes (small, medium, large), namely 10, 20, and 40 trajectories. In total, the datasets contained 50000, 100000, and 200000 individual data points respectively.
Equation (2) also depends on the input signal \(\mathbf{u}\). In practice, this is given by a deterministic control policy \(\mathbf{u}=\boldsymbol{\pi}(\mathbf{x})\) that stabilizes the system and keeps the state \(\mathbf{x}\) within some region of the state space that is suitable for safe operation. We found that this was insufficient to successfully train our models, because the controlled trajectories showed very little variation after some time, despite having different initial conditions. This lack of diversity in the dataset resulted in models that could not generalize to unseen states, a situation that frequently arose during evaluation. To inject more variety into the data and sample states \(\mathbf{x}\) outside of the standard operational area, we used a stochastic controller
\[\boldsymbol{\pi}_{s}(\mathbf{x})=\boldsymbol{\pi}(\mathbf{x})+\mathbf{r}(t)\]
that introduced random perturbations \(\mathbf{r}(t)\) to the input. These perturbations were sampled using the Amplitude-modulated Pseudo-Random Binary Signal (APRBS) method proposed by Winter and Breitsamter (2018) for nonlinear system identification.
In system identification it is typical to optimize the model to estimate the function \(\hat{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u})\). However, this is not feasible for Equation (2) because the inputs \(\mathbf{u}\) are not differentiable. Instead, we discretize the trajectories using the forward Euler difference and use this as the regression variable:
\[\mathbf{y}_{k}=\frac{\mathbf{x}_{k+1}-\mathbf{x}_{k}}{h}\]
The datasets are then constructed as sets of the pairs \(([\mathbf{x}_{k},\mathbf{u}_{k}],\mathbf{y}_{k})\).
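In code, this dataset construction amounts to stacking states and inputs and taking forward Euler differences; a sketch (with \(h=10\,\mathrm{s}\) as in the text) follows.

```python
import numpy as np

def build_dataset(X, U, h=10.0):
    # X: (T, 8) state trajectory, U: (T, 5) input trajectory, h: timestep [s].
    # Targets are forward Euler differences y_k = (x_{k+1} - x_k) / h.
    y = (X[1:] - X[:-1]) / h
    inputs = np.concatenate([X[:-1], U[:-1]], axis=1)  # pairs ([x_k, u_k], y_k)
    return inputs, y
```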
### Training setup
We optimize the models by minimizing the following loss function using stochastic gradient descent:
\[\mathbf{J}_{\theta}=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}(\mathbf{y}_{ i}-\mathbf{\hat{f}}(\mathbf{x}_{i},\mathbf{u}_{i}))^{2}+\lambda\sum_{j=1}^{L}| \mathbf{W}_{j}| \tag{5}\]
where \(\mathcal{B}\) is a _batch_, i.e., a randomly sampled subset of indices from the dataset, \(L\) is the number of layers of the NN, and \(\lambda\) is the regularization parameter. This loss function is the sum of the mean squared error (MSE) of the model \(\hat{\mathbf{f}}\) with respect to the regression variables \(\mathbf{y}\) and the \(\ell_{1}\) norm of the connection weight matrices \(\mathbf{W}_{j}\) in all layers. We used a batch size of \(|\mathcal{B}|=128\) and the popular ADAM solver proposed by Kingma and Ba (2014) with default parameters to minimize Equation (5).
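A sketch of one training step implementing Equation (5) in PyTorch is shown below; the value of \(\lambda\) shown is a placeholder.

```python
import torch

def l1_penalty(model):
    # Sum of |W_j| over all weight matrices, as in Equation (5); biases excluded.
    return sum(p.abs().sum() for n, p in model.named_parameters() if "weight" in n)

def train_step(model, optimizer, xb, yb, lam=1e-4):
    optimizer.zero_grad()
    loss = torch.mean((model(xb) - yb) ** 2) + lam * l1_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # batch size 128
```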
### Evaluation of model accuracy
As previously mentioned, we are interested in evaluating the long-term predictive accuracy of the models. Starting from a given initial condition \(\mathbf{x}(t_{0})\), the model \(\mathbf{\hat{f}}(\mathbf{x},\mathbf{u})\) is used to generate an estimated trajectory using the recurrence:
\[\hat{\mathbf{x}}_{k+1}=\hat{\mathbf{x}}_{k}+h\,\hat{\mathbf{f}}(\hat{\mathbf{ x}}_{k},\mathbf{u}_{k}) \tag{6}\]
where \(\hat{\mathbf{x}}_{0}=\mathbf{x}_{0}\). Note that the input signal \(\mathbf{u}_{k}\) is replayed directly from the test trajectory. Borrowing a term from the field of time-series analysis, we refer to this as a _rolling forecast_. To evaluate the accuracy of a model over multiple trajectories, we define the Average Normalized Rolling Forecast Mean Squared Error (AN-RFMSE):
\[\text{AN-RFMSE}=\frac{1}{p}\sum_{i=1}^{p}\frac{1}{n}\sum_{j=1}^{n}\left(\frac{ \hat{x}_{i}(t_{j})-x_{i}(t_{j})}{\text{std}(x_{i})}\right)^{2}, \tag{7}\]
where \(\hat{x}_{i}(t_{j})\) is the model estimate of the simulated state variable \(x_{i}\) at time step \(t_{j}\), \(\text{std}(x_{i})\) is the standard deviation of variable \(x_{i}\) in the training set \(\mathcal{S}_{train}\), \(p=8\) is the number of state variables and \(n\) is the number of time steps being averaged over.
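The rolling forecast of Equation (6) and the AN-RFMSE of Equation (7) can be computed as in the sketch below, where `f` is the trained model, `U` the replayed input sequence, and `std` the per-state standard deviations from the training set.

```python
import numpy as np

def rolling_forecast(f, x0, U, h=10.0):
    # Iterate Equation (6): x_{k+1} = x_k + h * f(x_k, u_k), replaying inputs U.
    xs = [x0]
    for u in U:
        xs.append(xs[-1] + h * f(xs[-1], u))
    return np.stack(xs)

def an_rfmse(x_hat, x_true, std):
    # Equation (7): squared error normalized per state by the training-set
    # standard deviation, averaged over time steps and the p state variables.
    return np.mean(((x_hat - x_true) / std) ** 2)
```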
### Evaluation of model stability
A symptom of model instability is that its predictions can _blow up_, which is characterized by a rapid (often exponential) increase in prediction error. More precisely, we say that a blow-up occurs when the normalized mean absolute error over all system states exceeds three (i.e., three standard deviations). We detect this as follows:
\[\max_{j<n}\left[\frac{1}{p}\sum_{i=1}^{p}\left(\frac{|\hat{x}_{i}(t_{j})-x_{i} (t_{j})|}{\text{std}(x_{i})}\right)\right]>3 \tag{8}\]
where \(p=8\) is again the number of state variables and \(n\) is the number of time steps considered. This criterion is conservative, but it does not lead to any significant underestimation of the number of blow-ups: once a model starts to drift rapidly, its error very quickly exceeds three standard deviations.
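Equation (8) translates directly into a blow-up detector, sketched below for a predicted trajectory `x_hat` of shape (n, p):

```python
import numpy as np

def blew_up(x_hat, x_true, std, threshold=3.0):
    # Equation (8): flag a blow-up if the state-averaged normalized absolute
    # error exceeds `threshold` standard deviations at any time step.
    err = np.mean(np.abs(x_hat - x_true) / std, axis=1)  # shape (n,)
    return err.max() > threshold
```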
\begin{table}
\begin{tabular}{l|l} \hline Variable & Initial condition interval \\ \hline \(x_{1}\) & [2060, 4460] \\ \(c_{x_{2}}\) & [0.02, 0.05] \\ \(c_{x_{3}}\) & [0.09, 0.13] \\ \(x_{4}\) & [11500, 16000] \\ \(x_{5}\) & [9550, 10600] \\ \(x_{6}\) & [940, 990] \\ \(x_{7}\) & [790, 850] \\ \(x_{8}\) & [555, 610] \\ \hline \end{tabular}
\end{table}
Table 2: Intervals from which the initial conditions were uniformly sampled.
## 4 Results and Discussions
We characterize the different model classes (PlainDense, PlainSparse, InputSkipDense, InputSkipSparse) by estimating their blow-up frequencies and their rolling forecast mean squared error (RFMSE) on the validation data. The blow-up frequency is an interesting measure since it can indicate how stable the model is in practice.
We perform a Monte Carlo analysis by training 10 instances of each model class and evaluating these on 100 trajectories randomly generated using the true model, yielding 1000 data points for each model class. We repeat the experiments for 3 different dataset sizes to study the data efficiency of the models.
Figure 2 presents the total number of blow-ups recorded within each model class after \(100h\), \(2000h\), and \(5000h\) (short, medium, and long term, respectively). For simplicity, blow-ups were detected by thresholding the computed variance of a predicted trajectory and then manually inspected. It is clear that for short time horizons all the models exhibit robust behavior independently of the size of the training dataset. However, for medium and long time horizons, the PlainDense, PlainSparse, and InputSkipDense architectures exhibit a significant number of blow-ups and are therefore unstable. Figures 2(a)-2(c) show that PlainDense is generally the most unstable, with up to 67% of all trajectories resulting in a blow-up. For the smallest amount of training data (Figure 2(a)), PlainSparse and InputSkipDense have similar blow-up frequencies. For larger datasets, the PlainSparse architecture shows significantly better stability than both PlainDense and InputSkipDense. InputSkipDense and PlainDense both show better stability, in terms of fewer blow-ups, with increasing amounts of training data. However, both of these dense models still suffer from a significant number of blow-ups.
In comparison, almost no blow-ups are recorded when using the InputSkipSparse architecture, even for the small training dataset. In Figure 2, the orange bars corresponding to the blow-up frequency of InputSkipSparse models are not visible for any of the training sets due to the significantly lower number of blow-ups. For InputSkipSparse models trained on the smallest dataset, only 3 out of 1000 possible blow-ups were reported for the longest horizon. Apart from that, no blow-ups were reported for the InputSkipSparse models.
Figure 3 presents a violin plot of the accuracy of each model class, expressed in terms of RFMSE over different time horizons. Only the plot for the smallest dataset (50000 points) is shown, due to the results being very similar. A larger width of the violin indicates a higher density of that given RFMSE value, while the error bars show the minimum and maximum recorded RFMSE values. The model estimates that blew up (see Figure 2) are not included. In this way, we estimate the generalization performance of the models only within their regions of stability. Note that the violin plots for model classes with many blow-ups are made using fewer samples, and can be seen as slightly "cherry-picked". Nonetheless, the InputSkipSparse architecture consistently yields more accurate results, up to an order of magnitude better than the others in the long term.
## 5 Conclusion and Future Work
In this work, we compared the performance of two different model structures trained both with and without sparsity promoting \(\ell_{1}\) regularization. The two model types are standard Multi-Layer Perceptrons (MLP), and a more specialized architecture that includes skip connections from the input layer to all consecutive layers. This yields four different model structures, which we call PlainDense, PlainSparse, InputSkipDense, and InputSkipSparse. The main conclusions of the article are as follows:
* NNs with skip connections are more stable for predictions over long time horizons compared to standard MLPs. Furthermore, the accuracy of NNs with skip connections is consistently higher for all forecasting horizons.

Figure 2: Divergence plot: number of trajectories that blow up over different time horizons. The total number of trajectories is 1000, so the values can be read as a permille.
* The application of sparsity-promoting \(\ell_{1}\) regularization significantly improves the stability of both the standard MLP and InputSkip architectures. This improvement was more apparent for models with the InputSkip architecture.
* The InputSkipSparse showed satisfactory stability characteristics even when the amount of training data was restricted. This suggests that this architecture is more suitable for system identification tasks than the standard MLP structure.
The case study shows that both sparsity-promoting regularization and skip connections can result in more stable NN models for system identification tasks while requiring less data, as well as improving their multi-step generalization for short, medium, and long prediction horizons. Despite the encouraging performance of the sparse-skip networks, we cannot guarantee similar performance on noisy data, as we have only investigated synthetic data devoid of any noise. Such a study will be an interesting line of future work. This case study also has relevance beyond the current setup. In more realistic situations, we often have a partial understanding of the system we wish to model (see Equation (2)), and only wish to use data-driven methods to correct a PBM when it disagrees with the observations (e.g. due to a faulty assumption). As shown in Robinson et al. (2022), combining PBMs and data-driven methods in this way also has the potential to inject instability into the system. Finding new ways to improve or guarantee the out-of-sample behavior of data-driven methods is therefore of paramount importance for the safety of such systems.
## Acknowledgements
This work was supported by the industry partners Borregaard, Elkem, Hydro, and Yara, and by the Research Council of Norway through the projects TAPI: Towards Autonomy in Process Industries (grant no. 294544) and EXAIGON: Explainable AI systems for gradual industry adoption (grant no. 304843).
|
2305.12471 | Mapping Biological Neuron Dynamics into an Interpretable Two-layer
Artificial Neural Network | Dendrites are crucial structures for computation of an individual neuron. It
has been shown that the dynamics of a biological neuron with dendrites can be
approximated by artificial neural networks (ANN) with deep structure. However,
it remains unclear whether a neuron can be further captured by a simple,
biologically plausible ANN. In this work, we develop a two-layer ANN, named as
dendritic bilinear neural network (DBNN), to accurately predict both the
sub-threshold voltage and spike time at the soma of biological neuron models
with dendritic structure. Our DBNN is found to be interpretable and well
captures the dendritic integration process of biological neurons including a
bilinear rule revealed in previous works. In addition, we show DBNN is capable
of performing diverse tasks including direction selectivity, coincidence
detection, and image classification. Our work proposes a biologically
interpretable ANN that characterizes the computation of biological neurons,
which can be potentially implemented in the deep learning framework to improve
computational ability. | Jingyang Ma, Songting Li, Douglas Zhou | 2023-05-21T14:23:18Z | http://arxiv.org/abs/2305.12471v1 | # Mapping Biological Neuron Dynamics into an Interpretable Two-layer Artificial Neural Network
###### Abstract
Dendrites are crucial structures for computation of an individual neuron. It has been shown that the dynamics of a biological neuron with dendrites can be approximated by artificial neural networks (ANN) with deep structure. However, it remains unclear whether a neuron can be further captured by a simple, biologically plausible ANN. In this work, we develop a two-layer ANN, named as dendritic bilinear neural network (DBNN), to accurately predict both the sub-threshold voltage and spike time at the soma of biological neuron models with dendritic structure. Our DBNN is found to be interpretable and well captures the dendritic integration process of biological neurons including a bilinear rule revealed in previous works. In addition, we show DBNN is capable of performing diverse tasks including direction selectivity, coincidence detection, and image classification. Our work proposes a biologically interpretable ANN that characterizes the computation of biological neurons, which can be potentially implemented in the deep learning framework to improve computational ability.
## 1 Introduction
Neurons are fundamental units of the nervous system to perform complex computational functions. Dendrites, which are the branched extensions of neurons, receive multiple spatio-temporal inputs from other neurons via synapses and play a vital role in single neuron computation. The integration of signals within dendrites enables diverse computations such as direction selectivity [1; 2], coincidence detection [3], and logical operations [4; 5]. The powerful computational abilities of individual biological neurons, particularly due to their dendritic arborizations, present a formidable challenge for the development of artificial neuronal models that can accurately capture the full range of dynamics exhibited by real neurons.
Since the integration mechanisms of dendrites are complicated, early models of neurons usually consider a point neuron that contains only the somatic structure and ignores the dendrites. For example, the McCulloch and Pitts neuron model linearly sums all the synaptic inputs at the soma and generates the outputs through an activation function [6]. This kind of artificial neuron is widely used in many common machine learning models, such as Multilayer Perceptrons [7], LeNet [8], AlexNet [9], etc. However, both experimental and theoretical results indicate that dendrites process signals in a nonlinear manner and should be regarded as independent computational units [10; 11; 12; 13; 14]. Hence the classical point neuron model is too simplified to capture the full characteristics of biological neurons, and dendritic nonlinearities should therefore be taken into account. Recently, some studies have attempted to incorporate dendrites into single neuron models [15; 16; 17; 18; 19; 20; 21]. However, there is still a lack of a simple and biologically interpretable model that can reflect both the sub- and supra-threshold behaviors of biological neurons with dendritic structure.
In this paper, we introduce a novel two-layer neural network named the dendritic bilinear neural network (DBNN) that can accurately replicate the input-output (I/O) mapping of biological neurons. We train DBNN with data from biological neurons receiving multiple spatio-temporal synaptic inputs. We show that DBNN can faithfully predict the somatic response, including both the sub-threshold voltage and the spike time of the neuron: 95% of the variance of the sub-threshold voltage can be explained, and the precision of the spike time prediction can exceed 80%. Our DBNN is concise, and the number of parameters is much smaller than in multi-layer ANNs. The predictive power and computational capacity of DBNN have been verified on different types of biological neuron models. Furthermore, we find that the trained parameters in DBNN are biologically interpretable, reflecting both the single post-synaptic potential (PSP) response and a bilinear dendritic integration rule for multiple synaptic inputs revealed in previous studies [13; 14]. We can then use proper initial parameter values based on these biological properties to accelerate training. Moreover, we demonstrate that DBNN can characterize the dendritic computational power of biological neurons through solving direction selectivity and coincidence detection problems. We also apply DBNN to an image classification task, where it outperforms an ANN with the same number of trainable parameters. Our work presents a comprehensive framework for incorporating dendritic features into a neural network.
Related worksPrevious works have attempted to map the dynamics of biological neurons with dendrites onto artificial neural networks (ANNs). While two-layer ANNs are successful in fitting the firing rate of hippocampus CA1 pyramidal neurons [15] or fast spiking basket neurons [19], they cannot predict the sub-threshold voltage and exact spike time. To address these limitations, ANNs incorporating temporal convolutions have been developed for accurate predictions of spike time [16; 17; 21]. Additionally, hierarchical cascade models have been proposed to capture the sub-threshold voltage of L2/3 pyramidal neurons [18]. Recently, a state-of-the-art seven-layer temporal-convolutional network (TCN) [22] has been developed that fully captures both sub-threshold voltage and spike time of L5 pyramidal neurons[20], but at high computational cost due to the large number of parameters. Moreover, it is unclear how these parameters are related to the properties of biological neurons.
## 2 Results
To establish a mapping from the dynamics of a biological neuron to an ANN, we develop a two-layer ANN called the dendritic bilinear neural network (DBNN) based on the features of single neuronal computation. DBNN is trained using the input-output data of a biological neuron model with dendritic structure simulated by NEURON software [23] to fully capture both the sub-threshold voltage and spike time dynamics of the neuron.
### Dendritic bilinear neural network
A single neuron receives thousands of synaptic inputs through its dendrites from other neurons. These inputs are integrated and transmitted to the soma, where the output is generated. Given a total of \(N\) synapses on the dendrites receiving pre-synaptic spike trains \(x_{i}(t)\), the somatic response \(v(t)\) and spike time \(\hat{t}\) can in principle be calculated using Rall's cable theory [24; 25; 26] by solving a large system of differential equations. However, this method requires expensive computational resources. Here, we aim to construct a biologically interpretable ANN that accurately captures the dynamics of biological neurons. Specifically, when presented with the same inputs \(x_{i}(t),i=1,2,\cdots,N\) as a biological neuron, the proposed ANN will generate an output that precisely predicts both the somatic voltage and the spike time of the biological neuron.
To develop an ANN that can accurately capture the intricate input-output relationships of biological neurons, we first describe the simplest case: how the neuronal voltage responds to a single synaptic input. To this end, we employ a double-exponential function (equation (1)) to describe the postsynaptic potential [16; 27],
\[k_{i}(t)=\omega_{i}(1-e^{-t/\tau_{r,i}})e^{-t/\tau_{d,i}}, \tag{1}\]
where \(i\) is the synapse index, \(\omega_{i}\) is the weight of synaptic input and \(\tau_{r,i}\) and \(\tau_{d,i}\) are rising and decay time constant, respectively. We can then express the response at the synapse when multiple inputs are received at different times as the convolution of the corresponding pre-synaptic spike train \(x_{i}(t)\) with the response kernel defined as \(k_{i}(t),i.e.\),
\[v_{i}(t)=x_{i}(t)\otimes k_{i}(t). \tag{2}\]
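For illustration, the kernel of equation (1) and the convolution of equation (2) can be implemented as below; the time constants and weight are placeholder values, since in DBNN these parameters are learned from data.

```python
import numpy as np

def psp_kernel(t, w, tau_r, tau_d):
    # Double-exponential PSP kernel of equation (1), defined for t >= 0.
    return w * (1.0 - np.exp(-t / tau_r)) * np.exp(-t / tau_d)

def synaptic_response(spikes, w, tau_r, tau_d, dt=1.0, t_max=200.0):
    # Equation (2): convolve a binary pre-synaptic spike train with the kernel.
    t = np.arange(0.0, t_max, dt)
    k = psp_kernel(t, w, tau_r, tau_d)
    return np.convolve(spikes, k)[: len(spikes)]

# Placeholder values in ms; in DBNN these parameters are learned from data.
spikes = np.zeros(1000)
spikes[[100, 130, 500]] = 1.0
v_i = synaptic_response(spikes, w=1.0, tau_r=2.0, tau_d=20.0)
```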
We now describe the scenario that a neuron receives inputs from multiple synapses located at different dendritic sites. Previous experimental results have suggested that a neuron integrates these inputs in a nonlinear manner [28; 11]. In contrast to studies that use sigmoid functions to describe the integration process, here we attempt to characterize this process by utilizing the simplest possible nonlinear function: the quadratic polynomial. To be specific, given the voltage responses \(v_{1}(t),v_{2}(t),...,v_{N}(t)\) induced by individual input received at each single synapse, the integrated response described by a quadratic integration function is as follows:
\[v(t)=\sum_{i=1}^{N}v_{i}(t)+\sum_{j=1}^{N}\sum_{k=1}^{j-1}a_{jk}v_{j}(t)v_{k}( t)+v_{0}.\]
In equation (2.1), note that we exclude all the square terms \(v_{j}^{2}\) and retain only the cross terms \(v_{j}v_{k}\) in order to make the integration rule valid even when an individual input \(i\) is given, i.e., the response is \(v_{i}(t)\) in such a case. Based on equations (1)(2)(2.1), we build the two-layer dendritic bilinear neural network (DBNN) with its architecture illustrated in Figure 1(a).
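A compact PyTorch sketch of the quadratic integration in equation (2.1) is given below; `v` collects the per-synapse responses \(v_{i}(t)\) from equations (1)-(2), and only the strictly lower-triangular part of the coefficient matrix is used, so that the square terms \(v_{j}^{2}\) are excluded as described above.

```python
import torch

def dbnn_output(v, A, v0):
    # Quadratic integration of equation (2.1). v: (T, N) per-synapse responses
    # v_i(t); A: (N, N) coefficients a_jk, used strictly below the diagonal so
    # that square terms v_j^2 are excluded; v0: scalar bias.
    linear = v.sum(dim=1)
    cross = torch.einsum('ti,ij,tj->t', v, torch.tril(A, diagonal=-1), v)
    return linear + cross + v0
```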
### Data generation and training protocol
We first use DBNN to approximate the activity of a biological neuron. To generate the training data for DBNN, we simulate biologically realistic neuron models with dendrites using the NEURON software [23]. Specifically, we utilize three representative types of neuron models, namely a basal ganglion neuron [29], a layer 2/3 pyramidal neuron [30], and a layer 5 pyramidal neuron [31]. Each neuron is endowed with dendritic structure and active ion channels including \(Na^{+},K^{+}\), and \(Ca^{2+}\). Excitatory and inhibitory synapses are randomly distributed on the dendrites in numbers depending on the specific neuron model, with \(N=9\), \(749\), and \(1278\) synapses for the basal ganglion, layer 2/3 pyramidal, and layer 5 pyramidal neurons, respectively. The excitatory synapses include both AMPA and NMDA receptors, and the inhibitory synapses include GABA-A receptors. Each synapse receives independent inhomogeneous Poisson spike trains. The input spike trains and the corresponding somatic voltage responses of the neuron are recorded for subsequent use as input and output data for DBNN, respectively. Each stimulation lasts for six seconds with millisecond resolution, and the stimulation is repeated 1000 times for training and 100 times for testing with different initializations. Detailed information on the data generation can be found in the Supplementary Material.
In the training procedure of DBNN, we utilize the mean square error (MSE) as the loss function to quantify the difference between DBNN's output and the voltage of the biological neuron model. To optimize the parameters, the standard stochastic gradient descent (SGD) [32] algorithm is employed. We set the learning rate to be \(0.001\) and use a mini-batch size of 128. DBNN is trained for 1000 epochs. All of the parameters in DBNN, including the time constants \(\tau_{r,i}\) and \(\tau_{d,i}\), weight \(\omega_{i}\), quadratic coefficient \(a_{jk}\) and bias term \(v_{0}\), are updated during training. The training process is carried out on an Nvidia A100 GPU and takes approximately 2-3 hours to complete.
### Predictive performance of DBNN
Following the training process, we observe that DBNN is capable of accurately predicting the output of biological neurons, including the sub-threshold voltage and spike time. To measure the performance of DBNN in predicting the sub-threshold voltage, we utilize the variance explained (VE) metric defined as
\[\text{VE}=1-\frac{\sum_{i}(y(t_{i})-y^{\prime}(t_{i}))^{2}}{\sum_{i}(y(t_{i})- \mathbb{E}[y(t)])^{2}}\]
where \(y(t_{i})\) and \(y^{\prime}(t_{i})\) are the true and predicted voltages at the discrete times \(t_{i}\), and \(\mathbb{E}[y(t)]\) is the mean value of the true voltage. The closer the VE is to 1, the more accurate the prediction made by DBNN.
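The VE metric reads directly as code:

```python
import numpy as np

def variance_explained(y_true, y_pred):
    # VE as defined above: 1 - SSE / total variance of the true voltage.
    sse = np.sum((y_true - y_pred) ** 2)
    total = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - sse / total
```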
After training, DBNN is first applied to predict the sub-threshold voltage of a passive layer 2/3 pyramidal neuron without active ion channels, which yields a VE of approximately 99%. When active \(Na^{+},K^{+}\), and \(Ca^{2+}\) channels are included, the dendritic integration process becomes more nonlinear. In this case, the VE still reaches 95% when AMPA receptors are used exclusively, and 93% when both AMPA and NMDA receptors are used (Figure 1(b)(c) and Table 1).
We next use DBNN to predict spike time, which is a more challenging task because spike time results from the highly nonlinear process of action potential generation, beyond the quadratic nonlinearity in DBNN. To circumvent the difficulty of fitting action potentials, DBNN predicts a spike when the sub-threshold voltage crosses a firing threshold. To be specific, when the predicted output \(v(t_{n})\) at time \(t_{n}\) satisfies \(v(t_{n})\geq v_{th}\) and \(v(t_{n-1})<v_{th}\), we define \(\hat{t}=t_{n}\) as the predicted spike time, where \(t_{n-1}\) and \(t_{n}\) are two consecutive discrete time points and \(v_{th}\) is a firing threshold value. Note that DBNN as described so far does not account for a reset mechanism, another strong nonlinear effect in the biological neuron whereby the somatic voltage rapidly drops after a spike due to the outflow of \(K^{+}\). To further improve DBNN, we modify equation (2.1) to incorporate a reset term
\[v(t)=\sum_{i=1}^{N}v_{i}(t)+\sum_{j=1}^{N}\sum_{k=1}^{j-1}a_{jk}v_{j}(t)v_{k}(t)+v_{0}+v_{reset}\sum_{l}\Theta(t-\hat{t}_{l})e^{-(t-\hat{t}_{l})/\tau_{reset}}, \tag{3}\]
Figure 1: DBNN is capable of predicting the dynamics of layer 2/3 pyramidal neurons, including the sub-threshold voltage and the spike time. (a) The architecture of DBNN, where the symbol \(\sum\) represents linear summation and \(\sum+\times\) represents quadratic multiplication. The rest notations are the same as those described in equations (1)(2)(2.1). (b) Left: the morphology of the pyramidal neuron. Right: the output of DBNN (cyan) well agrees with the membrane potential (MP) of the pyramidal neuron model shown in Left (black). Red dashed line is the firing threshold. (c) The scatter plot of the predicted sub-threshold MP by DBNN versus that of the biological neuron model. Lower right is the bar plot of variance explained (VE) for passive neurons (Pas), active neurons with only AMPA receptors (A) and both AMPA and NMDA receptors (A+N). (d) The precision (Pre) and recall (Rec) values when using DBNN to predict the spike time of the neuron. Two different electrophysiology conditions are considered here as A and A+N in Figure 1(c).
where \(v_{reset}\) is the amplitude of the voltage reset after a spike, \(l\) is the spike index, \(\Theta\) is the Heaviside function, and \(\tau_{reset}\) is the reset time constant.
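The threshold-crossing rule and the reset term of equation (3) can be sketched as follows; the threshold `v_th` and the values of `v_reset` and `tau_reset` below are illustrative assumptions rather than fitted parameters.

```python
import numpy as np

def detect_spikes(v, v_th, dt):
    """Predicted spike times: upward crossings of the firing threshold v_th,
    i.e. v(t_n) >= v_th while v(t_{n-1}) < v_th."""
    crossings = (v[1:] >= v_th) & (v[:-1] < v_th)
    return (np.nonzero(crossings)[0] + 1) * dt

def add_reset_term(v, spike_times, dt, v_reset=-5.0, tau_reset=10.0):
    """Add the reset term of equation (3):
    v_reset * sum_l Heaviside(t - t_l) * exp(-(t - t_l) / tau_reset)."""
    t = np.arange(len(v)) * dt
    out = v.astype(float)
    for t_hat in spike_times:
        mask = t >= t_hat                      # Heaviside step
        out[mask] += v_reset * np.exp(-(t[mask] - t_hat) / tau_reset)
    return out
```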
The accuracy of spike time prediction is then evaluated using the metrics of precision and recall. We treat a predicted spike time \(\hat{t}\) as a true positive (TP) case if the true spike time is within a 10 millisecond time window of \(\hat{t}\). Precision is then calculated as the ratio of the number of true positives to the sum of true positives and false positives, while recall is calculated as the ratio of the number of true positives to the sum of true positives and false negatives. For neurons with only AMPA receptors, a precision of \(91\%\) and a recall of 89% can be achieved, while for the biological neuron model equipped with both AMPA and NMDA receptors, the precision is 82% and the recall is 78%. These metrics suggest that DBNN can predict spike times with high accuracy (Figure 1(d) and Table 1).
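A minimal sketch of this evaluation is given below; matching each true spike to at most one predicted spike is a simplifying assumption, since the text does not specify how multiple matches are resolved.

```python
import numpy as np

def spike_precision_recall(pred_times, true_times, window=10.0):
    """Precision and recall with a +/- `window` ms matching tolerance."""
    remaining = list(true_times)
    tp = 0
    for t_hat in pred_times:
        match = next((t for t in remaining if abs(t - t_hat) <= window), None)
        if match is not None:
            tp += 1
            remaining.remove(match)   # each true spike is matched at most once
    fp = len(pred_times) - tp         # predictions without a nearby true spike
    fn = len(remaining)               # true spikes left unmatched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```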
To show the advantage of DBNN, we next perform a comparative analysis of DBNN and other ANNs developed recently [18; 20] for predicting a biological neuron's activity. We compare these models in various cases, including passive membranes, active membranes with only AMPA receptors (A), and with both AMPA and NMDA receptors (A+N). The results, summarized in Table 1, demonstrate that DBNN outperforms the hierarchical cascade of linear-nonlinear models (hLN) [18] in predicting sub-threshold voltage, and that hLN fails to predict spike time. When compared to the state-of-the-art seven-layer TCN [20], DBNN achieves similar prediction accuracy for both sub-threshold voltage and spike time but with far fewer parameters (\(\mathcal{O}(10^{5})\) in DBNN versus \(\mathcal{O}(10^{7})\) in the seven-layer TCN). These results suggest that DBNN provides a simple and effective framework to accurately capture the dynamics of a biological neuron with dendritic structures.
DBNN also generalizes well in several respects. After training on a dataset in which all synapses of the biological neuron model are activated, DBNN can successfully predict the somatic sub-threshold voltage for entirely different input patterns, such as varied synaptic input frequencies or the activation of only a subset of synapses (Figure S1). Furthermore, DBNN's predictive capacity is verified across different biological neuron types, including ganglion and pyramidal neurons (Figure S1).
## 3 Biological interpretation of DBNN
We next demonstrate another advantage of DBNN: its parameters after training are biologically interpretable. The parameters capture the key features of the post-synaptic potential (PSP) induced by a single input and a bilinear dendritic integration rule of synaptic inputs - both are important characteristics of biological neurons. The biological interpretability of DBNN suggests that DBNN can effectively exploit dendritic features to achieve computational capabilities.
### DBNN captures the post-synaptic potentials
To understand how DBNN can successfully predict the activity of a biological neuron, we first examine the input kernel \(k_{i}(t)\) (equation (1)) in DBNN after training. This kernel reflects the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Neuron type & Objective & DBNN & hLN [18] & TCN [20] \\ \hline Passive & Sub-threshold voltage & **VE: 0.99** & VE: 0.95 & **VE: 0.99** \\ \hline \multirow{2}{*}{Active (A)} & Sub-threshold voltage & **VE: 0.95** & VE: 0.92 & **VE: 0.95** \\ & Spike time & **Pre: 0.91, Rec: 0.89** & N/A & Pre: 0.64, Rec: 0.64 \\ \hline \multirow{2}{*}{Active (A+N)} & Sub-threshold voltage & VE: 0.93 & VE: 0.91 & **VE: 0.94** \\ & Spike time & **Pre: 0.82, Rec: 0.78** & N/A & Pre: 0.58, Rec: 0.60 \\ \hline \multirow{2}{*}{All} & Number of parameters & \(\mathcal{O}(10^{5})\) & **\(\mathcal{O}(10^{3})\)** & \(\mathcal{O}(10^{7})\) \\ & Running time for training & **2-3 hours** & **2-3 hours** & 2-14 days \\ \hline \multicolumn{5}{c}{Note: N/A means that the model cannot predict the spike time} \\ \end{tabular}
\end{table}
Table 1: Results compared with related works
output when only an input spike \(x_{i}(t)\) is given and no input is given to any other node. Therefore, it may relate to the postsynaptic potential of a biological neuron that receives only an individual synaptic input. To investigate this relation, we simulate each synapse located at different positions of the dendritic branches and measure the corresponding PSP at the soma of the biological neuron. Subsequently, we compare the PSPs with the double exponential kernels built from the parameters \(\omega_{i}\), \(\tau_{r,i}\), and \(\tau_{d,i}\) trained on data from the same neuron. Interestingly, our results reveal a strong similarity between the PSPs and the double exponential kernels, which is nontrivial considering that the training dataset only involves the scenario in which all synapses are activated and does not include the scenario in which a single input is given. Furthermore, the similarity holds for the activation of both an individual excitatory synapse and an individual inhibitory synapse (Figure 2(a) and Figure 2(b)).
These findings suggest that using the double-exponential form as the input kernel is valid, given its ability to capture how an individual synaptic input propagates to the soma. Furthermore, our results indicate that the parameters in the double-exponential kernel can be trained effectively, even though single-input cases are not present in the training data. Our training setup associates each index \(i\) with a specific synaptic location on the dendrites. In turn, the weight \(\omega_{i}\) represents the magnitude of the PSP activated by the corresponding synapse, while the time constants \(\tau_{r,i}\) and \(\tau_{d,i}\) are related to the location on the dendrites where the synapse resides. In general, the farther a synapse is located from the soma, the larger its associated time constants will be. Therefore, the parameters of the double-exponential kernel reflect the synaptic input location and the characteristics of single PSPs of the biological neuron.
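The exact parameterization of the kernel is fixed in equation (1) earlier in the paper; the sketch below assumes the common double-exponential form \(w\,(e^{-t/\tau_{d}}-e^{-t/\tau_{r}})\) purely to illustrate how the fitted parameters map onto a PSP shape.

```python
import numpy as np

def double_exp_kernel(t, w, tau_r, tau_d):
    """Double-exponential PSP shape: rise time constant tau_r, decay tau_d.
    This exact form is an assumption for illustration."""
    t = np.asarray(t, dtype=float)
    k = w * (np.exp(-t / tau_d) - np.exp(-t / tau_r))
    return np.where(t >= 0.0, k, 0.0)

# Illustrative parameters: a distal synapse is expected to show
# larger time constants than a proximal one.
t = np.linspace(0.0, 100.0, 1000)                        # ms
proximal = double_exp_kernel(t, w=1.0, tau_r=2.0, tau_d=10.0)
distal = double_exp_kernel(t, w=0.5, tau_r=5.0, tau_d=30.0)
```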
### DBNN captures the dendritic integration rule
As the input kernel corresponds to a single PSP of a biological neuron, the next layer in DBNN, described by the quadratic function in equation (2.1), is expected to relate to the nonlinear integration of multiple synaptic inputs. Previous investigations [13; 14], which entailed electrophysiological experiments and theoretical analyses, reveal that the integration of a pair of synaptic inputs follows a bilinear rule of the form
\[V_{S}(t)\approx V_{1}(t)+V_{2}(t)+\kappa(t)V_{1}(t)V_{2}(t), \tag{4}\]
where \(V_{1}(t)\) and \(V_{2}(t)\) are the PSPs when the two synapses are activated individually, \(V_{S}(t)\) is the somatic response to the same pair of synaptic inputs activated simultaneously, and \(\kappa(t)\) is the integration coefficient for the pair of synaptic inputs, which depends only on the locations of the inputs and not on their strengths. This rule is further generalized to all types of inputs, including a pair of excitatory inputs, a pair of inhibitory inputs, and multiple excitatory and inhibitory inputs. In the generalized case, the integration coefficients are denoted as \(\kappa_{EE}(t)\), \(\kappa_{EI}(t)\), and \(\kappa_{II}(t)\).
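Given simulated traces, the integration coefficient can be read off by solving equation (4) pointwise; the sketch below masks time points where \(V_{1}V_{2}\) is close to zero, a practical assumption to avoid ill-conditioned division.

```python
import numpy as np

def integration_coefficient(v_s, v_1, v_2, eps=1e-3):
    """Estimate kappa(t) from equation (4): V_S ~ V_1 + V_2 + kappa V_1 V_2."""
    v_s = np.asarray(v_s, dtype=float)
    v_1 = np.asarray(v_1, dtype=float)
    v_2 = np.asarray(v_2, dtype=float)
    product = v_1 * v_2
    kappa = np.full_like(product, np.nan)   # undefined where V_1 V_2 ~ 0
    valid = np.abs(product) > eps
    kappa[valid] = (v_s[valid] - v_1[valid] - v_2[valid]) / product[valid]
    return kappa
```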
Upon observing the similarity between the quadratic function designed in DBNN (equation (2.1)) and the bilinear dendritic integration rule (equation (4)), we investigate whether DBNN can be trained to learn this rule. We first measure the values of the integration coefficients (\(\kappa_{EE}(t)\), \(\kappa_{EI}(t)\), and \(\kappa_{II}(t)\)) in the biological neuron by giving a pair of inputs first separately and then simultaneously to all possible pairs of synapses. We then compare these integration coefficients with the corresponding quadratic coefficients \(a_{ij}\) in DBNN (equation (2.1)) after training. We find that the quadratic coefficients match well with the integration coefficients (Figure 2(c) and Figure 2(d)). This finding suggests that DBNN accurately employs the bilinear dendritic integration rule for the integration of multiple synaptic inputs.
From the above, we have gained insight into how the dynamics of a real biological neuron with dendrites can be captured by DBNN. For a single synaptic input at different locations, the double-exponential kernels in DBNN learn the shape of both excitatory and inhibitory PSPs of the biological neuron. For multiple inputs from various locations, DBNN learns the bilinear rule to integrate all PSPs in a simple nonlinear manner. As a result, the sub-threshold voltage is well fitted by DBNN, leading to the reasonably accurate inference that the neuron emits a spike each time the membrane potential reaches the firing threshold. This result highlights the biological interpretability of DBNN.
### Training speed and low dimensional structure
The number of parameters in DBNN is much smaller than in deep neural networks with many layers. However, owing to the quadratic function specified in equation (2.1), the number of parameters in DBNN exceeds that of the linear integration or sigmoid nonlinearity utilized in Ujfalussy et al. [18]. Without additional constraints, training DBNN therefore requires more computational time. Nonetheless, Sections 3.1 and 3.2 have shown that the parameters of DBNN are biologically interpretable, which suggests appropriate strategies to expedite the training process. For example, all the rise time constants \(\tau_{r,i}\) of the different kernels can be initialized to \(5\,ms\), a biologically meaningful value for synaptic time constants. With this initialization, the training time of DBNN is reduced considerably and even approaches that of models employing sigmoid nonlinearity (Table 1 and Figure S2).
Furthermore, Beniaguev et al. [21] have discovered that the PSPs induced by synaptic inputs at various sites of the biological neuron exhibit a low-dimensional structure and can be reduced to 3-dimensional vectors. Consequently, if we use the 3-dimensional vectors to represent all the PSPs of the real neuron, we can transform the original problem into a much easier linear regression problem. This approach also accelerates training (Figure S2). The proof of the reduction to a linear regression problem is given in the Supplementary Material.
## 4 Applications
Biological neurons with dendritic morphology are capable of performing the computations of direction selectivity and coincidence detection. Here, we show that our DBNN is also able to perform these tasks. We further utilize DBNN to address the MNIST classification task, where it can outperform a traditional two-layer ANN. These results indicate that DBNN is capable of capturing the complex dendritic computational power of biological neurons.
Figure 2: Biological interpretation of the parameters in DBNN after training. (a) The good match between the PSP of the biological neuron induced by a single excitatory synaptic input (black) and the double exponential kernel in DBNN (cyan). (b) As in Figure 2(a), the double exponential kernel (cyan) also matches the corresponding inhibitory PSP (IPSP) curve (black). (c) The scatter plot of the bilinear integration coefficient \(\kappa_{ij}\) for each pair of synaptic locations versus the corresponding quadratic coefficient \(a_{ij}\) in DBNN. (d) The mean values of the bilinear integration coefficient and the corresponding quadratic coefficient \(a_{ij}\) for different pairs of excitatory (E) or inhibitory (I) synapses, including EE, EI, and II.
### Direction selectivity
According to the experiments by Branco et al. [2], dendrites of cortical pyramidal neurons exhibit selectivity for the direction of spatio-temporal synaptic inputs. Specifically, if a sequence of excitatory synaptic inputs is received from the distal dendrite toward the soma, which is the preferred direction, it is more likely to promote somatic firing. Conversely, inputs received in the non-preferred direction are less likely to generate a somatic spike (Figure 3(a)).
Notably, direction selectivity is also present in DBNN. After training DBNN to fit the response of a biological neuron, we perform the following direction selectivity experiment: activating the input nodes of DBNN associated with the different synapses in the preferred direction leads DBNN to emit a spike. However, when the synapses are activated in the opposite (i.e., non-preferred) direction, the resulting output of DBNN is insufficient to exceed the threshold required for spike generation (Figure 3(a)).
### Coincidence detection
In addition to direction selectivity, biological neurons with dendrites can also perform coincidence detection. For example, in the early auditory pathway, sounds detected by the left and right ears generate synaptic inputs located on different dendritic branches of bipolar neurons [3]. These neurons can then detect input coincidence, i.e., whether the detected sounds come from different sides (Figure 3(b)).
We find that DBNN is also capable of performing coincidence detection. As in Section 4.1, the well-trained DBNN is used to verify this capability. When two synaptic inputs are simultaneously activated at separate dendritic branches, DBNN's response is strong enough to generate
Figure 3: Applications of DBNN for direction selectivity, coincidence detection, and MNIST classification. (a) Left: the illustration of direction selectivity. Right: the realization of direction selectivity in DBNN with the synaptic inputs in the preferred direction (cyan) and the non-preferred direction (black). The red dashed line is the firing threshold. (b) Left: the illustration of coincidence detection. Right: the realization of coincidence detection in DBNN with the synaptic inputs from different sides (cyan) and the same side (black). (c) Dividing the original MNIST digit into nine equal parts. (d) DBNN with logistic regression (LR) outperforms using LR directly when solving the MNIST binary classification task. (e) DBNN with a linear classifier (LC) outperforms using LC directly or a two-layer neural network (NN) when solving the MNIST 10-category classification task.
a spike. In contrast, if we activate two synaptic inputs located on the same dendritic branch, DBNN's response cannot reach the firing threshold (Figure 3(b)). In this way, DBNN performs coincidence detection. Our findings indicate that DBNN exhibits dendritic computational power similar to that of biological neurons.
### MNIST classification
The MNIST classification task [33] is a well-known benchmark problem in the field of machine learning. Each sample in this dataset is a \(28\times 28\) pixel image corresponding to a handwritten digit ranging from 0 to 9. We transform the original MNIST dataset so that it can be used as input to DBNN. To achieve this, we first remove the first row and column from each image, resulting in \(27\times 27\) pixel images. Next, we convert the original gray-scale map to a black-white map by setting all pixels with values greater than 0 to 1, and segment each image into 9 regions (as shown in Figure 3(c)). We then flatten each region of \(9\times 9\) pixels into a row vector and expand it by making 9 consecutive copies of each element. In this way, each handwritten digit is associated with a \(9\times 729\) spatio-temporal sequence, which serves as the input to DBNN through 9 distinct synapses.
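A sketch of this preprocessing is shown below, assuming the 9 regions form a \(3\times 3\) grid of \(9\times 9\) blocks as suggested by Figure 3(c).

```python
import numpy as np

def mnist_to_spatiotemporal(img):
    """Map a 28x28 MNIST image to a (9, 729) binary spatio-temporal sequence,
    one row per synapse, following the preprocessing described above."""
    img = img[1:, 1:]                        # drop first row/column -> 27x27
    img = (img > 0).astype(np.float32)       # binarize the gray-scale map
    rows = []
    for i in range(3):                       # 3x3 grid of 9x9 regions
        for j in range(3):
            region = img[9 * i:9 * (i + 1), 9 * j:9 * (j + 1)]
            flat = region.reshape(-1)        # 81 pixels
            rows.append(np.repeat(flat, 9))  # 9 consecutive copies -> 729 steps
    return np.stack(rows)
```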
Initially, we train DBNN on a dataset generated from ganglion neurons that have 9 distinct synapses on their dendrites [29]. Once DBNN's parameters are fixed, we feed the modified spatio-temporal sequences from the MNIST dataset into DBNN and collect the generated spike train corresponding to each digit. Consequently, we obtain spike trains of 729 milliseconds duration that serve as decoding signals for the original MNIST digits. Here, we consider both a simpler binary classification task and the traditional ten-category classification task. For the former problem, we utilize logistic regression to classify the spike trains produced by DBNN and compare with the results obtained by directly using logistic regression on the MNIST digits. Notably, the classification accuracy can be improved to \(99\%\) (as illustrated in Figure 3(d)). For the ten-category classification task, we employ a linear classifier to process the spike trains generated by DBNN and compare with the results obtained solely via the linear classifier or via a two-layer ANN with 10 hidden neurons (to keep the number of trainable parameters the same) and ReLU activation. Again, using DBNN results in higher accuracy (Figure 3(e)). Detailed information on the logistic regression, the linear classifier, and the two-layer ANN can be found in the Supplementary Material.
## 5 Discussion
We have presented a dendritic bilinear neural network (DBNN) that can accurately capture the somatic dynamics of biological neurons with dendritic structures, including the sub-threshold voltage and spike times. We further demonstrate that DBNN's parameters are biologically interpretable. The double exponential kernels in the first layer of DBNN represent post-synaptic potentials for single synaptic inputs located at different dendritic branches, while the quadratic terms in the second layer reflect the bilinear dendritic integration rule for multiple synaptic inputs. Additionally, we find that DBNN has the capacity to perform direction selectivity and coincidence detection, which demonstrates that DBNN's dendritic computational power can approach that of biological neurons. We also utilize DBNN to solve an image classification task, outperforming a traditional two-layer ANN.
Although we have shown that DBNN can capture the dynamics of different biological neuron types, including ganglion neurons and pyramidal neurons, there are other types of neurons with diverse morphologies and electrophysiological properties. It remains an open question whether DBNN can capture the dynamics of all neuron types or whether its structure should be modified to apply to different biological neuron models. The integration of DBNN into existing deep learning frameworks as a computational unit is also an interesting direction that could enhance the computational capacity of ANN models. Our work provides a method of mapping dendritic function onto a biologically interpretable neural network, which gains additional computational power in this way.
## Acknowledgments
This work was supported by Science and Technology Innovation 2030 - Brain Science and Brain-Inspired Intelligence Project with Grant No. 2021ZD0200204 and the Lingang Laboratory Grant No. LG-QS-202202-01 (S.L., D.Z.); National Natural Science Foundation of China Grant 12271361
(S.L.); National Natural Science Foundation of China with Grant No. 12071287, 12225109 (D.Z.), Shanghai Municipal Science and Technology Major Project 2021SHZDZX0102 and the Student Innovation Center at Shanghai Jiao Tong University (S.L., D.Z.).
|
2307.07956 | Automated Polynomial Filter Learning for Graph Neural Networks | Polynomial graph filters have been widely used as guiding principles in the
design of Graph Neural Networks (GNNs). Recently, the adaptive learning of the
polynomial graph filters has demonstrated promising performance for modeling
graph signals on both homophilic and heterophilic graphs, owning to their
flexibility and expressiveness. In this work, we conduct a novel preliminary
study to explore the potential and limitations of polynomial graph filter
learning approaches, revealing a severe overfitting issue. To improve the
effectiveness of polynomial graph filters, we propose Auto-Polynomial, a novel
and general automated polynomial graph filter learning framework that
efficiently learns better filters capable of adapting to various complex graph
signals. Comprehensive experiments and ablation studies demonstrate significant
and consistent performance improvements on both homophilic and heterophilic
graphs across multiple learning settings considering various labeling ratios,
which unleashes the potential of polynomial filter learning. | Wendi Yu, Zhichao Hou, Xiaorui Liu | 2023-07-16T06:14:12Z | http://arxiv.org/abs/2307.07956v1 | # Automated Polynomial Filter Learning
###### Abstract.
Polynomial graph filters have been widely used as guiding principles in the design of Graph Neural Networks (GNNs). Recently, the adaptive learning of the polynomial graph filters has demonstrated promising performance for modeling graph signals on both homophilic and heterophilic graphs, owning to their flexibility and expressiveness. In this work, we conduct a novel preliminary study to explore the potential and limitations of polynomial graph filter learning approaches, revealing a severe overfitting issue. To improve the effectiveness of polynomial graph filters, we propose Auto-Polynomial, a novel and general automated polynomial graph filter learning framework that efficiently learns better filters capable of adapting to various complex graph signals. Comprehensive experiments and ablation studies demonstrate significant and consistent performance improvements on both homophilic and heterophilic graphs across multiple learning settings considering various labeling ratios, which unleashes the potential of polynomial filter learning.
Graph neural networks, automated learning, polynomial filter
* We conduct a novel investigation to study whether existing approaches have fully realized the potential of graph polynomial filter learning. Surprisingly, we discover that graph polynomial filter learning suffers from a severe overfitting problem, which provides a potential explanation for its failures.
* To overcome the overfitting issue, we propose Auto-Polynomial, a novel and general automated polynomial filter learning framework to enhance the performance and generalization of polynomial graph spectral filters in GNNs.
* Comprehensive experiments and ablation studies demonstrate significant and consistent performance improvements on both homophilic and heterophilic graphs across multiple learning settings, which efficiently unleashes the potential of polynomial filter learning.
## 2. Preliminary
**Notations.** An undirected graph can be represented by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},...,v_{n}\}\) is the set of nodes and \(\mathcal{E}=\{e_{1},...,e_{m}\}\) is the set of edges. Suppose that the nodes are associated with the node feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\), where \(n\) denotes the number of nodes and \(d\) denotes the number of features per node. The graph structure of \(\mathcal{G}\) can be represented by the adjacency matrix \(\mathbf{A}\), and \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) denotes the adjacency matrix with added self-loops. Then the symmetrically normalized adjacency matrix and graph Laplacian matrix can be defined as \(\hat{\mathbf{A}}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\) and \(\hat{\mathbf{L}}=\mathbf{I}-\hat{\mathbf{A}}=\mathbf{I}-\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\), respectively, where \(\tilde{\mathbf{D}}\) is the diagonal degree matrix of \(\tilde{\mathbf{A}}\). Next, we will briefly introduce the polynomial graph filter as well as the concepts of homophilic and heterophilic graphs.
### Polynomial Graph Filter
Spectral-based GNNs perform graph convolutions in the spectral domain of the graph Laplacian. Many of these methods utilize polynomial graph spectral filters to achieve this operation. The polynomial graph signal filter can be formulated as follows:
\[\mathbf{Y}=\mathbf{U}\,diag[h(\lambda_{1}),\ldots,h(\lambda_{n})]\,\mathbf{U}^{\top}\mathbf{X}=\mathbf{U}h(\mathbf{\Lambda})\mathbf{U}^{\top}\mathbf{X}\approx\sum_{k=0}^{K}\theta_{k}\hat{\mathbf{L}}^{k}\mathbf{X},\]
where \(\mathbf{X}\in\mathbb{R}^{n\times d}\) is the input signal and \(\mathbf{Y}\in\mathbb{R}^{n\times d}\) is the output signal after filtering. \(\hat{\mathbf{L}}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\) denotes the eigendecomposition of the symmetrically normalized Laplacian matrix \(\hat{\mathbf{L}}\), where \(\mathbf{U}\) denotes the matrix of eigenvectors and \(\mathbf{\Lambda}=diag[\lambda_{1},\ldots,\lambda_{n}]\) is the diagonal matrix of eigenvalues. \(h(\lambda)\) is an arbitrary filter function that can be approximated by polynomials \(h_{\Theta}(\lambda)=\sum_{k=0}^{K}\theta_{k}\lambda^{k},\lambda\in[0,2]\), where the polynomial coefficients \(\Theta=\{\theta_{k}\}_{k=0}^{K}\) determine the shape and properties of the \(K\)-th order filter.
The filter approximation using polynomials has multiple unique advantages. First, the filtering operation represented by polynomial filters can be equivalently and efficiently computed in the spatial domain using simple recursive feature aggregations, i.e., \(\sum_{k=0}^{K}\theta_{k}\hat{\mathbf{L}}^{k}\mathbf{X}\), without resorting to an expensive eigendecomposition that is infeasible for large-scale graphs. Second, the polynomial filters enable localized computations: only \(K\)-hop feature aggregation is required, and only \(K+1\) parameters are needed compared with the non-parametric filter \(h(\mathbf{\Lambda})=diag[h(\lambda_{1}),\ldots,h(\lambda_{n})]\). Most importantly, the polynomial filter is flexible and general enough to represent filters with various properties (Kang et al., 2017) such as low-pass filters, high-pass filters, band-pass filters, band-rejection filters, comb filters, etc. Generally, the polynomial graph filter can be written in the general form:
\[\mathbf{U}h(\mathbf{\Lambda})\mathbf{U}^{\top}\approx\mathbf{U}h_{\Theta}(\mathbf{\Lambda})\mathbf{U}^{\top}=\sum_{k=0}^{K}\theta_{k}P_{k}(\mathbf{M}), \tag{1}\]
where \(\{\theta_{k}\}_{k=0}^{K}\) are the polynomial coefficients, \(\{P_{k}(x)\}_{k=0}^{K}\) are the polynomial basis, and \(\mathbf{M}\) is the basic matrix (e.g., normalized graph Laplacian matrix \(\hat{\mathbf{L}}\) or normalized adjacency matrix \(\hat{\mathbf{A}}\)). Among all the works based on polynomial filters, we will introduce the foundational ChebNet (Kang et al., 2017) and other improved models such as GPRGNN (Chen et al., 2017) and BernNet (Kang et al., 2017).
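As a concrete illustration of the spatial-domain computation, the sketch below applies a polynomial filter with the monomial basis (\(P_{k}(x)=x^{k}\)) by repeated propagation; other bases only change the recursion (see the Chebyshev sketch below). Function and variable names are illustrative.

```python
import torch

def monomial_filter(theta, M, X):
    """Compute sum_k theta_k * M^k @ X by K rounds of propagation,
    avoiding any eigendecomposition of the Laplacian."""
    out = theta[0] * X          # k = 0 term: identity propagation
    Z = X
    for k in range(1, len(theta)):
        Z = M @ Z               # M^k X, built iteratively from M^{k-1} X
        out = out + theta[k] * Z
    return out
```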
**ChebNet: Chebyshev polynomials.** ChebNet is a foundational work that uses the Chebyshev polynomials to approximate the graph spectral filter:
\[\sum_{k=0}^{K}\theta_{k}T_{k}(\frac{2\hat{\mathbf{L}}}{\lambda_{max}}- \mathbf{I}), \tag{2}\]
where \(\frac{2\hat{\mathbf{L}}}{\lambda_{max}}-\mathbf{I}\) denotes the rescaled Laplacian matrix, and \(\lambda_{max}\) is the largest eigenvalue of \(\hat{\mathbf{L}}\). The Chebyshev polynomial \(T_{k}(x)\) of order \(k\) can be computed by the stable recurrence relation \(T_{k}(x)=2xT_{k-1}(x)-T_{k-2}(x)\), with \(T_{0}(x)=1\) and \(T_{1}(x)=x\). The polynomial filter Eq. (2) can be regarded as a special case of the general form in Eq. (1) when we set \(P_{k}(x)=T_{k}(x)\) and \(\mathbf{M}=\frac{2\hat{\mathbf{L}}}{\lambda_{max}}-\mathbf{I}\).
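The recurrence translates directly into code: each basis term \(T_{k}(\mathbf{M})\mathbf{X}\) is obtained from the previous two, so only matrix products are needed. This is a minimal sketch with illustrative names.

```python
import torch

def chebyshev_filter(theta, M, X):
    """ChebNet-style filtering sum_k theta_k T_k(M) X, where M is the
    rescaled Laplacian and T_k follows T_k = 2 M T_{k-1} - T_{k-2}."""
    T_prev = X                   # T_0(M) X = X
    out = theta[0] * T_prev
    if len(theta) > 1:
        T_curr = M @ X           # T_1(M) X = M X
        out = out + theta[1] * T_curr
        for k in range(2, len(theta)):
            T_prev, T_curr = T_curr, 2 * (M @ T_curr) - T_prev
            out = out + theta[k] * T_curr
    return out
```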
**GPRGNN: Monomial polynomials.** GPRGNN approximates spectral graph convolutions by Generalized PageRank (Chen et al., 2017). The spectral polynomial filter of GPRGNN can be represented as follows:
\[\sum_{k=0}^{K}\theta_{k}\hat{\mathbf{A}}^{k}, \tag{3}\]
which is equivalent to the general form in Eq. (1) when we set \(P_{k}(x)=x^{k}\) and \(\mathbf{M}=\hat{\mathbf{A}}\).
**BernNet: Bernstein polynomials.** The principle of BernNet is to approximate any filter on the normalized Laplacian spectrum of a graph using \(K\)-order Bernstein polynomials. By learning the coefficients of the Bernstein basis, it can adapt to different signals and obtain various spectral filters. The spectral filter of BernNet can be formulated as:
\[\sum_{k=0}^{K}\theta_{k}\frac{1}{2^{K}}\left(\begin{array}{c}K\\ k\end{array}\right)(2\mathbf{I}-\hat{\mathbf{L}})^{K-k}\hat{\mathbf{L}}^{k}, \tag{4}\]
which is equivalent to the general form in Eq. (1) when we set \(P_{k}(x)=\frac{1}{2^{K}}\left(\begin{array}{c}K\\ k\end{array}\right)(2-x)^{K-k}x^{k}\) and \(\mathbf{M}=\hat{\mathbf{L}}\).
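For intuition, the Bernstein filter response \(h_{\Theta}(\lambda)\) of Eq. (4) can be evaluated on the spectrum \(\lambda\in[0,2]\) to visualize the learned filter shape; the coefficients below are illustrative.

```python
import numpy as np
from scipy.special import comb

def bernstein_response(theta, lam):
    """Evaluate h(lambda) = sum_k theta_k * C(K, k) / 2^K
    * (2 - lambda)^(K - k) * lambda^k for lambda in [0, 2]."""
    K = len(theta) - 1
    lam = np.asarray(lam, dtype=float)
    h = np.zeros_like(lam)
    for k, th in enumerate(theta):
        h += th * comb(K, k) / 2 ** K * (2 - lam) ** (K - k) * lam ** k
    return h

lam = np.linspace(0.0, 2.0, 200)
low_pass_like = bernstein_response([1.0, 0.5, 0.0], lam)  # illustrative coefficients
```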
In the aforementioned GNNs, the coefficients, i.e., \(\Theta\), determining the polynomial filters are treated as learnable parameters together with other model parameters that are learned by gradient descent algorithms based on the training loss. They have shown encouraging performance on both homophilic and heterophilic graphs since the filter can be adapted to model various types of graph signals whose proper assumptions are generally complex and unknown.
### Homophily and Heterophily
Graphs can be classified into homophilic graphs and heterophilic graphs based on the concept of node homophily (Zhou et al., 2017; Li et al., 2018; Li et al., 2019). In homophilic graphs, nodes with similar features or belonging to the same class tend to connect with each other. For example, in citation networks, papers in the same research field tend to cite each other (Beng et al., 2017). In other words, homophilic graphs usually satisfy the smoothness assumption that the graph signal is smooth over the edges. The majority of GNNs are built under the homophily assumption, and their manually designed low-pass filters remove the high-frequency graph signal via simple neighborhood aggregation mechanisms that propagate the graph information. For instance, GCN (Girshick et al., 2015) uses the first-order Chebyshev polynomial as graph convolution, which has been proven to be a fixed low-pass filter (Zhou et al., 2017). APPNP (Zhou et al., 2017) uses Personalized PageRank (PPR) to set fixed filter coefficients, which has been shown to suppress the high-frequency graph signals (Beng et al., 2017). Generally, these manually designed graph filters exhibit superior performance in various prediction tasks on homophilic graphs.
On the other hand, in heterophilic graphs, nodes with different features or belonging to different classes tend to connect with each other. For example, in dating networks, most users tend to establish connections with individuals of the opposite gender, exhibiting heterophily (Zhou et al., 2017). In molecular networks, protein structures are more likely to be composed of connections between different types of amino acids (Li et al., 2019). In order to quantify the degree of homophily of the graphs, Pei et al (Pei et al., 2019) have proposed a metric to measure the level of node homophily in graphs:
\[H(\mathcal{G})=\frac{1}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}\frac{|\{u\in \mathcal{N}(v):y_{v}=y_{u}\}|}{|\mathcal{N}(v)|} \tag{5}\]
where \(\mathcal{N}(v)\) denotes the neighbor set of node \(v\), and \(y_{v}\) denotes the label of node \(v\). This metric \(H(\mathcal{G})\) measures the average probability that an edge connects two nodes of the same label over all nodes in \(\mathcal{V}\). When \(H(\mathcal{G})\to 1\), the graph is highly homophilic since edges almost always connect nodes with the same label, while \(H(\mathcal{G})\to 0\) indicates a highly heterophilic graph in which edges almost always connect nodes with different labels. In fact, proper prior knowledge of heterophilic graphs is generally unavailable. Therefore, classic GNNs cannot perform well on heterophilic graphs since their expressive power is limited by fixed filters.
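The metric of equation (5) is simple to compute from an edge list; the sketch below assumes undirected edges stored in both directions, a common convention, and skips isolated nodes.

```python
import numpy as np

def node_homophily(edge_index, labels):
    """H(G): average over nodes of the fraction of neighbors
    sharing the node's label, as in equation (5)."""
    src, dst = edge_index
    n = len(labels)
    same = np.zeros(n)
    deg = np.zeros(n)
    for u, v in zip(src, dst):
        deg[u] += 1
        same[u] += float(labels[u] == labels[v])
    valid = deg > 0                      # isolated nodes are skipped
    return float(np.mean(same[valid] / deg[valid]))
```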
In recent years, many works have been actively exploring GNNs under heterophily. Some examples of heuristic approaches include MixHop (Beng et al., 2017), H2GCN (Li et al., 2019), and GCNII (Beng et al., 2017). For example, MixHop (Beng et al., 2017) is an exemplary method that aggregates messages from higher-order neighbors, thereby enabling the integration of information from different distances. Another notable approach is H2GCN (Li et al., 2019), which aims to address heterophily by separating the ego and neighbor embeddings. GCNII (Beng et al., 2017) adopts various residual connections and more layers to capture distant information in graphs. They all attempt to aggregate information from higher-order neighbors. There also exist multiple approaches inspired by polynomial graph filters (Beng et al., 2017; Li et al., 2019; Li et al., 2019). These polynomial graph filter learning approaches have better interpretability than heuristic methods, and they exhibit encouraging performance on some heterophilic graphs, owing to the flexibility and adaptivity to graph signals of various properties. However, there is no consistent performance advantage for either heuristic or graph polynomial filter learning approaches. Both types of methods sometimes underperform graph-agnostic MLPs.
## 3. Overfitting of Graph Polynomial Filter Learning
In this section, we design novel preliminary experiments to investigate the potential and limitations of polynomial graph filter learning in GNNs.
### Search of optimal polynomials
Theoretically speaking, polynomial-based GNNs have strong flexibility, adaptivity, and expressiveness, as discussed in Section 2. Although they achieve encouraging performance in some cases, they often fail to show consistent improvements and advantages over simpler designs in practice. On one hand, polynomial-based GNNs do not exhibit notably better performance than manually designed GNNs such as APPNP (Zhou et al., 2017) and GCNII (Beng et al., 2017) on homophilic datasets, even though these manually designed GNNs simply emulate low-pass filters and can be covered and approximated as special cases of polynomial-based GNNs. On the other hand, polynomial-based GNNs (Beng et al., 2017; Li et al., 2019; Li et al., 2019) tend to learn inappropriate polynomial weights and often exhibit inferior performance. For example, ChebNet performs noticeably worse than its simplified version, GCN (Girshick et al., 2015), which only utilizes the first two Chebyshev polynomials; the improved variants such as GPRGNN (Beng et al., 2017) and BernNet (Li et al., 2019) cannot consistently outperform heuristically and manually designed alternatives such as H2GCN (Li et al., 2019), and they even underperform graph-agnostic MLPs in some cases.
The failures of polynomial-based GNNs are counterintuitive considering their strong expressiveness, which leads to the following critical question: "**Have we achieved the full expressiveness and adaptivity of polynomial graph filter learning?**" To answer this question, we design a novel experiment to explore the full potential of polynomial filters, i.e., the best performance that the optimal polynomial filter can potentially achieve. Specifically, we choose the monomial polynomial (i.e., \(\sum_{k=0}^{K}\theta_{k}\hat{\mathbf{A}}^{k}\)) as designed in GPRGNN and perform a brute-force search to explore the optimal polynomial filter via a grid search on the polynomial coefficients \(\{\theta_{k}\}_{k=0}^{K}\) based on validation accuracy. We denote GPRGNN with this optimal polynomial filter as GPRGNN-optimal. We compare its performance with the baseline GPRGNN, whose polynomial coefficients are learnable parameters guided by the training loss.
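A minimal sketch of this brute-force search is given below; `train_gprgnn` and `evaluate` are hypothetical helpers (training the model weights with frozen coefficients, and measuring accuracy on a split), and the grid matches the values listed in the next paragraph.

```python
import itertools

GRID = [-0.9, -0.5, -0.2, -0.1, -0.05, 0, 0.05, 0.1, 0.2, 0.5, 0.9]

def search_optimal_filter(train_gprgnn, evaluate, order=2):
    """Grid-search the K+1 coefficients (11^3 candidates for K = 2),
    selecting the filter by validation accuracy."""
    best_acc, best_theta = -1.0, None
    for theta in itertools.product(GRID, repeat=order + 1):
        model = train_gprgnn(theta)          # w trained with theta held fixed
        acc = evaluate(model, split="val")   # model selection on validation data
        if acc > best_acc:
            best_acc, best_theta = acc, theta
    return best_theta, best_acc
```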
In detail, we evaluate the performance on semi-supervised node classification tasks on classic homophilic and heterophilic datasets. In practice, it is computationally expensive to conduct an exhaustive brute-force search on the polynomial coefficients if the polynomial order \(K\) is large or the search granularity is fine, since the search space increases exponentially. To reduce the search space for this preliminary study, we set the polynomial order \(K=2\) so that we only need to grid-search 3 parameters over the range \(\{-0.9,-0.5,-0.2,-0.1,-0.05,0,0.05,0.1,0.2,0.5,0.9\}\). While this coarse search cannot cover the whole search space, the results provide performance lower bounds for the optimal polynomial filter. For all models, we adopt the sparse splitting (10% training, 10% validation, and 80% testing) in both homophilic and heterophilic
datasets. We tune the hyperparameters such as learning rate and weight decay for GPRGNN following the original paper [5]. We run the experiments for 10 random data splits and report the mean and standard deviation. The comparison in Figure 1 shows the following observations:
* On homophilic datasets, i.e., Figure 1(a), GPRGNN underperforms GPRGNN-optimal on Cora and CiteSeer, and they achieve comparable performance on PubMed and Computers. Both of them outperform MLP significantly on all datasets. This demonstrates that GPRGNN can learn effective low-pass filters to model the graph signal for homophilic graphs, but it does not fully reach its maximum potential and is notably inferior to the optimal polynomial filter (GPRGNN-optimal) in some cases.
* On heterophilic datasets, i.e., Figure 1(b), GPRGNN exhibits significantly worse performance than GPRGNN-optimal, indicating that GPRGNN learns suboptimal filters. Moreover, GPRGNN does not show notably better performance than MLP, and it even underperforms MLP significantly on Cornell and Texas datasets while GPRGNN-optimal is consistently superior to MLP. This demonstrates that GPRGNN struggles to model the complex graph signal in heterophilic graphs and is far from achieving its theoretical learning potential as indicated by GPRGNN-optimal.
* GPRGNN exhibits unstable performance and high randomness. The learning process of GPRGNN is heavily influenced by the
Figure 1. The comparison between MLP, GPRGNN, and GPRGNN-optimal on homophilic and heterophilic datasets.
Figure 2. The training/validation/test accuracy of GPRGNN and GPRGNN-optimal in the training process.
randomness of weight initialization, data splitting, and the optimization process, leading to a large variance in the final results.
These observations reveal that GNNs based on polynomial filters indeed have great potential, but the current learning approach is suboptimal, preventing them from showcasing their full capacity. However, the reasons for such failures are unclear.
### Overfitting issue
In this section, we provide further insight into the failures of polynomial filter learning in GNNs and reveal one of the key obstacles to their success. Specifically, we compare the training processes of GPRGNN and GPRGNN-optimal by measuring their training, validation, and testing accuracy during training. We adopt the same settings as in Section 3.1 and run only one split. The training process, as shown in Figure 2, yields the following observations:
* GPRGNN always achieves nearly perfect training accuracy, close to 100%, on all datasets, but its validation and test performance are significantly lower. This reveals that the polynomial filter is flexible enough to fit the training data perfectly, but the large generalization gap prevents satisfying performance on validation and test data.
* GPRGNN-optimal achieves lower training accuracy than GPRGNN, but its validation and test performance are notably better than GPRGNN. This indicates that GPRGNN suffers from a severe overfitting issue while the optimal polynomial filter can mitigate the overfitting and narrow down this generalization gap.
To summarize, polynomial filter learning in GNNs suffers from a severe overfitting problem that leads to poor generalization. This provides a plausible explanation for the failures of polynomial-based GNNs despite their powerful expressiveness.
## 4. Auto-polynomial
In this section, we introduce a novel and general automated polynomial filter learning approach to address the aforementioned overfitting issues and improve the effectiveness of graph polynomial filters.
### Automated polynomial filter learning
The polynomial coefficients in polynomial filters serve as key parameters that have a strong influence on the predictions of GNNs. Existing polynomial-based GNNs such as ChebNet, GPRGNN, and BernNet treat the polynomial coefficients the same as other model parameters; both are jointly learned through gradient descent algorithms such as Adam (Kingma and Ba, 2014) according to the training loss. Our preliminary study in Section 3 reveals the suboptimality of this widely used polynomial filter learning approach. This suboptimality is partially due to the severe overfitting problem, since the filters are adjusted to perfectly fit the labels of training nodes, which constitute a low ratio of all nodes in the typical semi-supervised learning settings for graph deep learning (Kingma and Ba, 2014; Wang et al., 2015). On the other hand, our brute-force search on the polynomial filter based on validation accuracy achieves consistently better performance and mitigates the overfitting issue. However, when the polynomial order \(K\) is large, it is not feasible to brute-force search the optimal polynomials by trial and error. This motivates us to develop a better automated learning approach for polynomial graph filters that mitigates overfitting.
**Auto-Polynomial.** We propose Auto-Polynomial, a novel automated polynomial filter learning approach that learns the polynomial coefficients guided by the validation loss inspired by hyperparameter optimization (HPO) and automated machine learning (AutoML). Instead of treating polynomial coefficients the same as other model parameters, we consider these polynomial coefficients as hyperparameters. Let the polynomial coefficients \(\Theta=\{\theta_{k}\}_{k=0}^{K}\) represent GNNs with different filtering properties and \(w\) denotes other learnable weights in GNNs. The main idea behind Auto-Polynomial is to jointly optimize the coefficients \(\Theta\) and the weights \(w\) but with totally different targets. Formally, we model the automated polynomial filter learning as a bi-level optimization problem:
\[\begin{split}\min_{\Theta}\;&\mathcal{L}_{val}(w^{*}(\Theta),\Theta)\\ \text{s.t.}\;& w^{*}(\Theta)=\operatorname*{argmin}_{w}\mathcal{L}_{train}(w,\Theta)\end{split} \tag{6}\]
where the lower-level optimization problem finds the best model parameters \(w^{*}(\Theta)\) based on training loss while the upper-level problem finds the optimal polynomial filter represented by \(\Theta\) based on validation loss. Guided by this bi-level optimization problem, the learned GNN model can not only fit the training data accurately but also enhance its performance on validation data, which helps improve generalization ability and alleviate the overfitting problem.
**Approximate Computation.** The bi-level problem can be solved by alternating the optimization between the lower-level and upper-level problems. However, the optimal solution \(w^{*}(\Theta)\) of the lower-level problem has a complicated dependency on the polynomial coefficients \(\Theta\), so the meta-gradient of the upper-level problem, i.e., \(\nabla_{\Theta}\mathcal{L}_{val}(w^{*}(\Theta),\Theta)\), is hard to compute. This complexity becomes even worse when the lower-level problem adopts multiple-step iterations. Therefore, we adopt a simplified and elegant method inspired by DARTS (DARTS, 2015) to approximate the gradient computation in problem (6):
\[\nabla_{\Theta}\mathcal{L}_{val}(w^{*}(\Theta),\Theta)\approx\nabla_{\Theta}\mathcal{L}_{val}(w-\xi\,\nabla_{w}\,\mathcal{L}_{train}(w,\Theta),\Theta) \tag{7}\]
where \(\xi\) denotes the learning rate for the lower-level optimization, and only one gradient step is used to approximate the optimal model \(w^{*}(\Theta)\). This approximation provides a reasonable solution since \(w\) accumulates the updates from all previous training iterations and might therefore not be too far away from the optimum.
In particular, if we set \(\xi=0\), Equation (7) is equivalent to updating \(\Theta\) based on the current weights \(w\), which is a much simpler first-order approximation of the problem. Although the first-order approximation may not provide the best result, it improves computational efficiency. If we use \(\xi>0\), we can apply the chain rule to approximate Equation (7):
\[\nabla_{\Theta}\mathcal{L}_{val}\left(w^{\prime},\Theta\right)-\xi\nabla_{\Theta,w}^{2}\mathcal{L}_{train}(w,\Theta)\nabla_{w^{\prime}}\mathcal{L}_{val}\left(w^{\prime},\Theta\right) \tag{8}\]
where \(w^{\prime}=w-\xi\nabla_{w}\mathcal{L}_{train}\left(w,\Theta\right)\) denotes the weights after one gradient descent step. In practice, we apply the finite difference method (DARTS, 2015) to approximate the second term in Equation (8):
\[\begin{split}&\nabla_{\Theta,w}^{2}\mathcal{L}_{train}\left(w,\Theta\right)\nabla_{w^{\prime}}\mathcal{L}_{val}\left(w^{\prime},\Theta\right)\\ &\approx\frac{\nabla_{\Theta}\mathcal{L}_{train}\left(w^{+},\Theta\right)-\nabla_{\Theta}\mathcal{L}_{train}\left(w^{-},\Theta\right)}{2\varepsilon}\end{split} \tag{9}\]
where \(\varepsilon\) denotes a small scalar in the finite difference approximation, \(w^{+}=w+\varepsilon\nabla_{w^{\prime}}\mathcal{L}_{val}\left(w^{\prime},\Theta\right)\), and \(w^{-}=w-\varepsilon\nabla_{w^{\prime}}\mathcal{L}_{val}\left(w^{\prime},\Theta\right)\). In practice, we also introduce a hyper-parameter \(freq\) that specifies the update frequency of the polynomial coefficients to further improve efficiency by skipping this computation from time to time.
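To make this procedure concrete, the following is a minimal PyTorch-style sketch of the approximate meta-gradient in Equations (7)-(9). It is a sketch under our own assumptions: the closures `loss_train`/`loss_val`, the lists `theta`/`w`, and the step-size heuristic for \(\varepsilon\) are illustrative and not taken from the authors' released code.

```python
import torch

def hypergradient(theta, w, loss_train, loss_val, xi=0.05, r=0.01):
    """Approximate grad_Theta L_val(w*(Theta), Theta), Eqs. (7)-(9).

    theta, w:  lists of tensors (hyperparameters and model weights)
    loss_train(w, theta), loss_val(w, theta): closures returning scalar losses
    xi:        lower-level learning rate; xi = 0 gives the first-order variant
    r:         scale of the finite-difference step epsilon
    """
    # One virtual SGD step on the training loss: w' = w - xi * grad_w L_train
    g_w = torch.autograd.grad(loss_train(w, theta), w)
    w_prime = [wi - xi * gi for wi, gi in zip(w, g_w)]

    # grad_Theta L_val(w', Theta) with w' treated as fixed
    val = loss_val(w_prime, theta)
    g_theta = torch.autograd.grad(val, theta, retain_graph=True)
    if xi == 0:
        return g_theta                       # first-order approximation
    g_wp = torch.autograd.grad(val, w_prime)  # grad_{w'} L_val(w', Theta)

    # Finite-difference estimate of the second-order term, Eq. (9)
    eps = r / torch.cat([g.flatten() for g in g_wp]).norm()
    w_plus = [wi + eps * gi for wi, gi in zip(w, g_wp)]
    w_minus = [wi - eps * gi for wi, gi in zip(w, g_wp)]
    g_plus = torch.autograd.grad(loss_train(w_plus, theta), theta)
    g_minus = torch.autograd.grad(loss_train(w_minus, theta), theta)

    # Eq. (8): grad_Theta L_val(w', Theta) minus xi times the second-order term
    return [gt - xi * (gp - gm) / (2 * eps)
            for gt, gp, gm in zip(g_theta, g_plus, g_minus)]
```

The returned list can be written into each coefficient tensor's `.grad` field and applied with any optimizer, which is one way to realize the \(\Theta\)-update step of Algorithm 1.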
### Use cases
The proposed Auto-Polynomial is a general polynomial filter learning framework and it can be applied to various polynomial filter-based GNNs. In this section, we will introduce some representative examples to showcase its application.
**Case 1: Auto-GPRGNN.** In GPRGNN (Girani et al., 2017), the weights of Generalized PageRank (GPR), i.e., the polynomial coefficients \(\Theta\) in Equation (3), are trained together with the model parameters \(w\). The GPR weights \(\Theta\) can be adaptively learned during the training process to control the contribution of the node features and their propagated features in different aggregation layers. GPR provides a flexible way to model various graph signals. However, our preliminary study in Section 3 has shown that GPRGNN cannot learn effective filters and faces a serious overfitting problem in the learning of polynomial coefficients. Moreover, its performance is very sensitive to the filter initialization, data splits, and random seeds. The proposed Auto-Polynomial framework can help mitigate these problems. Instead of learning the two types of parameters simultaneously, we treat the training of the model parameters \(w\) and the GPR weights \(\Theta\) as the lower-level and upper-level optimization problems in Equation (6), respectively. In this work, we denote GPRGNN using Auto-Polynomial as Auto-GPRGNN.
**Case 2: Auto-BernNet.** In BernNet (Han et al., 2017), the coefficients of the Bernstein basis \(\Theta\) can be learned end-to-end to fit any graph filter in the spectral domain. The training process of BernNet differs slightly from GPRGNN since BernNet specifies an independent learning rate for the polynomial coefficients. To some extent, the learning of the polynomial coefficients \(\Theta\) is thus already distinguished from the learning of the model parameters \(w\), which improves performance. However, this learning method still does not bring out the full potential of BernNet and faces overfitting, especially under semi-supervised learning on heterophilic data. Applying the proposed Auto-Polynomial method to BernNet eases this problem: we regard the Bernstein basis coefficients \(\Theta\) as hyperparameters and optimize them in the Auto-Polynomial bi-level optimization problem (a minimal sketch of this parameter split follows below). Analogously, we refer to BernNet with Auto-Polynomial as Auto-BernNet.
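To illustrate the parameter split shared by both use cases, here is a minimal, hypothetical PyTorch sketch; `PolyFilterGNN` and the attribute name `poly_coeff` are our own stand-ins, not the actual GPRGNN or BernNet modules:

```python
import torch
import torch.nn as nn

class PolyFilterGNN(nn.Module):
    """Toy stand-in for a polynomial-filter GNN such as GPRGNN or BernNet."""
    def __init__(self, dim=16, K=10):
        super().__init__()
        self.mlp = nn.Linear(dim, dim)                      # model weights w
        self.poly_coeff = nn.Parameter(torch.randn(K + 1))  # coefficients Theta

model = PolyFilterGNN()
theta = [p for n, p in model.named_parameters() if "poly_coeff" in n]
w = [p for n, p in model.named_parameters() if "poly_coeff" not in n]

# Two optimizers with distinct objectives: Theta follows the validation loss
# (upper level), while w follows the training loss (lower level).
opt_theta = torch.optim.Adam(theta, lr=0.05)
opt_w = torch.optim.Adam(w, lr=0.01)
```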
### Implementation
In this section, we demonstrate the implementation details of Auto-Polynomial and present the learning procedure in Algorithm 1.
```
Input: \(\Theta\): coefficients of polynomial filter
       \(\xi\): learning rate for the lower-level approximation
       \(\eta_{0}\): learning rate of polynomial filter
       \(\eta_{1}\): learning rate of model parameters
       \(w\): model parameters
       \(freq\): frequency to update \(\Theta\)
Output: Graph polynomial filter with coefficients \(\Theta\)
for \(i \gets 1\) to \(Iteration\) do
    (Update \(\Theta\)):
    if \(i \bmod freq = 0\) then
        \(w^{\prime}_{i} \gets w_{i}-\xi\nabla_{w_{i}}\mathcal{L}_{train}(w_{i},\Theta_{i})\)
        \(w^{+}_{i} \gets w_{i}+\varepsilon\nabla_{w^{\prime}_{i}}\mathcal{L}_{val}(w^{\prime}_{i},\Theta_{i})\)
        \(w^{-}_{i} \gets w_{i}-\varepsilon\nabla_{w^{\prime}_{i}}\mathcal{L}_{val}(w^{\prime}_{i},\Theta_{i})\)
        \(\nabla^{2}_{\Theta_{i},w_{i}}\mathcal{L}_{train}(w_{i},\Theta_{i})\nabla_{w^{\prime}_{i}}\mathcal{L}_{val}(w^{\prime}_{i},\Theta_{i}) \gets\) Equ. (9)
        \(\nabla_{\Theta_{i}}\mathcal{L}_{val}(w^{*}_{i}(\Theta_{i}),\Theta_{i}) \gets\) Equ. (8)
        \(\Theta_{i+1} \gets \Theta_{i}-\eta_{0}\nabla_{\Theta_{i}}\mathcal{L}_{val}(w^{*}_{i},\Theta_{i})\)
    else
        \(\Theta_{i+1} \gets \Theta_{i}\)
    end if
    (Update \(w\)):
    \(w_{i+1} \gets w_{i}-\eta_{1}\nabla_{w_{i}}\mathcal{L}_{train}(w_{i},\Theta_{i+1})\)
end for
```
**Algorithm 1**Auto-Polynomial Learning Algorithm
**Bi-level update.** The whole procedure of Auto-Polynomial includes two main steps: updating the polynomial coefficients \(\Theta\) and updating the model parameters \(w\). To update \(\Theta\), we perform gradient descent with \(\nabla_{\Theta}\mathcal{L}_{val}\) to enhance the generalization ability of polynomial filter learning. To reduce memory consumption and increase efficiency, we first employ the finite difference method to compute the second-order term in Equation (9) and then obtain an approximation of \(\nabla_{\Theta}\mathcal{L}_{val}\) as in Equation (8). In practice, we introduce a hyper-parameter \(freq\) to specify the update frequency of the polynomial coefficients, so as to further improve the algorithm efficiency if necessary. In other words, \(\Theta\) is updated only once in every \(freq\) iterations of the model update. To update \(w\), we utilize \(\nabla_{w}\mathcal{L}_{train}\) to update the model parameters \(w\) by fitting the labels of training nodes. Through this alternating update process, we ultimately obtain appropriate polynomial coefficients and model parameters that improve the GNN's graph modeling and generalization abilities.
**Complexity analysis.** The bi-level optimization problem in Auto-Polynomial can be solved efficiently. Specifically, assume the computation complexity of the original model is \(O(C)\), where \(C\) is a known constant. The model complexity after applying Auto-Polynomial is then \(O(\frac{freq+4}{freq}C)\), as we need 4 extra forward computations when updating \(\Theta\). Therefore, the computational complexity of Auto-Polynomial is at most 5 times that of the backbone GNN model, and it can be effectively reduced by choosing a larger \(freq\). In terms of memory, Auto-Polynomial does not increase the memory cost significantly no matter how large \(freq\) is. The computation time and memory cost are evaluated in Section 5.
## 5. Experiment
In this section, we provide comprehensive experiments to validate the effectiveness of Auto-Polynomial under semi-supervised learning and supervised learning settings on both heterophilic and homophilic datasets. Further ablation studies are presented to investigate the influence of training set ratio and update frequency on the model performance.
### Baselines and Datasets
**Baselines.** We compare the proposed Auto-Polynomial, including use cases Auto-GPRGNN and Auto-BernNet, with 7 baseline
models: MLP, GCN (Zhu et al., 2017), ChebNet (Chen et al., 2017), APPNP (Zhu et al., 2017), GPRGNN (Chen et al., 2017), BernNet (Chen et al., 2017) and H2GCN (Chen et al., 2017). For GPRGNN1 and BernNet2, we use their officially released code. For H2GCN, we use the PyTorch version of the original code. For the other models, we use the implementations in the PyTorch Geometric library (Han et al., 2017).
Footnote 1: [https://github.com/jianhao2016/GPRGNN](https://github.com/jianhao2016/GPRGNN)
**Datasets.** We conduct experiments on the most commonly used real-world benchmark datasets for node classification. We use 4 homophilic benchmark datasets, including three citation graphs, Cora, Citeseer, and PubMed (Zhu et al., 2017; Zhu et al., 2017), and the Amazon co-purchase graph Computers (Zhu et al., 2017). We also use 4 heterophilic benchmark datasets, including the Wikipedia graphs Chameleon and Squirrel (Chameleon and Squirrel, 2017), and the webpage graphs Texas and Cornell from WebKB3. We summarize the dataset statistics in Table 1.
Footnote 2: [https://github.com/jianhao2016/GPRGNN](https://github.com/jianhao2016/GPRGNN)
**Hyperparameter settings.** For APPNP, we set its propagation step size \(K=10\) and optimize the teleport probability \(\alpha\) over \(\{0.1,0.2,0.5,0.9\}\). For ChebNet, we set the propagation step \(K=2\). For GPRGNN, we set its polynomial order \(K=10\). For BernNet, we set the polynomial order \(K=10\) and tune the independent learning rate for the propagation layer over \(\{0.002,0.005,0.01,0.05\}\). For H2GCN, we follow the settings in (Zhu et al., 2017) and search the embedding round \(K\) over \(\{1,2\}\). For all models except H2GCN, we use 2-layer neural networks with 64 hidden units and set the dropout rate to 0.5. For Auto-GPRGNN and Auto-BernNet, we use a 2-layer MLP with 64 hidden units and set the polynomial order \(K=10\). We search the meta learning rate \(\eta_{0}\) of Auto-Polynomial over \(\{0.01,0.05\}\) and its weight decay over \(\{0,0.0005\}\). We search the learning rate \(\xi\) over \(\{0,0.05\}\), and the polynomial weights are randomly initialized. For all models, we adopt an early stopping strategy with a patience of 200 epochs and a maximum of 1000 epochs. We use the Adam optimizer to train the models. We optimize the learning rate over \(\{0.002,0.01,0.05\}\) and the weight decay over \(\{0,0.0005\}\). We select the best performance according to the validation accuracy.
### Semi-supervised node classification
**Experimental settings.** In the semi-supervised learning setting, we randomly split the datasets into training/validation/test sets with a ratio of 10%/10%/80%. We run each experiment 10 times with random splits and report the mean and variance of the accuracy. Note that the data splits for semi-supervised learning in the literature are quite inconsistent, since existing works usually use different ratios for the homophilic and heterophilic datasets (Chen et al., 2017), or a higher labeling fraction on both the homophilic and heterophilic datasets (Chen et al., 2017; Zhu et al., 2017; Zhu et al., 2017). To be fair, we use a consistent ratio for both the homophilic and the heterophilic datasets, which truly reflects the adaptability of the polynomial filters to different datasets.
**Performance analysis.** The performance summarized in Table 2 shows the following major observations:
* For homophilic datasets, all GNN models outperform MLP significantly, indicating that the structure information of homophilic graphs can be learned and captured easily. Moreover, Auto-GPRGNN and Auto-BernNet can achieve the best performance in most cases, demonstrating Auto-Polynomial's superior capability in polynomial graph filter learning.
* For heterophilic datasets, some baselines such as GCN, ChebNet, and GPRGNN fail to learn effective polynomial filters and even underperform the graph-agnostic MLP. However, Auto-GPRGNN and Auto-BernNet outperform all baselines by a significant margin. For instance, Auto-GPRGNN improves over GPRGNN by 14%, 16%, 5%, and 31% on the Chameleon, Cornell, Squirrel, and Texas datasets, respectively. Auto-BernNet improves over BernNet by 2%, 4%, 8%, and 4% on the Chameleon, Cornell, Squirrel, and Texas datasets, respectively. Moreover, Auto-Polynomial effectively reduces the standard deviations in most cases. In particular, Auto-GPRGNN achieves a standard deviation 10% lower than GPRGNN on the Texas dataset. These comparisons clearly indicate that Auto-Polynomial is capable of learning polynomial filters more effectively and mitigating the overfitting issue.
### Supervised node classification
**Experimental settings.** In the supervised learning setting, we randomly split the datasets into training/validation/test sets with a ratio of 48%/32%/20%, following the high labeling ratio in the work (Chen et al., 2017). We run each experiment 10 times with random splits and report the mean and variance of the test accuracy.
**Performance analysis.** The performance summarized in Table 3 shows the following major observations:
* For homophilic datasets, most of the models perform well with small standard deviations, which shows that existing GNNs are capable of learning proper filters for homophilic graphs. Moreover, Auto-GPRGNN and Auto-BernNet improve over their backbone models, GPRGNN and BernNet, indicating that Auto-Polynomial helps them learn better polynomial filters.
* For heterophilic datasets, most of the baselines perform poorly with large standard deviations, which shows that they cannot capture heterophilic graph information well. Models using polynomial filters such as ChebNet, GPRGNN, and BernNet achieve better performance, while Auto-GPRGNN and Auto-BernNet achieve the best or second-best performance in most cases, and their standard deviations are smaller in most cases. In particular, Auto-GPRGNN achieves a standard deviation 12% lower than GPRGNN on the Texas dataset. These results indicate the effectiveness of Auto-Polynomial in graph polynomial filter learning.
### Ablation study
In this section, we provide ablation studies on the labeling ratio and polynomial update frequency.
**Labeling ratio.** From the results in semi-supervised and supervised settings in Table 2 and Table 3, it appears that the improvement from Auto-Polynomial is more significant on semi-supervised tasks (low labeling ratio), as compared to supervised tasks (high labeling ratio). This phenomenon motivates the study of how the labeling ratio impacts the effectiveness of our proposed framework. To this end, we fix the validation set ratio at 10%, vary the training set
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & MLP & ChebNet & GCN & APPNP & H2GCN & GPRGNN & BernNet & Auto-GPRGNN & Auto-BernNet \\ \hline Cora & 64.79\(\pm\)0.70 & 81.37\(\pm\)0.93 & 83.11\(\pm\)1.05 & **84.71\(\pm\)0.53** & 82.84\(\pm\)0.69 & 84.37\(\pm\)0.89 & 84.27\(\pm\)0.81 & 84.71\(\pm\)1.19 & 84.66\(\pm\)0.60 \\ Citeseer & 64.90\(\pm\)0.94 & 71.56\(\pm\)0.86 & 71.67\(\pm\)1.31 & 72.70\(\pm\)0.80 & 72.61\(\pm\)0.67 & 72.17\(\pm\)0.71 & 72.08\(\pm\)0.63 & 72.76\(\pm\)1.18 & **72.85\(\pm\)0.94** \\ PubMed & 84.64\(\pm\)0.41 & 87.17\(\pm\)0.31 & 86.40\(\pm\)0.22 & 86.63\(\pm\)0.36 & 86.71\(\pm\)0.25 & 86.42\(\pm\)0.34 & **86.94\(\pm\)0.18** & 86.69\(\pm\)0.31 & 86.81\(\pm\)0.41 \\ Computers & 76.80\(\pm\)0.88 & 86.17\(\pm\)0.58 & 86.79\(\pm\)0.63 & 85.76\(\pm\)0.44 & 84.99\(\pm\)0.54 & 85.79\(\pm\)1.00 & 86.84\(\pm\)0.57 & 86.74\(\pm\)0.61 & **87.61\(\pm\)0.48** \\ Chameleon & 38.61\(\pm\)0.14 & 49.14\(\pm\)1.66 & 51.65\(\pm\)1.35 & 42.13\(\pm\)2.06 & 50.69\(\pm\)1.60 & 41.25\(\pm\)4.44 & 54.11\(\pm\)1.16 & 55.34\(\pm\)2.93 & **56.12\(\pm\)1.13** \\ Cornell & 66.28\(\pm\)5.97 & 54.21\(\pm\)5.90 & 48.83\(\pm\)5.42 & 63.38\(\pm\)5.28 & 63.52\(\pm\)6.38 & 42.83\(\pm\)5.92 & 63.79\(\pm\)5.30 & 58.55\(\pm\)7.44 & **67.03\(\pm\)4.49** \\ Squirrel & 26.58\(\pm\)1.37 & 32.73\(\pm\)0.82 & **36.98\(\pm\)1.03** & 29.70\(\pm\)1.31 & 29.69\(\pm\)0.80 & 28.83\(\pm\)0.50 & 26.75\(\pm\)4.49 & 33.16\(\pm\)2.83 & 34.48\(\pm\)1.29 \\ Texas & 71.62\(\pm\)4.23 & 70.74\(\pm\)5.10 & 58.18\(\pm\)5.11 & 68.45\(\pm\)7.35 & 75.00\(\pm\)3.42 & 41.96\(\pm\)12.59 & 71.62\(\pm\)5.26 & 73.85\(\pm\)2.64 & **75.47\(\pm\)3.18** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of real-world datasets
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{Cora} & \multicolumn{3}{c}{Chameleon} \\ \cline{2-7} Method & Test Acc & Running Time (s) & Memory Cost (MB) & Test Acc & Running Time (s) & Memory Cost (MB) \\ \hline GPRGNN & 84.70\(\pm\)0.47 & 1.04 & 49.67 & 40.03\(\pm\)4.31 & 1.13 & 64.43 \\ Auto-GPRGNN (\(freq\)=1) & 84.35\(\pm\)0.47 & 4.18 & 66.94 & 54.96\(\pm\)2.29 & 5.92 & 88.29 \\ Auto-GPRGNN (\(freq\)=2) & 84.51\(\pm\)1.03 & 2.71 & 66.94 & 53.71\(\pm\)1.89 & 3.79 & 88.29 \\ Auto-GPRGNN (\(freq\)=3) & 84.65\(\pm\)0.73 & 2.25 & 66.94 & 52.71\(\pm\)3.46 & 3.38 & 88.29 \\ Auto-GPRGNN (\(freq\)=4) & 84.76\(\pm\)0.96 & 2.00 & 66.94 & 53.23\(\pm\)3.37 & 2.90 & 88.29 \\ Auto-GPRGNN (\(freq\)=5) & 83.94\(\pm\)0.79 & 1.91 & 66.94 & 52.91\(\pm\)2.35 & 2.66 & 88.29 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Efficiency of GPRGNN and Auto-GPRGNN on Cora and Chameleon. \(freq\) denotes the filter update frequency.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Statistics & Cora & Citeseer & Pubmed & Computers & Chameleon & Cornell & Squirrel & Texas \\ \hline Features & 1433 & 3703 & 500 & 767 & 2325 & 1703 & 2089 & 1703 \\ Nodes & 2708 & 3327 & 19717 & 13752 & 2277 & 183 & 5201 & 183 \\ Edges & 5278 & 4552 & 44324 & 245861 & 31371 & 277 & 198353 & 279 \\ Classes & 7 & 6 & 5 & 10 & 5 & 5 & 5 & 5 \\ \(h\) & 0.83 & 0.72 & 0.8 & 0.8 & 0.25 & 0.3 & 0.22 & 0.06 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of real-world datasets
Figure 3. Results under different ratios on 3 heterophilic datasets and 1 homophilic dataset.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & MLP & ChebNet & GCN & APPNP & H2GCN & GPRGNN & BernNet & Auto-GPRGNN & Auto-BernNet \\ \hline Cora & 76.20\(\pm\)1.83 & 86.67\(\pm\)1.99 & 86.59\(\pm\)1.31 & 88.70\(\pm\)0.93 & 87.36\(\pm\)1.11 & 88.79\(\pm\)1.37 & 86.53\(\pm\)1.26 & **89.05\(\pm\)1.09** & 88.70\(\pm\)0.93 \\ Citeseer & 74.76\(\pm\)1.26 & 77.44\(\pm\)1.77 & 77.43\(\pm\)1.42 & 79.43\(\pm\)1.75 & **82.78\(\pm\)1.21** & 78.95\(\pm\)1.93 & 78.58\(\pm\)1.82 & 79.76\(\pm\)0.89 & 79.82\(\pm\)1.55 \\ PubMed & 86.51\(\pm\)0.82 & 88.76\(\pm\)0.58 & 87.04\(\pm\)0.60 & 89.10\(\pm\)0.89 & 87.72\(\pm\)0.46 & 88.09\(\pm\)0.51 & 88.25\(\pm\)0.52 & **89.10\(\pm\)0.54** & 88.38\(\pm\)0.45 \\ Computers & 83.07\(\pm\)0.74 & **89.11\(\pm\)0.61** & 87.71\(\pm\)0.73 & 86.58\(\pm\)0.44 & 89.04\(\pm\)0.61 & 87.04\(\pm\)2.71 & 86.55\(\pm\)0.54 & 88.56\(\pm\)0.54 & 88.71\(\pm\)0.71 \\ Chameleon & 47.62\(\pm\)1.41 & 60.68\(\pm\)2.22 & 61.90\(\pm\)2.19 & 25.65\(\pm\)2.48 & 59.58\(\pm\)2.07 & 62.54\(\pm\)2.77 & 65.71\(\pm\)1.57 & 67.38\(\pm\)1.44 & **68.07\(\pm\)1.48** \\ Cornell & 87.03\(\pm\)3.78 & 77.57\(\pm\)0.65 & 60.27\(\pm\)11.53 & 88.65\(\pm\)5.64 & 87.36\(\pm\)5.82 & 87.03\(\pm\)4.95 & 83.24\(\pm\)4.32 & **91.08\(\pm\)5
ratio over {5%, 10%, 20%, 30%}, and adjust the test set ratio accordingly. We evaluate the performance on three representative heterophilic datasets (Chameleon, Squirrel, and Texas) and one homophilic dataset (Cora). We compare the proposed Auto-GPRGNN and Auto-BernNet with the original GPRGNN and BernNet, following the hyperparameter settings in Section 5.1. The results under different ratios on the four datasets are shown in Figure 3, and we make the following observations:
* Auto-GPRGNN and Auto-BernNet can achieve significant improvements in heterophilic graphs compared to the original GPRGNN and BernNet. This suggests that the Auto-Polynomial framework can help the polynomial filters effectively adapt to heterophilic graphs.
* The improvements of Auto-Polynomial over the backbone models are even more obvious under lower labeling ratios. This shows that when there are not enough labeled samples available, the proposed Auto-Polynomial framework can offer greater advantages in addressing the overfitting issues and enhancing the model's generalization ability.
**Update frequency.** In the practical implementation, we introduce a hyper-parameter \(freq\) to control the update frequency of the polynomial coefficients and thus improve the efficiency of our models. To investigate the influence of the update frequency on the efficiency and performance of Auto-Polynomial, we vary the frequency at which Auto-GPRGNN updates \(\Theta\) and examine the resulting changes in performance, training time, and memory cost. We conduct the experiments on both a homophilic dataset (Cora) and a heterophilic dataset (Chameleon). The results presented in Table 4 provide the following observations:
* On the homophilic dataset (Cora), reducing the update frequency of Auto-GPRGNN has almost no impact on performance, validating the stability of Auto-Polynomial.
* On the heterophilic dataset (Chameleon), Auto-GPRGNN outperforms GPRGNN significantly and is not very sensitive to the update frequency. This indicates that Auto-Polynomial can greatly enhance the model's efficiency by using an appropriate update frequency, with negligible sacrifice in performance.
* The memory usage of Auto-GPRGNN is approximately 1.3 times that of GPRGNN. However, this increase in memory usage can be deemed acceptable, given the significant improvement achieved by Auto-GPRGNN.
To summarize, Auto-Polynomial provides an effective, stable, and efficient framework to improve the performance and generalization of polynomial filter learning.
## 6. Related Work
**Polynomial Graph Filter.** Polynomial graph filters have been widely used as a guiding principle in the design of spectral-based GNNs, starting from the early development of ChebNet (Chen et al., 2020). They have gained increasing attention recently due to their powerful flexibility and expressiveness in modeling graph signals with complex properties. For instance, representative works including ChebNet (Chen et al., 2020), GPRGNN (Chen et al., 2020), and BernNet (Chen et al., 2020) utilize the Chebyshev basis, Monomial basis, and Bernstein basis, respectively, to approximate polynomial filters whose coefficients can be learned adaptively to model the graph signal. These polynomial-based approaches therefore exhibit encouraging results in graph signal modeling on both homophilic and heterophilic graphs. In this work, we design novel experiments to explore the potential and limitations of graph polynomial learning approaches, and we propose Auto-Polynomial, a general automated learning framework to improve the effectiveness of any polynomial-based GNN, making this work a complementary effort to existing works.
**AutoML on GNNs.** Automated machine learning (AutoML) (Chen et al., 2020) has gained great attention during the past few years due to its great potential in automating various procedures in machine learning, such as data augmentation (Chen et al., 2020; Chen et al., 2020), neural architecture searching (NAS) (Wang et al., 2020; Wang et al., 2020), and hyper-parameter optimization (HPO) (Chen et al., 2020; Chen et al., 2020). Recent works have applied AutoML to GNN architecture search using various search strategies such as random search (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), reinforcement learning (Chen et al., 2020; Wang et al., 2020; Wang et al., 2020), evolutionary algorithms (Chen et al., 2020; Chen et al., 2020; Chen et al., 2020), and differentiable search (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). However, these works mainly focus on the general architecture search and none of them provides an in-depth investigation into the polynomial filter learning. Moreover, these methods require a vast search space and complex learning algorithms, which can be time-consuming and resource-intensive. Different from existing works, this work provides a dedicated investigation into polynomial graph filter learning and proposes the first automated learning strategy to significantly and consistently improve the effectiveness of polynomial filter learning with a highly efficient learning algorithm.
## 7. Conclusion
In this work, we conduct a novel investigation into the potential and limitations of the widely used polynomial graph filter learning approach in graph modeling. Our preliminary study reveals the suboptimality and instability of the existing learning approaches, and we further uncover the severe overfitting issues as a plausible explanation for its failures. To address these limitations, we propose Auto-Polynomial, a novel and general automated polynomial graph filter learning framework to improve the effectiveness and generalization of any polynomial-based GNNs, making this work orthogonal and complementary to existing efforts in this research topic. Comprehensive experiments and ablation studies demonstrate the significant and consistent improvements of Auto-Polynomial in various learning settings. This work further unleashes the potential of polynomial graph filter-based graph modeling.
|
2306.04810 | Correlative Information Maximization: A Biologically Plausible Approach
to Supervised Deep Neural Networks without Weight Symmetry | The backpropagation algorithm has experienced remarkable success in training
large-scale artificial neural networks; however, its biological plausibility
has been strongly criticized, and it remains an open question whether the brain
employs supervised learning mechanisms akin to it. Here, we propose correlative
information maximization between layer activations as an alternative normative
approach to describe the signal propagation in biological neural networks in
both forward and backward directions. This new framework addresses many
concerns about the biological-plausibility of conventional artificial neural
networks and the backpropagation algorithm. The coordinate descent-based
optimization of the corresponding objective, combined with the mean square
error loss function for fitting labeled supervision data, gives rise to a
neural network structure that emulates a more biologically realistic network of
multi-compartment pyramidal neurons with dendritic processing and lateral
inhibitory neurons. Furthermore, our approach provides a natural resolution to
the weight symmetry problem between forward and backward signal propagation
paths, a significant critique against the plausibility of the conventional
backpropagation algorithm. This is achieved by leveraging two alternative, yet
equivalent forms of the correlative mutual information objective. These
alternatives intrinsically lead to forward and backward prediction networks
without weight symmetry issues, providing a compelling solution to this
long-standing challenge. | Bariscan Bozkurt, Cengiz Pehlevan, Alper T Erdogan | 2023-06-07T22:14:33Z | http://arxiv.org/abs/2306.04810v3 | Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry
###### Abstract
The backpropagation algorithm has experienced remarkable success in training large-scale artificial neural networks; however, its biological plausibility is disputed, and it remains an open question whether the brain employs supervised learning mechanisms akin to it. Here, we propose correlative information maximization between layer activations as an alternative normative approach to describe the signal propagation in biological neural networks in both forward and backward directions. This new framework addresses many concerns about the biological plausibility of conventional artificial neural networks and the backpropagation algorithm. The coordinate descent-based optimization of the corresponding objective, combined with the mean square error loss function for fitting labeled supervision data, gives rise to a neural network structure that emulates a more biologically realistic network of multi-compartmental pyramidal neurons with dendritic processing and lateral inhibitory neurons. Furthermore, our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths, a significant critique against the plausibility of the conventional backpropagation algorithm. This is achieved by leveraging two alternative, yet equivalent forms of the correlative mutual information objective. These alternatives intrinsically lead to forward and backward prediction networks without weight symmetry issues, providing a compelling solution to this long-standing challenge.
## 1 Introduction
How biological neural networks learn in a supervised manner has long been an open problem. The backpropagation algorithm Rumelhart et al. (1986), with its remarkable success in training large-scale artificial neural networks and intuitive structure, has inspired proposals for how biologically plausible neural networks can perform the necessary efficient credit-assignment for supervised learning in deep neural architectures (Whittington and Bogacz, 2019). Nonetheless, certain aspects of the backpropagation algorithm, combined with the oversimplified nature of artificial neurons, have been viewed as impediments to proposals rooted in this inspiration Crick (1989).
One of the primary critiques regarding the biological plausibility of the backpropagation algorithm is the existence of a parallel backward path for backpropagating error from the output towards the input, which uses the same synaptic weights as the forward path (Rumelhart et al., 1986; Whittington and Bogacz, 2019; Grossberg, 1987). Although such weight transport, or weight symmetry, is deemed highly unlikely based on experimental evidence Crick (1989), Grossberg (1987), some biologically plausible frameworks still exhibit this feature, which is justified by the symmetric structure of the
Hebbian updates employed in these frameworks Whittington and Bogacz (2019); Xie and Seung (2003); Scellier and Bengio (2017).
The concerns about the simplicity of artificial neurons have been addressed by models which incorporate multi-compartment neuron models into networked architectures and ascribe important functions to dendritic processing in credit assignment (Larkum, 2013; Urbanczik and Senn, 2014; Sacramento et al., 2018; Golkar et al., 2022). This new perspective has enabled the development of neural networks with improved biological plausibility.
In this article, we propose the use of correlative information maximization (CorInfoMax) among consecutive layers of a neural network as a new supervised objective for biologically plausible models, which offers
* a principled solution to the weight symmetry problem: our proposed information theoretic criterion aims to maximize the linear dependence between the signals in two neighboring layers, naturally leading to the use of linear or affine transformations in between them. A key property of this approach is that employing two alternative expressions for the correlative mutual information (CMI) results in potentially _asymmetric forward and backward prediction networks_, offering a natural solution to the weight transport problem. Consequently, predictive coding in both directions emerges as the inherent solution to the correlative information maximization principle, fostering signal transmission in both forward and top-down directions through asymmetrical connections. While the CorInfoMax principle enhances information flow in both directions, the introduction of set membership constraints on the layer activations, such as non-negativity, through activation nonlinearities and lateral inhibitions, encourages compression of information and sparse representations Bozkurt et al. (2023).
* a normative approach for deriving networks with multi-compartment neurons: the gradient search-based optimization of the CorInfoMax objective naturally leads to network models that employ multi-compartment pyramidal neuron models accompanied by interneurons as illustrated in Figure 1.
As derived and explained in detail in Section 2, the resulting networks incorporate lateral connections and auto-synapses to increase the entropy of a layer, promoting utilization of all dimensions within the representation space of that layer. Meanwhile, asymmetric feedforward and feedback connections act as forward and backward predictors, respectively, to reduce the conditional entropies between layers, targeting the elimination of redundancy.
### Related work
#### 1.1.1 Multi-compartmental neuron model based biologically plausible approaches
Experimentally grounded studies, such as (Larkum, 2013; Petreanu et al., 2009), have been influential in considering a role for dendritic processing in multi-compartmental neurons for learning and credit assignment (Richards and Lillicrap, 2019). Subsequent research has explored biologically plausible models with supervised learning functionality, such as the two-compartment neuron model by Urbanczik and Senn (2014) and the three-compartment pyramidal neuron model by Sacramento et al. (2018). Both models integrate non-Hebbian learning and spike-time-dependent plasticity, while the latter includes SST interneurons (Urban-Ciecko and Barth, 2016). Similar frameworks have been proposed by (Guerguiev et al., 2017) and (Golkar et al., 2022), with the latter introducing a normative framework based on a multi-compartmental neuron structure, top-down feedback, lateral and feedforward connections, and Hebbian and non-Hebbian learning rules, emerging from the optimization of a prediction error objective with a whitening constraint on co-layer neurons.
In a similar vein to (Golkar et al., 2022), we propose an alternative normative framework based on information maximization principle. In this framework, the three-compartment structure and associated forward, top-down and lateral synaptic connections stem from the maximization of CMI between adjacent layers, without the imposition of any whitening constraint.
#### 1.1.2 Weight symmetry problem
A central concern regarding the biological plausibility of the backpropagation algorithm pertains to the weight symmetry issue: synaptic weights in the feedback path for error backpropagation are transposes of those used in the forward inference path (Whittington and Bogacz, 2019; Crick, 1989; Grossberg, 1987). The requirement of tied weights in backpropagation is questionable for
physically distinct feedforward and feedback paths in biological systems, leading many researchers to focus on addressing the weight symmetry issue.
Various strategies have been devised to address the weight symmetry issue, encompassing the employment of random and fixed feedback weights (Lillicrap et al., 2016), and the introduction of antisymmetry through separate random initializations (Amit, 2019). Liao et al. (2015) showed that the sign of the feedback weights (rather than their magnitude) affects the learning performance, and proposed the sign-symmetry algorithm.
Intriguingly, this symmetric weight structure is also observed in biologically plausible frameworks such as predictive coding (PC) (Rao and Ballard, 1999, Whittington and Bogacz, 2017, Song et al., 2020), equilibrium propagation (EP) (Scellier and Bengio, 2017b, Laborieux et al., 2021, Laborieux and Zenke, 2022), and similarity matching (Qin et al., 2021). This phenomenon can be rationalized by the transpose symmetry of the Hebbian update with respect to inputs and outputs. The EP framework in (Laborieux et al., 2021) unties forward and backward connections inspired by (Scellier et al., 2018, Kolen and Pollack, 1994), and only yields small performance degradation. A more recent approach by Golkar et al. (2022) addresses this challenge by integrating two alternative forward prediction error loss function terms associated with the same network layer and leveraging presumed whitening constraints to eliminate shared feedback coefficients.
In existing predictive coding-based schemes such as (Rao and Ballard, 1999, Whittington and Bogacz, 2017, Song et al., 2020), the loss function contains only forward prediction error terms. The feedback connection with symmetric weights, which backpropagates forward prediction error, emerges due to the gradient-based optimization of the PC loss. In contrast, our framework's crucial contribution is the adoption of two alternative expressions for the correlative mutual information between consecutive network layers as the central normative approach. Utilizing these two alternatives naturally leads to both forward and backward prediction paths with asymmetric weights, promoting information flow in both feedforward and top-down directions. Unlike the work of (Golkar et al., 2022), our method circumvents the need for layer whitening constraints and additional forward prediction terms to achieve asymmetric weights.
#### 1.1.3 Correlative information maximization
Information maximization has been proposed as a governing or guiding principle in several machine learning and neuroscience frameworks for different tasks: (i) The propagation of information within a self-organized network as pioneered by Linsker (1988). (ii) Extracting hidden features or factors associated with observations by maximizing information between the input and its internal representation such as independent component analysis (ICA-InfoMax) approach by Bell and Sejnowski (1995). In the neuroscience domain, the motivation has been to provide normative explanations to the behaviour of cortical activities evidenced by experimental work, such as orientation and visual stimuli length selectivity of primary visual cortex neurons (Hubel and Wiesel, 1959; Bell and Sejnowski, 1997). The same idea has been recently extended in the machine learning field by the Deep Infomax approach where the goal is to transfer maximum information from the input of a deep network to its final layer, while satisfying prior distribution constraints on the output representations (Hjelm et al., 2019). (iii) Matching representations corresponding to two alternative augmentations or modalities of the same input in the context of self-supervised learning (Becker and Hinton, 1992).
Correlative mutual information maximization has been recently proposed as an alternative for Shannon Mutual Information (SMI), due to its desirable properties (Erdogan, 2022): (i) maximization of CMI is equivalent to maximizing linear dependence, which may be more relevant than establishing arbitrary nonlinear dependence in certain applications (Ozsoy et al., 2022), (ii) it is based only on the second order statistics, making it relatively easier to optimize. Erdogan (2022) proposed the use of CorInfoMax for solving blind source separation (BSS) problem to retrieve potentially correlated components from their mixtures. Ozsoy et al. (2022) proposed maximizing the CMI between the representations of two different augmentations of the same input as a self-supervised learning approach. More recently, Bozkurt et al. (2023) introduced an unsupervised framework to generate biologically plausible neural networks for the BSS problem with infinitely many domain selections using the CMI objective.
In this article, we suggest employing the CorInfoMax principle for biologically plausible supervised learning. The key difference compared to the unsupervised framework presented in (Bozkurt et al., 2023) is the utilization of two alternative forms of mutual information. This leads to a bidirectional information flow that enables error backpropagation without encountering the weight symmetry issue.
## 2 Deep correlative information maximization
### Network data model
We assume a dataset with \(L\) input data points \(\mathbf{x}[t]\in\mathbb{R}^{m},t=1,\ldots,L\), and let \(\mathbf{y}_{T}[t]\in\mathbb{R}^{n}\) be the corresponding labels. We consider a neural network with \(P-1\) hidden layers whose activities are denoted by \(\mathbf{r}^{(k)}\in\mathbb{R}^{N_{k}},k=1,\ldots,P-1\). For notational simplicity, we also denote the input and output of the network by \(\mathbf{r}^{(0)}\) and \(\mathbf{r}^{(P)}\), i.e., \(\mathbf{r}^{(0)}[t]=\mathbf{x}[t]\) and \(\mathbf{r}^{(P)}[t]=\hat{\mathbf{y}}[t]\). We consider polytopic constraints for the hidden and output layer activities, i.e., \(\mathbf{r}^{(k)}\in\mathcal{P}^{(k)}\), where \(\mathcal{P}^{(k)}\) is the presumed polytopic domain for the \(k\)-th layer (Bozkurt et al., 2023; Tatli and Erdogan, 2021). We note that the polytopic assumptions are plausible, as the activations of neurons are bounded in practice. In particular, we make the specific assumption that \(\mathcal{P}^{(k)}=\mathcal{B}_{\infty,+}=\{\mathbf{r}:\mathbf{0}\leq\mathbf{r}\leq\mathbf{1}\}\), i.e., (normalized) activations lie in a nonnegative unit-hypercube. Such nonnegativity constraints have been connected to disentangling behavior (Plumbley, 2003; Pehlevan et al., 2017; Whittington et al., 2023); however, we consider extensions in the form of alternative polytopic sets corresponding to different feature priors Bozkurt et al. (2023). The corresponding label \(\mathbf{y}_{T}\) can be a one-hot encoded label vector for a classification problem, or a discrete- or continuous-valued vector for a regression problem.
### Correlative information maximization based signal propagation
#### 2.2.1 Stochastic CorInfoMax based supervised criterion
We propose the total correlative mutual information among consecutive layers, augmented with the mean-square-error (MSE) training loss, as the stochastic objective:
\[J(\mathbf{r}^{(1)},\ldots,\mathbf{r}^{(P)})=\sum_{k=0}^{P-1}I^{(\varepsilon_ {k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})-\frac{\beta}{2}E(\|\mathbf{y}_{T} -\mathbf{r}^{(P)}\|_{2}^{2}), \tag{1}\]
where, as defined in [Erdogan, 2022, Ozsoy et al., 2022] and in Appendix A,
\[\overset{\rightarrow}{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})=\frac{1}{2}\log\det\left(\mathbf{R}_{\mathbf{r}^{(k+1)}}+\epsilon_{k}\mathbf{I}\right)-\frac{1}{2}\log\det\left(\mathbf{R}_{\overset{\rightarrow}{\mathbf{e}}_{*}^{(k+1)}}+\epsilon_{k}\mathbf{I}\right), \tag{2}\]
and \(\mathbf{R}_{\mathbf{r}^{(k+1)}}=E(\mathbf{r}^{(k+1)}\mathbf{r}^{(k+1)}{}^{T})\), \(\mathbf{R}_{\mathbf{r}^{(k)}\mathbf{r}^{(l)}}=E(\mathbf{r}^{(k)}\mathbf{r}^{(l)}{}^{T})\) are the autocorrelation and the cross-correlation matrices corresponding to the layer activations, respectively. Furthermore, \(\mathbf{R}_{\overset{\rightarrow}{\mathbf{e}}_{*}^{(k+1)}}=\mathbf{R}_{\mathbf{r}^{(k+1)}}-\mathbf{R}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}{}^{T}(\mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon_{k}\mathbf{I})^{-1}\mathbf{R}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}\) corresponds to the error autocorrelation matrix for the best linear regularized minimum MSE predictor of \(\mathbf{r}^{(k+1)}\) from \(\mathbf{r}^{(k)}\). We refer to this problem as the _regularized forward prediction problem_ represented by the optimization
\[\underset{\mathbf{W}_{ff}^{(k)}}{\text{minimize}}\ E(\|\overset{\rightarrow}{ \mathbf{e}}^{(k+1)}\|_{2}^{2})+\epsilon_{k}\|\mathbf{W}_{ff}^{(k)}\|_{F}^{2}\ \ \ \ \text{s.t.}\ \ \ \overset{\rightarrow}{\mathbf{e}}^{(k+1)}=\mathbf{r}^{(k+1)}-\mathbf{W}_{ff}^{(k )}\mathbf{r}^{(k)}, \tag{3}\]
and \(\overset{\rightarrow}{\mathbf{e}}_{*}^{(k+1)}\) is the forward prediction error corresponding to the optimal forward predictor \(\mathbf{W}_{ff,*}^{(k)}\).
An equivalent, alternative expression for the CMI can be written as (Appendix A)
\[\overset{\leftarrow}{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})=\frac{1}{2}\log\det(\mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon_{k}\mathbf{I})-\frac{1}{2}\log\det\left(\mathbf{R}_{\overset{\leftarrow}{\mathbf{e}}_{*}^{(k)}}+\epsilon_{k}\mathbf{I}\right), \tag{4}\]
where \(\mathbf{R}_{\overset{\leftarrow}{\mathbf{e}}_{*}^{(k)}}=\mathbf{R}_{\mathbf{r}^ {(k)}}-\mathbf{R}_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}{}^{T}(\mathbf{R}_{ \mathbf{r}^{(k+1)}}+\epsilon_{k}\mathbf{I})^{-1}\mathbf{R}_{\mathbf{r}^{(k+1)} \mathbf{r}^{(k)}}\) corresponds to the error auto-correlation matrix for the best linear regularized minimum MSE predictor of \(\mathbf{r}^{(k)}\) from \(\mathbf{r}^{(k+1)}\). The corresponding _regularized backward prediction problem_ is defined by the optimization
\[\underset{\mathbf{W}_{fb}^{(k)}}{\text{minimize}}\ E(\|\overset{\leftarrow}{ \mathbf{e}}^{(k)}\|_{2}^{2})+\epsilon_{k}\|\mathbf{W}_{fb}^{(k)}\|_{F}^{2}\ \ \ \ \text{s.t.}\ \ \ \overset{\leftarrow}{\mathbf{e}}^{(k)}=\mathbf{r}^{(k)}-\mathbf{W}_{fb}^{(k)} \mathbf{r}^{(k+1)}. \tag{5}\]
We observe that the two alternative yet equivalent representations of the correlative mutual information between layers \(\mathbf{r}^{(k)}\) and \(\mathbf{r}^{(k+1)}\) in (2) and (4) are intrinsically linked to the forward and backward prediction problems between these layers, which are represented by the optimizations in (3) and (5), respectively. As we will demonstrate later, the existence of these two alternative forms for the CMI plays a crucial role in deriving a neural network architecture that overcomes the weight symmetry issue.
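As a quick numerical check of this equivalence, the following NumPy sketch evaluates both forms of the CMI on toy activation data; the layer sizes, the linear-plus-noise data model, and all variable names are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_k, n_k1, T, eps = 5, 4, 1000, 0.1

# Toy activations for two consecutive layers r^(k) and r^(k+1)
r_k = rng.standard_normal((T, n_k))
r_k1 = r_k @ rng.standard_normal((n_k, n_k1)) + 0.5 * rng.standard_normal((T, n_k1))

R_k = r_k.T @ r_k / T       # autocorrelation of r^(k)
R_k1 = r_k1.T @ r_k1 / T    # autocorrelation of r^(k+1)
R_cr = r_k.T @ r_k1 / T     # cross-correlation R_{r^(k) r^(k+1)}

logdet = lambda A: np.linalg.slogdet(A)[1]
I_k, I_k1 = np.eye(n_k), np.eye(n_k1)

# Forward form, Eq. (2): layer-(k+1) entropy minus forward-error entropy
R_e_fwd = R_k1 - R_cr.T @ np.linalg.solve(R_k + eps * I_k, R_cr)
cmi_fwd = 0.5 * (logdet(R_k1 + eps * I_k1) - logdet(R_e_fwd + eps * I_k1))

# Backward form, Eq. (4): layer-k entropy minus backward-error entropy
R_e_bwd = R_k - R_cr @ np.linalg.solve(R_k1 + eps * I_k1, R_cr.T)
cmi_bwd = 0.5 * (logdet(R_k + eps * I_k) - logdet(R_e_bwd + eps * I_k))

print(cmi_fwd, cmi_bwd)     # the two forms agree up to numerical precision
```

The agreement is a consequence of the Schur-complement determinant identity applied to the joint regularized correlation matrix of the two layers.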
#### 2.2.2 Sample-based supervised CorInfoMax criterion
Our aim is to construct a biologically plausible neural network that optimizes the total CMI, equation 1, in an adaptive manner. Here, we obtain a sample-based version of (1) as a step towards that goal.
We first define the weighted sample auto and cross-correlation matrices as follows:
\[\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]=\frac{1-\lambda_{\mathbf{r}}}{1- \lambda_{\mathbf{r}}^{t}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i}\mathbf{r}^{ (k)}[i]\mathbf{r}^{(k)}[i]^{T},\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^ {(k+1)}}[t]=\frac{1-\lambda_{\mathbf{r}}}{1-\lambda_{\mathbf{r}}^{t}}\sum_{i=1 }^{t}\lambda_{\mathbf{r}}^{t-i}\mathbf{r}^{(k)}[i]\mathbf{r}^{(k+1)}[i]^{T}, \tag{6}\]
for \(k=0,\ldots,P\), respectively, where \(0\ll\lambda_{\mathbf{r}}<1\) is the forgetting factor. Next, we define two equivalent forms of the sample-based CMI, \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\):
\[\overset{\rightarrow}{\hat{I}}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t] =\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I})-\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\overset{\rightarrow}{\mathbf{e}}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I}), \tag{7}\] \[\overset{\leftarrow}{\hat{I}}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t] =\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k}\mathbf{I})-\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k)}}[t]+\epsilon_{k}\mathbf{I}), \tag{8}\]
where \(\hat{\mathbf{R}}_{\overset{\rightarrow}{\mathbf{e}}^{(k+1)}}[t]=\hat{\mathbf{R}}_{\mathbf{r}^{(k+1)}}[t]-\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]^{T}(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k}\mathbf{I})^{-1}\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]\) is the autocorrelation matrix of the forward prediction error at level-\((k+1)\), \(\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\), corresponding to the best linear weighted regularized least squares predictor of \(\mathbf{r}^{(k+1)}[t]\) from the lower level activations \(\mathbf{r}^{(k)}[t]\). Similarly, \(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k)}}[t]=\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]-\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t](\hat{\mathbf{R}}_{\mathbf{r}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I})^{-1}\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]^{T}\) is the autocorrelation matrix of the backward prediction error at level-\((k)\), \(\overset{\leftarrow}{\mathbf{e}}^{(k)}[t]\), corresponding to the best linear weighted regularized least squares predictor of \(\mathbf{r}^{(k)}[t]\) from the higher level activations \(\mathbf{r}^{(k+1)}[t]\).
The sample-based CorInfoMax optimization can be written as:
\[\underset{\mathbf{r}^{(k)}[t],k=0,\dots,P}{\operatorname{maximize}} \sum_{k=0}^{P-1}\hat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^ {(k+1)})[t]-\frac{\beta}{2}\|\mathbf{y}_{T}[t]-\mathbf{r}^{(P)}[t]\|_{2}^{2} \tag{9a}\] \[\operatorname{subject\ to} \mathbf{r}^{(k)}[t]\in\mathcal{P}^{(k)},k=1,\dots,P,\] (9b) \[\mathbf{r}^{(0)}[t]=\mathbf{x}[t], \tag{9c}\]
The first-order Taylor series approximations of the \(\log\det\) terms on the right-hand sides of (7) and (8) are:
\[\log\det\left(\hat{\mathbf{R}}_{\overset{\rightarrow}{\mathbf{e}}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I}\right)\approx\frac{1}{\epsilon_{k}}\operatorname{Tr}\left(\hat{\mathbf{R}}_{\overset{\rightarrow}{\mathbf{e}}^{(k+1)}}[t]\right)+N_{k+1}\log(\epsilon_{k})\] \[=\frac{1}{\epsilon_{k}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i}\|\mathbf{r}^{(k+1)}[i]-\mathbf{W}_{ff,*}^{(k)}[t]\mathbf{r}^{(k)}[i]\|_{2}^{2}+\epsilon_{k}\|\mathbf{W}_{ff,*}^{(k)}[t]\|_{F}^{2}+N_{k+1}\log(\epsilon_{k}), \tag{10}\] \[\quad\log\det\left(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k)}}[t]+\epsilon_{k}\mathbf{I}\right)\approx\frac{1}{\epsilon_{k}}\operatorname{Tr}\left(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k)}}[t]\right)+N_{k}\log(\epsilon_{k})\] \[=\frac{1}{\epsilon_{k}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i}\|\mathbf{r}^{(k)}[i]-\mathbf{W}_{fb,*}^{(k)}[t]\mathbf{r}^{(k+1)}[i]\|_{2}^{2}+\epsilon_{k}\|\mathbf{W}_{fb,*}^{(k)}[t]\|_{F}^{2}+N_{k}\log(\epsilon_{k}). \tag{11}\]
Note that in (10), \(\mathbf{W}_{ff,*}^{(k)}[t]\) denotes the optimal linear regularized weighted least squares forward predictor coefficients in predicting \(\mathbf{r}^{(k+1)}[i]\) from \(\mathbf{r}^{(k)}[i]\) for \(i=1,\dots,t\). Likewise, \(\mathbf{W}_{fb,*}^{(k)}[t]\) in (11) represents the optimal linear regularized weighted least squares backward predictor coefficients in predicting \(\mathbf{r}^{(k)}[i]\) from \(\mathbf{r}^{(k+1)}[i]\) for \(i=1,\dots,t\). Consequently, the optimal choices of forward and backward predictor coefficients are coupled with the optimal choices of layer activations.
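For reference, the Taylor step underlying (10) and (11) is the standard expansion, accurate when the error correlations are small relative to \(\epsilon_{k}\):

\[\log\det\left(\hat{\mathbf{R}}+\epsilon_{k}\mathbf{I}\right)=N\log(\epsilon_{k})+\log\det\left(\mathbf{I}+\epsilon_{k}^{-1}\hat{\mathbf{R}}\right)\approx N\log(\epsilon_{k})+\frac{1}{\epsilon_{k}}\operatorname{Tr}\left(\hat{\mathbf{R}}\right),\]

since \(\log\det(\mathbf{I}+\mathbf{A})=\operatorname{Tr}\log(\mathbf{I}+\mathbf{A})\approx\operatorname{Tr}(\mathbf{A})\) to first order; expanding the trace of the weighted error correlation matrix then yields the weighted sums of squared prediction errors and the ridge terms on the right-hand sides of (10) and (11).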
In the online optimization process, we initially relax this requirement and start with random predictor coefficient selections. During the learning process, we apply a coordinate ascent-based procedure on activation signals and predictor coefficients. Specifically, at time step-\(t\), we first optimize with respect to the activations \(\{\mathbf{r}^{(k)}[t],k=1,\dots,P\}\), where we assume predictor coefficients to be fixed. Next, we update the forward and backward predictor coefficients \(\mathbf{W}_{ff}^{(k)}\) and \(\mathbf{W}_{fb}^{(k)}\), for \(k=1,\dots,P\), to reduce the corresponding forward and backward prediction errors, respectively. As the algorithm iterations progress, the predictor coefficients converge to the vicinity of their optimal values.
For the first phase of the online optimization, we employ a projected gradient ascent-based approach for activations: for \(k=1,\dots,P-1\), the layer activation vector \(\mathbf{r}^{(k)}[t]\) is included in the objective function terms \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]\) and \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\). Therefore, to calculate the gradient with respect to \(\mathbf{r}^{(k)}[t]\), we can use the modified form of expressions in (7) and (8), where we use the approximations in (10)-(11), and the optimal predictors are replaced with their current estimates:
\[\nabla_{\mathbf{r}^{(k)}}\hat{J}_{k}(\mathbf{r}^{(k)})[t]=\nabla_{\mathbf{r}^{(k)}}\overset{\rightarrow}{\hat{I}}^{(\epsilon_{k-1})}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]+\nabla_{\mathbf{r}^{(k)}}\overset{\leftarrow}{\hat{I}}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\] \[=\tfrac{1}{2}\nabla_{\mathbf{r}^{(k)}}(\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k-1}\mathbf{I})+\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k}\mathbf{I}))-\tfrac{1}{\epsilon_{k-1}}\overset{\rightarrow}{\mathbf{e}}^{(k)}[t]-\tfrac{1}{\epsilon_{k}}\overset{\leftarrow}{\mathbf{e}}^{(k)}[t], \tag{12}\]
where
\[\overset{\rightarrow}{\mathbf{e}}^{(k)}[t]=\mathbf{r}^{(k)}[t]-\mathbf{W}_{ff}^{(k-1)}[t]\mathbf{r}^{(k-1)}[t],\quad\overset{\leftarrow}{\mathbf{e}}^{(k)}[t]=\mathbf{r}^{(k)}[t]-\mathbf{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t] \tag{13}\]
are forward and backward prediction errors at level-\(k\), respectively. Following the procedure in Bozkurt et al. (2023), for the gradient term in (12), we can write:
\[\frac{1}{2}\nabla_{\mathbf{r}^{(k)}}(\log\det(\hat{\mathbf{R}}_{ \mathbf{r}^{(k)}}[t]+\epsilon_{k-1}\mathbf{I})+\log\det(\hat{\mathbf{R}}_{\mathbf{ r}^{(k)}}[t]+\epsilon_{k}\mathbf{I}))=2\gamma\mathbf{B}_{\mathbf{r}^{(k)}}[t]\mathbf{r}^{(k)}[t], \tag{14}\]
where \(\mathbf{B}_{\mathbf{r}^{(k)}}[t]=(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_ {k-1}\mathbf{I})^{-1}\approx(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k} \mathbf{I})^{-1}\) and \(\gamma=\frac{1-\lambda_{\mathbf{r}}}{\lambda_{\mathbf{r}}}\). The gradient of the objective for the final layer can be expressed as:
\[\nabla_{\mathbf{r}^{(P)}}(\overset{\rightarrow}{\hat{I}}^{(\epsilon_{P-1})}(\mathbf{r}^{(P-1)},\mathbf{r}^{(P)})[t]-\frac{\beta}{2}\|\mathbf{r}^{(P)}[t]-\mathbf{y}_{T}[t]\|_{2}^{2})\] \[=\gamma\mathbf{B}_{\mathbf{r}^{(P)}}[t]\mathbf{r}^{(P)}[t]-\frac{1}{\epsilon_{P-1}}\overset{\rightarrow}{\mathbf{e}}^{(P)}[t]-\beta(\mathbf{r}^{(P)}[t]-\mathbf{y}_{T}[t]).\]
### Neural network formulation based on information maximization
In this section, we develop a biologically plausible neural network grounded on the correlative information maximization-based network propagation model outlined in Section 2.2. To achieve this, we employ projected gradient ascent optimization for determining layer activations \(\mathbf{r}^{(1)}[t],\mathbf{r}^{(2)}[t],\ldots,\mathbf{r}^{(P)}[t]\), which shape the network structure and dynamics, as well as updating the corresponding synapses that govern the learning dynamics.
#### 2.3.1 Network structure and neural dynamics
In this section, we show that the projected gradient ascent solution to the optimization in (9) defines a multilayer recurrent neural network. To this end, we introduce the intermediate variable \(\mathbf{u}^{(k)}\) as the updated layer-\(k\) activations prior to the projection onto the domain set \(\mathcal{P}^{(k)}\). Utilizing the gradient expressions in (12)-(14), we can express the network dynamics for layers \(k=1,\ldots,P-1\) as follows:
\[\tau_{\mathbf{u}}\frac{d\mathbf{u}^{(k)}[t;s]}{ds} =-g_{lk}\mathbf{u}^{(k)}[t;s]+\frac{1}{\epsilon_{k}}\boldsymbol{ M}^{(k)}[t]\mathbf{r}^{(k)}[t;s]-\frac{1}{\epsilon_{k-1}}\overset{\rightarrow}{ \mathbf{e}}_{u}^{(k)}[t;s]-\frac{1}{\epsilon_{k}}\overset{\leftarrow}{ \mathbf{e}}_{u}^{(k)}[t;s], \tag{15}\] \[\overset{\rightarrow}{\mathbf{e}}_{u}^{(k)}[t;s] =\mathbf{u}^{(k)}[t;s]-\boldsymbol{W}_{ff}^{(k-1)}[t]\mathbf{r}^ {(k-1)}[t;s],\quad\overset{\leftarrow}{\mathbf{e}}_{u}^{(k)}[t;s]=\mathbf{u} ^{(k)}[t;s]-\boldsymbol{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t;s],\] (16) \[\mathbf{r}^{(k)}[t;s] =\sigma_{+}(\mathbf{u}^{(k)}[t;s]), \tag{17}\]
where \(\tau_{\mathbf{u}}\) is the update time constant, \(\boldsymbol{M}^{(k)}[t]=2\epsilon_{k}(\gamma\boldsymbol{B}^{(k)}[t]+g_{lk} \boldsymbol{I})\), and \(\sigma_{+}\) represents the elementwise clipped-ReLU function corresponding to the projection onto the nonnegative unit-hypercube \(\mathcal{B}_{\infty,+}\), defined as \(\sigma_{+}(u)=\min(1,\max(u,0))\).
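As an illustration of these dynamics, here is a minimal NumPy sketch of Euler integration of (15)-(17) for a single hidden layer; the layer sizes, constants, and random coefficient matrices are toy assumptions, not trained values:

```python
import numpy as np

def sigma_plus(u):
    """Elementwise clipped ReLU: projection onto the unit hypercube."""
    return np.clip(u, 0.0, 1.0)

def layer_step(u, r_prev, r, r_next, W_ff, W_fb, M,
               g_leak=0.1, eps_prev=0.5, eps_k=0.5, dt=0.05, tau=1.0):
    """One Euler step of Eqs. (15)-(17) for hidden layer k."""
    e_fwd = u - W_ff @ r_prev            # forward prediction error, Eq. (16)
    e_bwd = u - W_fb @ r_next            # backward prediction error, Eq. (16)
    du = -g_leak * u + (M @ r) / eps_k - e_fwd / eps_prev - e_bwd / eps_k
    u = u + (dt / tau) * du              # Eq. (15)
    return u, sigma_plus(u)              # Eq. (17)

rng = np.random.default_rng(1)
n_prev, n_k, n_next = 6, 5, 4
W_ff = 0.1 * rng.standard_normal((n_k, n_prev))
W_fb = 0.1 * rng.standard_normal((n_k, n_next))
M = 0.1 * np.eye(n_k)                    # lateral term M^(k)
u, r = np.zeros(n_k), np.zeros(n_k)
r_prev, r_next = rng.random(n_prev), rng.random(n_next)
for _ in range(200):                     # relax towards equilibrium
    u, r = layer_step(u, r_prev, r, r_next, W_ff, W_fb, M)
```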
To reinterpret the dynamics in (15) to (17) as a multi-compartmental neural network, for \(k=1,\ldots,P-1\), we define the signals:
\[\mathbf{v}_{A}^{(k)}[t;s]=\boldsymbol{M}^{(k)}[t]\boldsymbol{r}^{(k)}[t;s]+ \boldsymbol{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t;s],\quad\mathbf{v}_{B}^{(k)} [t;s]=\boldsymbol{W}_{ff}^{(k-1)}[t]\mathbf{r}^{(k-1)}[t;s], \tag{18}\]
which allow us to rewrite the network activation dynamics (15) to (17) as:
\[\tau_{\mathbf{u}}\frac{d\mathbf{u}^{(k)}[t;s]}{ds}=-g_{lk} \mathbf{u}^{(k)}[t;s]+g_{A,k}(\mathbf{v}_{A}^{(k)}[t;s]-\mathbf{u}^{(k)}[t;s] )+g_{B,k}(\mathbf{v}_{B}^{(k)}[t;s]-\mathbf{u}^{(k)}[t;s]), \tag{19}\] \[\mathbf{r}^{(k)}[t;s]=\sigma_{+}(\mathbf{u}^{(k)}[t;s]), \tag{20}\]
where \(g_{A,k}=\frac{1}{\epsilon_{k-1}}\) and \(g_{B,k}=\frac{1}{\epsilon_{k}}\). Similarly, for the output layer, we employ the same expressions as (19) and (20) with \(k=P\), except that in this case we have:
\[\mathbf{v}_{A}^{(P)}[t;s]=\boldsymbol{M}^{(P)}[t]\mathbf{r}^{(P)}[t;s]-(\mathbf{r}^{(P)}[t;s]-\boldsymbol{y}_{T}[t]),\quad\mathbf{v}_{B}^{(P)}[t;s]=\boldsymbol{W}_{ff}^{(P-1)}[t]\mathbf{r}^{(P-1)}[t;s], \tag{21}\]
where \(g_{B,P}=\frac{1}{\epsilon_{P-1}}\), \(g_{A,P}=\beta\) and \(\boldsymbol{M}^{(P)}[t]=\beta^{-1}(\gamma\boldsymbol{B}^{(P)}[t]+g_{lk} \boldsymbol{I})\).
Remarkably, the equations (18) to (21) reveal a biologically plausible neural network that incorporates three-compartmental pyramid neuron models, as presented in (Sacramento et al., 2018; Golkar et al., 2022). This intricate architecture, of which two-layer segment is demonstrated in Figure 1, naturally emerges from the proposed correlative information maximization framework. In this network structure:
* \(\mathbf{u}^{(k)}\) embodies the membrane potentials for neuronal somatic compartments of the neurons at layer-\(k\), where \(\tau_{\mathbf{u}}\) is the membrane leak time constant of soma.
* \(\mathbf{v}_{B}^{(k)}\) corresponds to membrane potentials for basal dendrite compartments, receiving feedforward input originating from the previous layer.
* \(\mathbf{v}_{A}^{(k)}\) denotes the membrane potentials for distal apical dendrite compartments, which gather top-down input from the subsequent layer and lateral inputs represented by \(\boldsymbol{M}^{(k)}[t]\mathbf{r}^{(k)}\) in (18) and (21). Decomposing \(\boldsymbol{M}^{(k)}\) into \(\boldsymbol{D}^{(k)}-\boldsymbol{O}^{(k)}\), we find that \(\boldsymbol{D}^{(k)}\) mirrors autapses (Lubke et al., 1996), and the off-diagonal component \(\boldsymbol{O}^{(k)}\) corresponds to lateral inhibition synapses. We use \(\mathbf{i}^{(k)}=-\boldsymbol{O}^{(k)}\mathbf{r}^{(k)}\) to represent the activations of SST interneurons (Urban-Ciecko and Barth, 2016) that generate lateral inhibitions to the apical dendrites.
* Forward (backward) prediction errors manifest in the membrane voltage differences between soma and basal (distal) compartments of the pyramidal neurons.
* Forward (backward) prediction coefficients \(\mathbf{W}_{ff}^{(k)}\) (\(\mathbf{W}_{fb}^{(k)}\)) are associated with feedforward (top-down) synapses connecting layers \((k)\) and \((k+1)\).
* The inverse of the regularization coefficient \(\epsilon_{k}\) is related to the conductance between soma and dendritic compartments. In contrast, at the output layer, the augmentation constant \(\beta\) corresponds to the conductance between soma and distal compartments. This relationship can be motivated by modifying the objective in (9a) as \[\sum_{k=0}^{P-1}\hat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[ t]+\frac{1}{2}\hat{I}^{(\beta^{-1})}(\mathbf{r}^{(P)},\mathbf{y}_{T})[t],\] (22) where, through the first-order approximation, the \(\mathbf{r}^{(P)}[t]\) dependent portion of \(\hat{I}^{(\beta^{-1})}(\mathbf{r}^{(P)},\mathbf{y}_{T})[t]\) can be expressed as \(-\beta\|\mathbf{r}^{(P)}[t]-\mathbf{W}_{fb}^{(P)}\mathbf{y}_{T}[t]\|_{2}^{2}\). For accuracy, we enforce \(\mathbf{W}_{fb}^{(P)}=\mathbf{I}\).
### Learning dynamics
The network parameters consist of the feedforward \(\mathbf{W}_{ff}^{(k)}\), feedback \(\mathbf{W}_{fb}^{(k)}\) and lateral \(\mathbf{B}^{(k)}\) coefficients. The learning dynamics of these coefficients are elaborated below:
* _Feedforward Coefficients_ are connected to the forward prediction problem defined by the optimization in (3). We can define the corresponding online optimization objective function as \[C_{ff}(\mathbf{W}_{ff}^{(k)})=\epsilon_{k}\|\mathbf{W}_{ff}^{(k)}\|_{F}^{2}+\| \overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\|_{2}^{2},\text{ for which the partial derivative is given by }\] \[\frac{\partial C_{ff}(\mathbf{W}_{ff}^{(k)}[t])}{\partial\mathbf{W}_{ff}^{(k)}}=2 \epsilon_{k}\mathbf{W}_{ff}^{(k)}[t]-2\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[ t]\mathbf{r}^{(k)}[t]^{T}.\] (23) In Appendix C, we provide a discussion on rewriting (23) in terms of the membrane voltage difference between the distal apical and soma compartments of the neuron, based on the equilibrium condition for the neuronal dynamics: \[-\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{r}^{(k)}[t]^{T}=g_{B,k} ^{-1}(g_{A,k}\mathbf{v}_{A}^{(k)}[t]-(g_{lk}+g_{A,k})\mathbf{u}_{*}^{(k)}[t] +\mathbf{h}_{*}[t])\mathbf{r}^{(k)}[t]^{T},\] (24) where \(\mathbf{h}_{*}[t]\) is nonzero only for neurons that are silent or firing at the maximum rate.
* Similarly, _Feedback Coefficients_ are connected to the backward prediction problem defined by the optimization in (5), with the corresponding online optimization objective function \(C_{fb}(\mathbf{W}_{fb}^{(k)})=\epsilon_{k}\|\mathbf{W}_{fb}^{(k)}\|_{F}^{2}+\|\overset{ \leftarrow}{\mathbf{e}}^{(k)}[t]\|_{2}^{2}\), for which the partial derivative is given by \[\frac{\partial C_{fb}(\mathbf{W}_{fb}^{(k)}[t])}{\partial\mathbf{W}_{fb}^{(k)}}=2 \epsilon_{k}\mathbf{W}_{fb}^{(k)}[t]-2\overset{\leftarrow}{\mathbf{e}}^{(k)}[t] \mathbf{r}^{(k+1)}[t]^{T}.\] (25) To compute the updates of both feedforward and feedback coefficients, we use the EP approach [Scellier and Bengio, 2017b], where the update terms are obtained from the contrastive expressions of the partial derivatives in (23) and (25) between the nudge phase, i.e., \(\beta=\beta^{\prime}>0\), and the free phase, i.e., \(\beta=0\): \[\delta\mathbf{W}_{ff}^{(k)}[t]\propto\frac{1}{\beta^{\prime}}\left(( \overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{r}^{(k)}[t]^{T})\bigg{|}_ {\beta=\beta^{\prime}}-(\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{ r}^{(k)}[t]^{T})\bigg{|}_{\beta=0}\right),\] (26) \[\delta\mathbf{W}_{fb}^{(k)}[t]\propto\frac{1}{\beta^{\prime}}\left(( \overset{\leftarrow}{\mathbf{e}}^{(k)}[t]\mathbf{r}^{(k+1)}[t]^{T})\bigg{|}_ {\beta=\beta^{\prime}}-(\overset{\leftarrow}{\mathbf{e}}^{(k)}[t]\mathbf{ r}^{(k+1)}[t]^{T})\bigg{|}_{\beta=0}\right).\] (27)
* _Lateral Coefficients_, \(\mathbf{B}^{(k)}\) are the inverses of the \(\epsilon\mathbf{I}\) perturbed correlation matrices. We can use the update rule in [Bozkurt et al., 2023] for their learning dynamics after the nudge phase: \[\mathbf{B}^{(k)}[t+1]=\lambda_{\mathbf{r}}^{-1}(\mathbf{B}^{(k)}[t]-\gamma\mathbf{z} ^{(k)}[t]\mathbf{z}^{(k)}[t]^{T}),\text{ where }\mathbf{z}^{(k)}=\mathbf{B}^{(k)}[t]\mathbf{r}^{(k)}[t].\]
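As a rough illustration of these rules, the sketch below implements the contrastive EP updates (26)-(27) and the lateral update from stored free-phase and nudged-phase activities. The function names and the way the phase statistics are collected are our own assumptions, not the authors' actual code.

```python
import numpy as np

def ep_update(err_nudged, r_nudged, err_free, r_free, beta_prime, lr):
    # Contrastive EP update (cf. Eqs. (26)-(27)): difference of the
    # error-activity outer products between nudged and free phases.
    grad = (np.outer(err_nudged, r_nudged)
            - np.outer(err_free, r_free)) / beta_prime
    return lr * grad

def lateral_update(B, r, gamma, lam_r):
    # Anti-Hebbian update of the lateral coefficients B^{(k)} after the
    # nudge phase: B <- (B - gamma * z z^T) / lambda_r with z = B r.
    z = B @ r
    return (B - gamma * np.outer(z, z)) / lam_r

# Example with random placeholder quantities for one layer pair.
rng = np.random.default_rng(1)
e_fwd_nudged, r_k_nudged = rng.normal(size=8), rng.uniform(size=6)
e_fwd_free, r_k_free = rng.normal(size=8), rng.uniform(size=6)
dW_ff = ep_update(e_fwd_nudged, r_k_nudged, e_fwd_free, r_k_free,
                  beta_prime=0.1, lr=1e-2)   # shape (8, 6), added to W_ff
B = np.eye(6)
B = lateral_update(B, r_k_free, gamma=0.01, lam_r=0.999)
```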
## 3 Discussion of results
* In (12), we devise an update for layer activation \(\mathbf{r}^{(k)}\) by employing two distinct forms of the CMI associated with \(\mathbf{r}^{(k)}\): \(\widehat{I}^{(\epsilon_{k-1})}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]\), the CMI with the preceding layer, encompassing the forward prediction error for estimating \(\mathbf{r}^{(k)}\), and \(\widehat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\), the CMI with the subsequent layer, incorporating the backward prediction error for estimating \(\mathbf{r}^{(k)}\). Employing these alternative expressions is crucial in circumventing the weight transport problem and offering a more biologically plausible framework. For further discussion, please refer to Appendix B.
* In the context of the proposed correlative information maximization framework, predictive coding naturally emerges as a crucial mechanism. By incorporating both alternative expressions of CMI, the framework focuses on minimizing both forward and backward prediction errors between adjacent layers via feedforward and feedback connections. These connections foster bidirectional information flow, thereby enhancing the overall learning process.
* Figure 1 depicts the interplay between the CorInfoMax objective and the corresponding network architecture. The emergence of lateral connections and autapses can be attributed to the maximization of the unconditional layer entropy component of the CMI, which allows for efficient utilization of the available representation dimensions. Simultaneously, the minimization of conditional entropies between adjacent layers gives rise to feedforward and feedback connections, effectively reducing redundancy within representations.
* We employ time-contrastive learning, as in GenRec (O'Reilly, 1996), EP (Scellier and Bengio, 2017b) and CSM (Qin et al., 2021), by implementing separate phases with Hebbian and anti-Hebbian updates, governed by an assumed teaching signal. It has been conjectured that the teaching signal in biological networks can be modeled by the oscillations in the brain (Whittington and Bogacz, 2019; Baldi and Pineda, 1991; Ketz et al., 2013). Although the oscillatory rhythms and their synchronization in the brain are elusive, they are believed to play a crucial role in adaptive processes such as learning and predicting upcoming events (Fell and Axmacher, 2011; Engel et al., 2001).
## 4 Numerical experiments
In this section, we evaluate the performance of our CorInfoMax framework on image classification tasks using three popular datasets: MNIST (LeCun and Cortes, 2010), Fashion-MNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009). We compare the effectiveness of our approach against other contrastive methods, such as EP (Scellier and Bengio, 2017b) and CSM (Qin et al., 2021), as well as explicit methods, including PC (Whittington and Bogacz, 2017) and PC-Nudge (Millidge et al., 2023), when training multilayer perceptron (MLP) architectures.
We examine two distinct constraints on the activations of CorInfoMax Networks: (i) \(\mathcal{B}_{\infty,+}\), representing the nonnegative part of the unit hypercube, and (ii) \(\mathcal{B}_{1,+}=\{\mathbf{r}:\mathbf{r}\geq 0,\|\mathbf{r}\|_{1}\leq 1\}\), denoting the nonnegative part of the unit \(\ell_{1}\)-norm ball (Tatli and Erdogan, 2021). Table 1 presents the test accuracy results for each algorithm, averaged over 10 realizations along with the corresponding standard deviations. These findings demonstrate that CorInfoMax networks can achieve comparable or superior
\begin{table}
\begin{tabular}{l l l l} \hline \hline & MNIST & FashionMNIST & CIFAR10 \\ \hline
**CorInfoMax-\(\mathcal{B}_{\infty,+}\)** (Appendix E.3) & \(97.62\pm 0.1\) & \(88.14\pm 0.3\) & \(51.86\pm 0.3\) \\
**CorInfoMax-\(\mathcal{B}_{1,+}\)** (Appendix E.5) & \(97.71\pm 0.1\) & \(88.09\pm 0.1\) & \(51.19\pm 0.4\) \\ EP & \(97.61\pm 0.1\) & \(88.06\pm 0.7\) & \(49.28\pm 0.5\) \\ CSM & \(98.08\pm 0.1\) & \(88.73\pm 0.2\) & \(40.79^{*}\) \\ PC & \(98.17\pm 0.2\) & \(89.31\pm 0.4\) & - \\ PC-Nudge & \(97.71\pm 0.1\) & \(88.49\pm 0.3\) & \(48.58\pm 0.7\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracy results (mean \(\pm\) standard deviation from \(n=10\) runs) for CorInfoMax networks are compared with other biologically-plausible algorithms. The performance of CSM on the CIFAR10 dataset is taken from (Qin et al., 2021), while the remaining results stem from our own simulations.
performance in relation to the state-of-the-art methods for the selected tasks. Additional information regarding these experiments, as well as further experiments, can be found in the Appendix. We also provide the code used for these experiments in the supplementary document.
## 5 Conclusion
In this article, we have presented the correlative information maximization (CorInfoMax) framework as a biologically plausible approach to constructing supervised neural network models. Our proposed method addresses the long-standing weight symmetry issue by providing a principled solution, which results in asymmetric forward and backward prediction networks. Furthermore, the CorInfoMax framework offers a normative approach for developing network models that incorporate multi-compartment pyramidal neuron models, aligning more closely with the experimental findings about the biological neural networks.
One potential limitation of our framework, shared by other supervised approaches, is the necessity for model parameter search to improve accuracy. We discuss this issue in detail in Appendix F.
|
2310.11245 | Neural network approach for a rapid prediction of metal-supported
borophene properties | We develop a high-dimensional neural network potential (NNP) to describe the
structural and energetic properties of borophene deposited on silver. This NNP
has the accuracy of DFT calculations while achieving computational speedups of
several orders of magnitude, allowing the study of extensive structures that
may reveal intriguing moir\'e patterns or surface corrugations. We describe an
efficient approach to constructing the training data set using an iterative
technique known as the "adaptive learning approach". The developed NNP
potential is able to produce, with an excellent agreement, the structure,
energy and forces of DFT. Finally, the calculated stability of various
borophene polymorphs, including those not initially included in the training
dataset, shows better stabilization for $\nu\sim0.1$ hole density, and in
particular for the allotrope $\alpha$ ($\nu=\frac{1}{9}$). The stability of
borophene on the metal surface is shown to depend on its orientation, implying
structural corrugation patterns that can only be observed from long time
simulations on extended systems. The NNP also demonstrates its ability to
simulate vibrational densities of states and produce realistic structures, with
simulated STM images closely matching the experimental ones. | Pierre Mignon, Abdul-Rahman Allouche, Neil Richard Innis, Colin Bousige | 2023-10-17T13:13:23Z | http://arxiv.org/abs/2310.11245v1 | # Neural network approach for a rapid prediction of metal-supported borophene properties
###### Abstract
We develop a high-dimensional neural network potential (NNP) to describe the structural and energetic properties of borophene deposited on silver. This NNP has the accuracy of DFT calculations while achieving computational speedups of several orders of magnitude, allowing the study of extensive structures that may reveal intriguing moire patterns or surface corrugations. We describe an efficient approach to constructing the training data set using an iterative technique known as the "adaptive learning approach". The developed NNP potential is able to produce, with an excellent agreement, the structure, energy and forces of DFT. Finally, the calculated stability of various borophene polymorphs, including those not initially included in the training dataset, shows better stabilization for \(\nu\sim 0.1\) hole density, and in particular for the allotrope \(\alpha\) (\(\nu=\nicefrac{{1}}{{9}}\)). The stability of borophene on the metal surface is shown to depend on its orientation, implying structural corrugation patterns that can only be observed from long time simulations on extended systems. The NNP also demonstrates its ability to simulate vibrational densities of states and produce realistic structures, with simulated STM images closely matching the experimental ones.
## 1 Introduction
The recent synthesis of borophene [1, 2], a one-atom-thick 2D crystal of boron with numerous polymorphs [1, 2, 3, 4, 5], has brought forward a missing piece of the 2D materials bestiary: a partially stable metallic 2D material. Thanks to its interesting properties, borophene may lead to promising applications such as efficient [6, 7], flexible [8, 9, 10], and transparent [6] electronics, optoelectronic devices [11, 12], or dense ionic batteries [13, 14, 15, 16, 17]. In addition to its numerous properties [18, 19, 20, 21], borophene shows a high degree of polymorphism, its allotropes being stabilized by the introduction of periodically distributed hexagonal holes into the triangular lattice structure [1, 2, 3, 4, 5] (Fig. 1). If there are infinite ways to arrange these hexagonal holes, cluster expansion methods [5, 22] have shown several structures (with
hole densities in the 10-15% range) with cohesive energies within a few meV/atom of the minimum. Interestingly, all of the above mentioned properties may be modulated by the degree of anisotropy found in the borophene polymorphs, which contributes to the great richness of this material. Thus, one may expect to tune some properties such as plasmon emission, electronic and thermal transport, or mechanical resistance [11, 18] by selectively synthesizing a given polymorph - note that most polymorphs show metallic behavior. [18] To date, eleven polymorphs of borophene have been experimentally identified [1, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36] and their occurrence has been shown to depend on the experimental synthesis conditions used: temperature, presence of annealing, gas flows, substrate orientation, etc. - which is very promising for our future ability to selectively synthesize a given polymorph for its desired properties. Although borophene's allotropes might be identified by Raman spectroscopy [37] (which is highly dependent on the structure and electronic state of the studied material), the identification of the synthesized allotrope on metal surfaces is not straightforward, as it is usually done by comparing an experimental scanning tunneling microscopy (STM) image with simulated ones. [1, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]
Overall, the experimental characterization of the structural properties and allotropic configuration of borophene is far from straightforward. Therefore, theoretical studies are essential to understand and predict its properties, and also to properly characterize the synthesized structures by comparing computational and experimental data. However, most theoretical studies use Density Functional Theory (DFT) calculations, which are very accurate but also very time consuming and limited in the size of the studied model. Some theoretical studies [8] on free-standing borophene have been carried out using ReaxFF, [38] a classical potential designed primarily for carbon-based systems and not for the boron-substrate interaction - which matters, since borophene is always grown on a metal such as silver, gold or copper. Therefore, in this work we have developed a high-dimensional Neural Network Potential (NNP) [39, 40, 41, 42] capable of describing the structural and energetic properties of borophene deposited on silver with the accuracy of DFT calculations while drastically reducing the computational time by several orders of magnitude. We focus on silver as it is the most commonly used substrate for borophene synthesis, [1, 2, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47] although our methodology can be easily transferred to other metals.
NNPs are a class of machine learning potentials that have been shown to accurately describe the properties of a wide variety of materials, [40, 42, 48, 49, 50, 51, 52, 53, 54] based on the idea that the potential energy surface (PES) of a system can be approximated by a sum of atomic environment contributions, which in turn can be approximated by a sum of smooth atomic density functions. The parameters of these smooth functions are then fitted to reproduce the DFT energies and forces of a training set of configurations. Once trained, the NNP can be used to perform highly accurate Molecular Dynamics (MD) simulations of extended systems at a fraction of the computational cost of DFT calculations. In the present study, this will allow the description of surface corrugation as a function of the borophene allotrope and surface orientation.
The article is organized as follows. First, we will present the development of the NNP, with a focus on the iterative construction of the training set through an adaptive learning procedure. Then, we will discuss the validity of the obtained potential through structural and energetic arguments, on extended models of various allotropes outside of the training set. Finally, we will show that this NNP can be used to study the stability of polymorphs, to simulate vibrational densities of states (VDOS), and to produce realistic structures whose simulated STM micrographs closely match experimental ones.
## 2 Models and methods
### Generation of borophene allotropes
To develop a transferable NNP usable on any borophene allotrope, several structures were generated on Ag substrate. As shown in Fig. 1, the primitive unit cells of borophene allotropes are not necessarily orthogonal or with angles of 60\({}^{\circ}\) (see
Fig. S1 for a description of the complete set of allotropes). To facilitate the accommodation of these structures on an fcc (111) or (100) substrate, an orthogonal unit cell was preferred. The principle is to generate \(N_{x}\times N_{y}\) replicas of the initial flat two-atom orthogonal cell (dimension \(2.81\times 1.62\) Å\({}^{2}\), with boron atoms located at \((0,0)\) and \((\nicefrac{{1}}{{2}},\nicefrac{{1}}{{2}})\)), and then remove a list of selected atoms from this supercell to obtain the desired allotrope. The sheet, which may be rotated by \(90\,^{\circ}\) around the \(z\) axis, is then placed on top of an orthogonal Ag(111) or Ag(100) slab replicated along the \(x\) and \(y\) directions to get the substrate cell parameters as close as possible to those of the borophene sheet. The equilibrium B-Ag distance given by DFT optimization is 2.45 Å, but this distance can be varied during structure generation. The borophene atomic positions in the surface plane are then multiplied by a correction factor to accommodate the underlying silver surface, whose lattice parameters determine those of the whole system - resulting in a slight deformation of the borophene lattice. A python command-line interface and a graphical user interface have been created to facilitate the generation and visualization of the structures, as well as the generation of VASP (or other formats) input files - they are freely available [55].
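As an illustration of this procedure, here is a minimal ASE-based sketch (the actual generator is the authors' own tool available on Zenodo [55]); the supercell size, the indices of the removed atoms and the slab dimensions are arbitrary placeholders.

```python
import numpy as np
from ase import Atoms
from ase.build import fcc111

# Flat two-atom orthogonal borophene cell (dimensions from the text, in Angstrom).
a, b = 2.81, 1.62
unit = Atoms('B2', positions=[(0, 0, 0), (a / 2, b / 2, 0)],
             cell=[a, b, 20.0], pbc=True)

sheet = unit.repeat((4, 6, 1))      # N_x x N_y replicas of the unit cell
del sheet[[5, 17, 29, 41]]          # remove selected atoms -> hexagonal holes

# Orthogonal Ag(111) slab; a = 4.085 Angstrom from the bulk optimization.
slab = fcc111('Ag', size=(4, 6, 3), a=4.085, orthogonal=True, vacuum=10.0)

# Rescale the in-plane boron coordinates to the slab cell (the "correction
# factor" of the text) and place the sheet 2.45 Angstrom above the top layer.
scale = slab.cell.lengths()[:2] / sheet.cell.lengths()[:2]
pos = sheet.get_positions()
pos[:, :2] *= scale
pos[:, 2] = slab.positions[:, 2].max() + 2.45
sheet.set_positions(pos)
system = slab + sheet               # combined structure, ready for VASP/LAMMPS
```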
### High-dimensional neural network potential
We used the high-dimensional feed-forward neural network potential developed by Behler and Parrinello [39] and implemented in the n2p2 v2.2.0 software [41, 42, 56]. In this method, the input layer corresponds to the geometric descriptors of the system, which are processed by the hidden layers of a neural network (usually two), each made up of a defined number of neurons. A neural network is defined for each element of the system, resulting in atomic energies and forces (output layers). Atomic environments, defined around each atom \(i\) by shells of radius \(r_{c}\) (cutoff radius), are then described by a vector of radial and angular symmetry functions, \(G_{i}\), which describe the local environment of each atom in the system in terms of 2- and 3-body densities [57]. The ensemble of \(G\) functions forms the input layer of the NNP. In this work we used Gaussian
Figure 1: The five different allotropes used in the training dataset, sorted by increasing hole density (\(\nu\)): \(\delta_{6}\), \(\alpha\), \(\beta_{12}\), \(\chi_{3}\) and \(\delta_{3}\). All allotropes are initially flat, except for \(\delta_{6}\) which shows a corrugation. The black lines represent the orthogonal unit cells used in the simulations, while the blue ones represent the primitive unit cells. The nomenclature of the different borophene allotropes is based on ref. [4]. The allotropes are deposited on top of a 3-layer thick Ag(111) or Ag(100) slab, and can be rotated by \(90\,^{\circ}\) around the \(z\) axis while keeping the Ag slab fixed, as shown for example with \(\beta_{12}\) on Ag(111).
radial functions given as:
\[G_{i}^{rad}=\sum_{j}\mathrm{e}^{-\eta\left(r_{ij}-r_{s}\right)^{2}}f_{c}(r_{ij}), \tag{1}\]
as well as narrow angular functions given as:
\[G_{i}^{ang}=2^{1-\zeta}\sum_{\begin{subarray}{c}j,k\neq i\\ j<k\end{subarray}}\left(1+\lambda\cos\theta_{ijk}\right)^{\zeta}\mathrm{e}^{- \eta\left(r_{ij}^{2}+r_{ik}^{2}+r_{jk}^{2}\right)}\times\] \[f_{c}(r_{ij})f_{c}(r_{ik})f_{c}(r_{jk}), \tag{2}\]
with \(f_{c}(r)\) the CT_POLY2 polynomial cutoff function, and where \(r_{ij}\) is the distance between atom \(i\) and atom \(j\), \(\theta_{ijk}\) the angle between \(\overrightarrow{r_{ij}}\) and \(\overrightarrow{r_{ik}}\), and \(\lambda\), \(\zeta\) and \(\eta\) are parameters. For a full description of the NNP used in n2p2, its symmetry functions and optimization procedures, we refer the reader to refs. [39, 40, 41, 42, 49, 56, 57].
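For illustration, a minimal NumPy sketch of these descriptors is given below. The exact CT_POLY2 polynomial used by n2p2 is assumed here to be the quintic form \(f(x)=x^{3}(x(15-6x)-10)+1\); readers should check the n2p2 documentation for the authoritative definition.

```python
import numpy as np

def f_cut(r, r_c):
    # Polynomial cutoff (assumed CT_POLY2 quintic): 1 at r=0, 0 at r=r_c,
    # with vanishing first and second derivatives at both ends.
    x = np.clip(r / r_c, 0.0, 1.0)
    return x**3 * (x * (15 - 6 * x) - 10) + 1

def g_radial(r_ij, eta, r_s, r_c):
    # Radial symmetry function of Eq. (1); r_ij holds all neighbor distances.
    return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * f_cut(r_ij, r_c))

def g_angular_term(r_ij, r_ik, r_jk, cos_theta, eta, lam, zeta, r_c):
    # One (j, k) contribution to the narrow angular function of Eq. (2).
    ang = (1 + lam * cos_theta) ** zeta
    rad = np.exp(-eta * (r_ij**2 + r_ik**2 + r_jk**2))
    fc = f_cut(r_ij, r_c) * f_cut(r_ik, r_c) * f_cut(r_jk, r_c)
    return 2.0 ** (1 - zeta) * ang * rad * fc

# Example: one atom with three neighbors, cutoff from the text (6.35 Angstrom).
d = np.array([1.7, 2.4, 2.5])
print(g_radial(d, eta=0.5, r_s=0.0, r_c=6.35))
```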
Here we used a set of 22 radial and 30 angular symmetry functions per element, resulting in an input dimension of 104 for the neural network. All parameters of the symmetry functions are provided in the SI along with an example input file. They have been adapted from those used in ref. [58] to describe copper clusters on a ZnO surface, as they should be well suited to the present similar but simpler system. Note that in all cases the cutoff radius was set to 6.35 Å: it is large enough to include all atoms in the first coordination sphere of each atom, but small enough to keep the computational cost reasonable.
Unless otherwise noted, we used a neural network with 2 hidden layers of 20 neurons each. The softplus and linear activation functions were used for the hidden and output layers, respectively. The NNPs were optimized using the multi-stream Kalman filter method,[41] which allows for very fast convergence, and the objective functions included both energies and forces. The dataset was divided into two subsets for training (90 %) and validation (10 %).
### Molecular Dynamics with Neural Network Potentials
The MD-NNP simulations were performed using the LAMMPS simulation software[59] (version 27May2021) with the n2p2[56] interface implemented in the LAMMPS-NNP package.[42] In almost all cases (otherwise noted), simulations are run with a timestep of 0.1 fs and a Nosé-Hoover thermostat with a relaxation time of 100 timesteps. The Verlet algorithm[60] is used for time integration. Periodic boundary conditions are applied in all directions. Note that if the MD simulation encounters a structure outside the range of structures represented in the training dataset, the program will issue an extrapolation warning (EW). These EWs are to be avoided because they signal that the simulation may be heading toward the generation of unrealistic structures - the NNP is good at interpolation but bad at extrapolation. In this case, the simulations are usually stopped and the structures raising EW are kept for later inclusion in the training dataset (see details below).
During the testing and renewal phase of the NNP dataset construction (detailed below), dynamics are run for 20 ps with a temperature ramp from 200 K up to 1,000 K in either the NVT or NPT ensembles, and atomic positions are recorded every 20 fs. The simulations are set to stop when 800 EW have been raised, which corresponds to a maximum of 4 structures having raised an EW per simulation. For these simulations, all atoms are free to move.
For the vibrational analysis, the system is equilibrated for 10 ps in the NVT ensemble before the production run in the NVE ensemble. The latter is run for 50 ps, and atomic positions and velocities are recorded every 1 fs. The vibrational densities of states (VDOS) are then calculated from the square norm of the Fourier transform of the velocities using the pdos function from the pwtools Python package.[61] For these simulations, the bottom two layers of the Ag substrate are fixed to mimic the presence of a substrate.
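In essence, the VDOS computation amounts to the following sketch, a simplified stand-in for pwtools' pdos in which mass weighting and windowing are omitted:

```python
import numpy as np

def vdos(velocities, dt_fs):
    """VDOS from MD velocities of shape (n_steps, n_atoms, 3), sampled
    every dt_fs femtoseconds. Returns frequencies in cm^-1 and a normalized
    spectrum: the squared modulus of the Fourier transform of the
    velocities, summed over atoms and Cartesian components."""
    v = velocities - velocities.mean(axis=0)       # remove any drift
    spec = np.abs(np.fft.rfft(v, axis=0)) ** 2
    dos = spec.sum(axis=(1, 2))
    freq_hz = np.fft.rfftfreq(velocities.shape[0], d=dt_fs * 1e-15)
    freq_cm = freq_hz / 2.99792458e10              # Hz -> cm^-1
    return freq_cm, dos / np.trapz(dos, freq_cm)

# Example with white-noise velocities as a placeholder for a LAMMPS dump.
v = np.random.default_rng(2).normal(size=(50000, 100, 3))
freq, dos = vdos(v, dt_fs=1.0)
```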
Sample LAMMPS input files and data handling scripts are freely available on Zenodo.[55]
### Construction of the training dataset
Building the most representative dataset while avoiding over-representation of given atomic configurations and keeping the computation time (and thus the dataset size) as small as possible is actually the most crucial and difficult part of NNP construction. For this purpose, we have implemented an iterative construction algorithm based on the adaptive learning procedure [40, 52, 53, 62, 63], which allows the training dataset to be built by adding only selected structures while keeping the number of DFT calculations to a minimum. In the following, the "dataset" refers to the selected structures, associated with their DFT-computed forces and energies, providing the references used for training the NNP. The "stock library" is a set of available structures that might be integrated in the dataset after computing energies and forces at the DFT level. We note here that structures are integrated in the dataset only if their energies are negative and the norms of the force vectors are below 25 eV/Å. This filtering is performed each time a new structure is calculated with DFT to ensure that no structure with unrealistic energies or forces is included in the dataset.
The workflow of the iterative construction algorithm is shown in Fig. 2. It consists of an initial phase followed by an alternation of two phases: i) the "adaptive learning phase", where the dataset is iteratively enriched by selecting new structures from the stock library, and ii) the "testing and renewal phase", where the refined NNP is used to perform a series of MD simulations that test its validity and renew the stock library with "fresh" structures. Thanks to the adaptive learning procedure, the number of DFT calculations is kept to a minimum and the dataset is enriched with only the most relevant structures.
### Initialization
The initial dataset and stock library are constructed from the 5 allotropes shown in Fig. 1. These five allotropes were chosen for the training dataset because \(\alpha\), \(\beta_{12}\), and \(\chi_{3}\) are the most commonly reported allotropes in the experimental literature, and \(\delta_{6}\) and \(\delta_{3}\) introduce cases where boron atoms are highly or poorly coordinated, respectively. They are deposited on Ag(111) and Ag(100) with supercell sizes of 1\(\times\)1, 1\(\times\)2, 2\(\times\)1 and 2\(\times\)2 orthogonal unit cells while keeping the number of atoms below 40. From these structures, small random atomic displacements of at most 0.2 Å are applied (structural details of these structures are given in Tab. S1), leading to 100 starting structures that are then computed at the DFT level to create the initial dataset. These structures are also used to build an initial stock library of 15k structures by randomly shifting the atoms by a maximum of 0.2 Å and by expanding or compressing the cells by a maximum of 5 %.
### Phase 1: adaptive learning
The adaptive learning phase then consists in selecting structures from the stock library based on the energy difference calculated from two ghost NNPs with different parameters: two hidden layers each, one NNP with 20 neurons per layer and the other with 15 neurons (see Fig. 2). For a given structure, a large energy difference means that the PES is not well represented by the existing dataset and the structure can potentially be selected and added to the dataset. Thus, the energies of all structures in the stock library are computed with these two ghost NNPs, and the 20 structures with the largest energy difference are then selected as new structures to enrich the dataset: DFT energies and forces are thus calculated only for these 20 structures. The two ghost NNPs are then retrained with this enriched dataset, and the process is repeated until the mean and standard deviation of the energy difference between the two NNPs converge over the entire stock library. The training of these ghost NNPs is performed on a small number of epochs (typically 10) to save computational time - thanks to the Kalman filter method [41], very fast convergence is achieved anyway. The training of the two NNPs, the energy calculations and the DFT computations can be distributed over several nodes and run in parallel on a computing cluster, which makes the whole process quite efficient. Since the dataset should be independent of the shape of the NNPs used, we can actually perform this procedure with more than two NNPs (as long as they are well designed) and compare them two-by-two; convergence is then achieved faster. This whole procedure is controlled by an in-house Python script that is included in the Zenodo archive [55].
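The selection step itself reduces to a few lines; below is a hedged sketch in which `energies_a` and `energies_b` stand for the per-structure predictions of the two ghost NNPs over the stock library (the names are ours, not those of the in-house script):

```python
import numpy as np

def select_for_dft(energies_a, energies_b, n_select=20):
    # Query-by-committee: the structures on which the two ghost NNPs
    # disagree most are those whose PES region is least constrained
    # by the current dataset.
    gap = np.abs(np.asarray(energies_a) - np.asarray(energies_b))
    order = np.argsort(gap)[::-1]          # largest disagreement first
    return order[:n_select], gap.mean(), gap.std()

# The enrichment loop stops once gap.mean() and gap.std() no longer change
# significantly from one iteration to the next over the whole stock library.
```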
### Phase 2: testing and renewal of the stock library
After the adaptive learning phase, we enter the testing and renewal phase. We train one of the ghost NNPs above until convergence, and use it to perform 50 test MD simulations on the five training allotropes deposited on the two substrate orientations, in both the NVT and NPT ensembles (see Tab. S1). These MD simulations consist of heating ramps that continuously heat the system from 200 K to 1,000 K in either the NVT or NPT ensemble. The goal here is to sample a wide variety of configurations, including high-energy ones, to ensure that the NNP is able to describe the entire PES. If too many EW are found, it means that the corresponding atomic configurations are not well represented in the training dataset. These simulations are then stopped, and the structures that generated EW (_i.e._ the last 4 in the trajectory) are automatically computed with DFT and included in the new dataset. The rest of the trajectories are then concatenated into a new stock library along with their 5% compression and dilatation analogs, and we enter the adaptive learning phase again.
This alternation of two phases is repeated until no EW is found on the 50 test MD simulations, which in our case happened after 5 iterations when the training dataset reached 9281 structures. The initial stock database was increased from the first 15k random structures to \(\sim 45\)k structures, then to \(\sim 85\)k structures, and on to 150k structures in the final step (the maximum without EW). Note that we could have used the testing and renewal phase to create the initial stock database, but training the NNP for MD simulations on only 100 structures makes no sense.
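Putting the two phases together, the whole construction can be summarized by the driver loop below. All callables (`dft`, `train_ghosts`, `run_test_md`, `converged`) are hypothetical placeholders for the external steps (VASP single points, n2p2 training, LAMMPS heating ramps, the convergence test on the ghost-NNP energy gap), not the authors' actual scripts:

```python
import numpy as np

def build_training_dataset(dft, train_ghosts, run_test_md, converged,
                           stock, dataset, n_per_round=20):
    # Sketch of the two-phase iterative construction of Fig. 2.
    while True:
        # Phase 1: adaptive learning over the current stock library.
        while True:
            nnp_a, nnp_b = train_ghosts(dataset)           # two ghost NNPs
            gap = np.abs(np.array([nnp_a(s) - nnp_b(s) for s in stock]))
            if converged(gap):                             # mean/std stabilized
                break
            worst = np.argsort(gap)[-n_per_round:]         # largest disagreement
            dataset = dataset + [dft(stock[i]) for i in worst]
        # Phase 2: testing and renewal with heating-ramp test MD runs.
        ew_structures, new_stock = run_test_md(dataset)
        if not ew_structures:                              # no EW raised: done
            return dataset
        dataset = dataset + [dft(s) for s in ew_structures]
        stock = new_stock                                  # fresh structures (+/- 5%)
```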
The goal of the adaptive procedure is to enrich the dataset with new structures describing PES regions that are not yet well represented. As such, at each new enrichment step, the NNP is able to accurately reproduce/predict energies and forces for new atomic configurations (_i.e._ interatomic distances and angles). Indeed, this is well confirmed by the evolution and broadening of the distribution of atomic configurations (B-B, Ag-Ag and B-Ag distances) as the dataset size increases (see Fig. S2: the smoothing and broadening of the peaks, especially at small distances, allows repulsive interactions to be better described). This is clear evidence
Figure 2: Workflow of the iterative construction algorithm for building the dataset. The procedure is stopped when no extrapolation warnings are found on a set of 50 different test MD simulations on the five training allotropes, going from 200 K to 1,000 K. The “datasets”, framed with a full blue line, are an ensemble of structures associated with their DFT-computed forces and energies. The “stock libraries”, framed with a dashed yellow line, are an ensemble of atomic positions.
that the adaptive learning procedure is proceeding as intended. We can therefore conclude that the adaptive learning procedure is very efficient, since it allows i) building the most representative dataset while keeping the computational time (number of DFT calculations) as low as possible, and ii) defining a clear decision threshold for when to stop enriching the dataset.
### Final training
Once the dataset is built, the final refined NNP (two hidden layers of 20 neurons each) is trained, and convergence is reached after 77 epochs. The final energy RMSE for training is 26 meV/atom (28 meV/atom for testing), and the final force RMSE for training and testing is 508 meV/Å (see Fig. S3). We note here that these values are unusually high for an NNP, but this is because we are using a very small dataset containing very different structures, some of which are highly energetic because of the geometric distortions induced by cell compression/expansion on high-temperature (up to 1,000 K) systems. As detailed in the "distortion" column of Tab. S1, the small unit cells used introduce a large amount of strain in the borophene structures (from 3 to 54%), which is not realistic - but not a problem per se, as it allows the limits of repulsive and attractive interactions in the NNP atomic potential to be well described. We were forced to use such small cells to keep the computation time reasonable. We will see below that applying this NNP to more realistic structures leads to much better RMSEs. This argument is supported by the fact that much lower MAEs are obtained for energies and forces, since the MAE gives less weight to outliers than the RMSE. Indeed, we obtain MAEs for energies and forces of 5.4 meV/atom and 203 meV/Å for training, and 7.5 meV/atom and 213 meV/Å for testing - much more reasonable values. The rather large RMSE observed is thus caused by the occurrence of a few high-energy-limit structures in the dataset that are nevertheless well described by the NNP.
### First-principles calculations
The NNP has been developed on the basis of reference data (energies and forces) from DFT calculations performed with the Vienna Ab initio Simulation Package (VASP) [64, 65, 66, 67] using the projector augmented wave (PAW) method to describe ionic cores and valence electrons through a plane wave basis [68, 69]. The Perdew-Burke-Ernzerhof (PBE) form of the generalized gradient approximation (GGA) was used for the exchange and correlation functional [70, 71]. The cutoff energy was fixed at 700 eV. The bulk Ag unit cell was first optimized using a 9\(\times\)9\(\times\)9 Monkhorst-Pack \(\Gamma\)-centered mesh to sample the Brillouin zone; this resulted in the cell parameter \(a=4.085\) Å, from which the (111) and (100) surface slabs were constructed. Single point computations were performed on NNP training structures to provide the energies and forces. A k-point mesh of 7\(\times\)7\(\times\)1 and a cutoff energy of 700 eV were chosen from the evaluation of the error computed on energies and forces for different k-point meshes (3\(\times\)3\(\times\)1, 5\(\times\)5\(\times\)1, 7\(\times\)7\(\times\)1, 9\(\times\)9\(\times\)1, 11\(\times\)11\(\times\)1) and cutoff energies (400, 500, 600, 700 eV). The selected parameters allowed accurate energy and force calculations within a reasonable computational time (see Figs. S4-S5). All DFT calculations were performed by applying the D3 correction [72] to the energies and forces, in order to take into account the van der Waals interactions, which are of great importance in the present system.
The MD simulations used in the validation section are performed on large cells (\(\sim 25\times 25\times 25\) Å\({}^{3}\)) containing \(\sim 500\) atoms. Thus, the Brillouin zone sampling could be limited to the \(\Gamma\) point to keep the computational cost reasonable. The energy cutoff is also reduced to the standard value of 400 eV. For these MD simulations, the thermalization is performed in the NVT ensemble (scaling velocities) for 0.5 ps at 300 K. This allows an average temperature of 300 K to be maintained during the 5 ps production run in the NVE ensemble. A simulation time step of 1 fs is used. The equilibrium of the system is checked and confirmed by verifying that the energy of the system remains stable during the equilibration period. Also, the temperature of the system remains stable throughout the production run for all simulations, confirming that the system is well pre-equilibrated.
## 3 Results and discussion
### Validation of the model
The refined NNP is validated by structural and energetic comparison with DFT calculations. MD simulations were performed on six borophene allotropes deposited on Ag(111), namely \(\alpha\), \(\alpha_{1}\), \(\beta_{12}\), \(\beta_{13}\), \(\chi_{2}\), and \(\chi_{3}\) (see Fig. S1 for their structure), using either DFT or the NNP, with the parameters described in the Methods section. The cell size is set to \(\sim 25\) Å per side, which is 3 to 5 times larger than those used for the structures in the training dataset, resulting in cells containing \(\sim 500\) atoms each. This allows the stress on the borophene sheets to be reduced with respect to the smaller structures in the training dataset, since a maximum of 3.5% adjustment of the borophene supercell dimensions on the replicated substrate unit cell has been applied - the exact cell size depends on the allotrope (see Tab. S2 for all structural details). We recall here that only the \(\alpha\), \(\beta_{12}\) and \(\chi_{3}\) structures are included in the training dataset, with a maximum of 40 atoms per structure (see Tab. S1). In both NNP and DFT cases, the MD consists of a 0.5 ps NVT thermalization at 300 K followed by a 5 ps NVE production run (1 fs time step in both cases), and snapshots are saved every 1 fs. The initial structure for the MD-NNP is taken as the first one from the MD-DFT production run. We finally note that each MD-DFT simulation took about 5 days to run on four nodes with 40 cores each, while the MD-NNP simulations ran in less than one hour on a single node of 40 cores.
The energy differences between the NNP and DFT along these trajectories (Tab. 1) remain within the accepted error range of DFT methods, which is the most important aspect.
Regarding the forces, their RMSEs (Tab. 1) are much lower than the training ones and lie in the range of the generally accepted force RMSE for a reliable NNP [58]. This is due to the fact that the structures encountered along the MD-DFT are all physically sound and less stressed than those present in the training dataset. The detailed time evolution of the norm of the force vectors for a few atoms along the MD-DFT trajectories computed with DFT and NNP can be found in Fig. S9. In addition to the well-reproduced shape of the PES from the computed energies, this shows that MD-NNP simulations allow the phase space of any system to be explored with an accuracy comparable to that of DFT.
In conclusion, we have shown that the NNP is able to reproduce the DFT results very accurately in terms of structure, energy and forces, both on the allotropes on which it was trained and on others - and the training was performed on structures with \(\sim 10\) times fewer atoms than the ones tested here. This validates the NNP and allows us to
\begin{table}
\begin{tabular}{c|c|c} Allotrope & Energies RMSE\({}^{*}\) [meV/at] & Forces RMSE [meV/Å] \\ \hline \(\alpha\) & 1.22 & 261 \\ \(\beta_{12}\) & 0.807 & 132 \\ \(\chi_{3}\) & 0.774 & 165 \\ \hline \(\alpha_{1}\) & 1.47 & 299 \\ \(\beta_{13}\) & 1.50 & 337 \\ \(\chi_{2}\) & 0.929 & 304 \\ \end{tabular}
\end{table}
Table 1: Resulting energy RMSE\({}^{*}\) and force RMSE for the six test borophene allotropes on Ag(111) along their MD-DFT trajectories. The allotropes in the first group are part of the training dataset; the others are not. The energy RMSE\({}^{*}\) is calculated after correcting the NNP energies by the MAE between the NNP and DFT energies (about 10 meV/at).
Figure 3: Comparison of partial \(g(r)\) calculated on NVE molecular dynamics trajectories at 300 K using DFT (black) or the NNP (orange), for three borophene allotropes on Ag(111): \(\alpha_{1}\), \(\beta_{13}\), and \(\chi_{2}\). These allotropes are not included in the NNP training dataset. In all cases, the Ag substrate is composed of three layers, with the bottom two fixed and the lateral cell size \(\sim 25\) Å (see Tab. S2 for all structural details).
use it to perform MD simulations on large systems with allotropes it was not trained on, which we will do in the next section.
### Stability analysis
Using the NNP, we performed a stability analysis of 19 different borophene allotropes on Ag(111). Figure 4 shows the average potential energies of the boron atoms for each of these allotropes as a function of their hole density and angular configuration (\(0\,^{\circ}\) or \(90\,^{\circ}\), as defined in Fig. 1). These energies are averaged over a 5 ps NVE production run after 10 ps thermalization at 300 K: the sheets have thus been allowed to relax on the substrate and buckle out of plane. Two of the tested structures are omitted in Fig. 4 because of their instability: \(\delta_{3}\) rearranges rapidly during thermalization into a disordered phase with regions resembling \(\chi_{3}\) and others with large holes, and \(\alpha_{2}\) tends to crumple upon itself. It has to be noticed that these two allotropes have never been reported on silver.
Very interestingly, Fig. 4 shows that the most stable structures are those with \(\nu\sim 0.1\), and especially the allotrope \(\alpha\) (\(\nu=\nicefrac{{1}}{{9}}\)). In particular, the minimum stability profiles (solid/dashed lines) are in very good agreement with those obtained from static DFT calculations and cluster expansion methods [5, 22]. Indeed, a minimum is also found for \(\nu=\nicefrac{{1}}{{9}}\) for free-standing or gold-supported borophene - it shifts to \(\nu=\nicefrac{{1}}{{6}}\) for copper (cf. the inset of Fig. 4).
It has to be noticed that in our simulations the cell size is much larger and the stability values are averaged over 300 K MD simulations, which allows the corrugation of the borophene sheet above the silver surface to be described. This explains the loss of stability for certain allotropes (\(\beta_{11}\), \(\chi_{4}\), \(\beta_{13}\), \(\delta_{5}\), \(\chi_{2}\)) lying rather far above the minimum stability profile: it is induced by the dynamic deformation of the borophene structure, which was not taken into account in static DFT calculations. Thus, in addition to showing that the stability minimum at \(\nu\sim 0.1\) is respected, our simulations reveal and describe particular dynamic structural accommodations of given borophene allotropes upon interaction with the metal surface.
Experimentally, the most commonly reported allotropes on Ag(111) are \(\beta_{12}\) (\(\nu=\nicefrac{{1}}{{6}}\)) and \(\chi_{3}\) (\(\nu=\nicefrac{{1}}{{5}}\)); however, it is possible to favor one or the other by playing with annealing times and temperatures [1, 2, 33, 34, 43, 44, 45, 46, 47], showing that these allotropes are metastable. We recall that our simulation results are obtained from MD at 300 K, which does not take into account the synthesis pathway, and they are also performed on a limited lateral size, which naturally introduces stress in the borophene lattice. It would thus be interesting to anneal these allotropes at higher temperatures and/or over longer periods to see whether the \(\alpha\) one can be obtained.
We note here that the good agreement between our NNP and the DFT and cluster expansion methods [5, 22] is a further confirmation that our NNP is
Figure 4: Average potential energies of borophene sheets for 17 different stable allotropes on Ag(111) as a function of their hole density \(\nu\) after MD relaxation. The full circles are for structures with a \(0\,^{\circ}\) rotation with respect to the substrate, the empty squares for a \(90\,^{\circ}\) rotation. All structures have a lattice dimension of at least 25\(\times\)25\(\times\)25 Å\({}^{3}\) and contain about 500 atoms (the lateral size varies from structure to structure in order to keep the borophene distortion below 4 % – details of the structures are given in the SI). The dashed line is a guide to the eye, highlighting the \((\nu-\nicefrac{{1}}{{9}})^{2}\) trend. The inset reproduces data from ref. [22] and shows the borophene potential energies of free-standing, gold-supported, and copper-supported borophene allotropes as a function of \(\nu\). The full symbols correspond to DFT calculations, while the empty ones come from the cluster expansion method.
sound and can be used reliably to describe the arrangement of B atoms on the surface regardless of the hole density, as well as to compute the relative energies of different allotropes. Moreover, we emphasize that all interatomic interactions are very well represented by the NNP - which was not a given, considering that B-Ag is a non-bonded, _i.e._ long-range interaction close to the cutoff limit.
From Fig. 4, some allotropes show a large difference in stability upon boron sheet rotation (difference between circles and squares for a given allotrope); this is particularly the case for \(\delta_{4}\) and \(\beta_{11}\). This difference is however correlated neither to the hole density (see Fig. S10) nor to the change in borophene sheet distortion due to the rotation (see Tabs. S4 and S5; \(\delta_{4}\) shows almost the lowest change). Therefore, this shows that for stability evaluations, various configurations should always be considered when seeking to identify a given allotrope. Moreover, it is observed that there logically exists a correlation between hole density and B sheet corrugation over the Ag surface (see Fig. S11), showing a flatter borophene layer for increasing hole density. This is however observed only for the 0 \({}^{\circ}\) configurations, for which a positive distortion of the B sheet has been applied to match the Ag cell dimensions. In the case of the 90 \({}^{\circ}\) rotated configurations, the correlation between hole density and surface corrugation is not respected. Indeed, for these structures, the borophene sheet is always more corrugated compared to the 0 \({}^{\circ}\) configurations, which is due to a compressing distortion of the B sheet induced by the matching. Nevertheless, taken all together, these data show that the borophene stability above the metallic surface is correlated to the stability of the free borophene allotrope (computed DFT values [5, 22]) and to the hole density, but it is also tuned by the geometrical rearrangement of the B sheet on the surface, which significantly modulates its stabilization.
### Vibrational analysis
The vibrational density of states (VDOS) of the boron and silver atoms for each allotrope in its 0 \({}^{\circ}\) and 90 \({}^{\circ}\) rotated configurations has been evaluated (Fig. 5). 50 ps long MD-NNP simulations have been carried out in the NVE ensemble on the 17 stable borophene allotropes on Ag(111) (see Fig. S1 and Tab. S4 for structural details), after a 10 ps NVT thermalization at 300 K. Again, in all cases, only the Ag atoms in the top layer were allowed to move.
First, the silver VDOS are very similar for all structures, with two peaks at about 100 cm\({}^{-1}\) and 150 cm\({}^{-1}\) (with small variations depending on the allotrope), the low energy one being about twice the intensity of the other. This general shape, independent of the 0 \({}^{\circ}\) or 90 \({}^{\circ}\) configuration, is close to the expected experimental values for bulk silver as measured by inelastic neutron scattering at \(\sim 125\) cm\({}^{-1}\) and \(\sim 180\) cm\({}^{-1}\) with the same relative intensities [73, 74]. This further supports the validity of this approach, and we can assume that
Figure 5: Comparison of the normalized vibrational densities of states (VDOS) for the 17 stable allotropes, as calculated from the silver- or boron-only atomic velocities obtained with an MD-NNP. For each allotrope, the lower and upper curves correspond to the 0 \({}^{\circ}\) and 90 \({}^{\circ}\) configurations, respectively. Details of the structures are given in the SI. The allotropes are ordered by increasing \(\nu\) from bottom to top. Thermalization is performed in the NVT ensemble at 300 K, while production is performed in the NVE ensemble.
the NNP is capable of performing a reliable vibrational analysis on this system.
Figure 5 gathers the VDOS of the boron atoms for the 17 stable allotropes in both 0 \({}^{\circ}\) and 90 \({}^{\circ}\) configurations. Let us first consider the differences between the allotropes for a single angular configuration, say 0 \({}^{\circ}\). We see that most allotropes have very different vibrational profiles with well-defined peaks. The allotropes with the broadest features are the most corrugated ones (see Fig. S11 for the \(z\) profiles of the different allotropes). This result is very interesting because it suggests that vibrational analysis could be used to identify the structure of a borophene film on a substrate, since the vibrational profile of the boron atoms should be very different from one allotrope to another. Now let us look at the differences between the 0 \({}^{\circ}\) and 90 \({}^{\circ}\) configurations. We can see in Fig. 5 that the VDOS for the 90 \({}^{\circ}\) configuration are generally broader than for the 0 \({}^{\circ}\) one, with less well-defined peaks. Also, the general shape of the VDOS is often shifted in frequency between the two configurations. This frequency shift can be explained by the difference in borophene strain induced by the different borophene distortions in the two configurations (see Tab. S4 and Tab. S5). For the allotropes where the features are broader in the 90 \({}^{\circ}\) configuration, this is probably caused by a more pronounced corrugation in this configuration (see Fig. S11). These results show that vibrational analysis can be used to identify borophene allotropes as well as their angular configuration on a substrate, since the vibrational profiles depend on these parameters.
### STM images of MD-obtained structures
Using the structures obtained from MD-NNP simulations, it is then possible to compute simulated STM images of the borophene layers on the Ag substrate in any configuration from DFT calculations. Preliminary benchmarking calculations were carried out in order to check the effect of the number of silver layers as well as the number of k-points used in the DFT calculation. The results show a low sensitivity of the produced STM images and of the electronic density of the structure to these parameters (Figs. S13-S14). As such, simulated STM images were obtained from single-point DFT energy calculations at the \(\Gamma\) point on a structure containing a single substrate layer in addition to the borophene sheet, in the constant current mode and with a tip placed 2 Å above the top atom.
Figure 6 shows the simulated STM image of the \(\beta_{12}\) allotrope on Ag(111) in the 90 \({}^{\circ}\) configuration and compares it with the experimental STM image of the undulated phase from ref. [75]. One can see here a very good agreement between the experimental and simulated STM images, showing that the MD-NNP simulations are able to reproduce well the periodic undulated phase observed experimentally - without having to introduce a surface deformation of
Figure 6: (a) Simulated STM image of the \(\beta_{12}\) allotrope on Ag(111) surface in the 90 \({}^{\circ}\) configuration, and (b) experimental STM image of the undulated phase from ref. [75] (reproduced with permission. Copyright 2023 American Chemical Society.). At the top of panel (a), the boron atoms are colored according to their height, from blue (low) to red (high). This undulated configuration was adopted naturally by the NNP while the two bottom Ag layers were kept fixed.
the silver slab [75]. This suggests that the structures produced by the NNP are very close to the experimental ones, which may be of great help for allotrope identification. It has to be noticed that this is made possible by the large system models that MD-NNP simulations allow. The use of large lateral sizes is very interesting as it allows the formation of moiré patterns and possibly large-wavelength corrugation patterns. This is very encouraging, as it means that the NNP can be used quite easily to produce large STM images of borophene on metals in any configuration and at any temperature, which is a very useful tool for comparison with experimental images for allotrope identification.
## 4 Conclusions
In this work, we have developed a neural network potential for borophene on a silver substrate. A robust iterative algorithm has been developed to construct the NNP training database, based on the "adaptive learning" procedure, which is very general and can be applied to any system. The resulting NNP is able to reproduce the DFT results very accurately in terms of structure, energy and forces on large structures, both on allotropes that are part of the training set and on others. This validates the NNP and allows us to use it to perform long-time MD simulations on extended systems with any borophene allotrope, with the accuracy of DFT at a fraction of its computational cost. The stability analysis of 19 different borophene allotropes on Ag(111) shows that the most stable structures are those with \(\nu\sim 0.1\), and in particular the allotrope \(\alpha\) (\(\nu=\nicefrac{{1}}{{9}}\)). We observe that the stability of borophene on the metal surface also depends on its orientation, implying structural corrugation patterns. The vibrational analysis of the 17 stable allotropes shows that the vibrational profiles of the boron atoms are very different from one allotrope to another, and also depend on the angular configuration of the borophene sheet on the substrate. Finally, we show that the NNP can be used to produce large-scale realistic structures of borophene on metals in any configuration and at any temperature, from which large STM images can be simulated - a very useful tool for comparison with experimental images for allotrope identification. In the future, this will be used to build an image database dedicated to the characterization and identification of experimental structures. In addition, further work will focus on extending this potential to other metals as well as to multilayer borophene.
## Data Availability Statement
The data and code that support the findings of this study are openly available on GitHub and Zenodo [55].
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Author's contributions
P.M. contributed to all steps of the study, focusing more on the ab initio calculations. A.R.A. advised the study concerning the ML parts. N.R.I. advised the study on borophene chemistry and structure. C.B. designed and carried out the study. C.B. and P.M. wrote the manuscript. All authors discussed and revised the manuscript.
## Acknowledgements
C.B. acknowledges support by the French National Research Agency grant (ANR-21-CE09-0001-01).
|
2302.05021 | ShapeWordNet: An Interpretable Shapelet Neural Network for Physiological
Signal Classification | Physiological signals are high-dimensional time series of great practical
values in medical and healthcare applications. However, previous works on their
classification fail to obtain promising results due to the intractable data
characteristics and the severe label sparsity issues. In this paper, we try to
address these challenges by proposing a more effective and interpretable scheme
tailored for the physiological signal classification task. Specifically, we
exploit the time series shapelets to extract prominent local patterns and
perform interpretable sequence discretization to distill the whole-series
information. By doing so, the long and continuous raw signals are compressed
into short and discrete token sequences, where both local patterns and global
contexts are well preserved. Moreover, to alleviate the label sparsity issue, a
multi-scale transformation strategy is adaptively designed to augment data and
a cross-scale contrastive learning mechanism is accordingly devised to guide
the model training. We name our method ShapeWordNet and conduct extensive
experiments on three real-world datasets to investigate its effectiveness.
Comparative results show that our proposed scheme remarkably outperforms four
categories of cutting-edge approaches. Visualization analysis further witnesses
the good interpretability of the sequence discretization idea based on
shapelets. | Wenqiang He, Mingyue Cheng, Qi Liu, Zhi Li | 2023-02-10T02:30:31Z | http://arxiv.org/abs/2302.05021v1 | # ShapeWordNet: An Interpretable Shapelet Neural Network for Physiological Signal Classification
###### Abstract
Physiological signals are high-dimensional time series of great practical value in medical and healthcare applications. However, previous works on their classification fail to obtain promising results due to the intractable data characteristics and the severe label sparsity issues. In this paper, we try to address these challenges by proposing a more effective and interpretable scheme tailored for the physiological signal classification task. Specifically, we exploit time series shapelets to extract prominent local patterns and perform interpretable sequence discretization to distill the whole-series information. By doing so, the long and continuous raw signals are compressed into short and discrete token sequences, where both local patterns and global contexts are well preserved. Moreover, to alleviate the label sparsity issue, a multi-scale transformation strategy is adaptively designed to augment data and a cross-scale contrastive learning mechanism is accordingly devised to guide the model training. We name our method ShapeWordNet and conduct extensive experiments on three real-world datasets to investigate its effectiveness. Comparative results show that our proposed scheme remarkably outperforms four categories of cutting-edge approaches. Visualization analysis further demonstrates the good interpretability of the sequence discretization idea based on shapelets.
Keywords: Physiological Signal Classification · Shapelet-based Sequence Discretization · Interpretability · Contrastive Learning
## 1 Introduction
Physiological signals are an invaluable type of medical time series with broad applications in healthcare domains such as emotion recognition, seizure detection and heartbeat classification [11]. To effectively indicate the health state of the human body, relevant information is often recorded simultaneously by multiple sensors through high-frequency, long-duration sampling. For example, an electrocardiogram (ECG) signal recording can be sampled in 12 channels at a
frequency of 500 Hz for at least 10 seconds to diagnose the cardiovascular condition of a patient. Nowadays, the fast progress of IoT is spurring an explosive increase in physiological signals, making the traditional practice of manually classifying such high-dimensional data both costly and inefficient [21]. Hence, recent research has turned to artificial intelligence and machine learning for technical assistance [6].
Given the temporal data property, the physiological signal classification (PSC) task is often viewed as a typical time series classification (TSC) problem in the machine learning field, where a plethora of TSC methods have been proposed that can be roughly grouped into two categories: classical algorithms and deep learning (DL) based approaches [3, 17]. Classical TSC methods focus on explainable feature engineering, where distinguishable features are designed and extracted from various perspectives. For instance, the "gold standard" 1-NN Dynamic Time Warping (DTW) paradigm [10, 22] concentrates on comparing the similarity of global patterns, while the shapelet-based approaches [33, 14, 23] aim at mining discriminative subsequences that maximally represent a class. Nevertheless, despite being effective on small-scale and univariate datasets, classical methods do not scale well to the PSC task due to the difficulty of large-space feature selection and their inability to capture multivariate interactions [31].
In the past few years, deep learning based methods have achieved remarkable advances in the TSC field; they avoid handcrafted feature design and laborious feature selection by directly learning informative and low-dimensional representations from raw data [34, 17]. However, DL approaches require a large amount of labelled data to supervise model training, which is quite limited in the PSC scenario and may lead to performance degradation. Besides, DL models provide little insight into the decisive factors, and such black-box behavior would impair their credibility in the healthcare field [1, 24]. To overcome the above challenges and better adapt to the PSC task, one natural idea is to make full use of the strengths of both the classical and the deep learning based methods [34].
In this article, we propose a two-stage model named ShapeWordNet to provide a more effective and interpretable solution to the PSC problem. Specifically, we take advantage of time series shapelets to extract discriminative local patterns as elementary "words" that carry certain class-relevant "semantics" of the original data. Then, based on these explainable words, we discretize the point-wise raw signal into a word-wise token sequence dubbed ShapeSentence for whole-series information distillation, in which we believe both significant local patterns and global contexts are well preserved. Moreover, in order to alleviate the label sparsity issue, the large shapelet candidate space is adaptively leveraged to augment the raw data in a multi-scale transformation way, and a cross-scale contrastive learning mechanism is accordingly constructed as an auxiliary objective to capture the data invariance. Finally, a scale-aware feature integrator is devised to fuse the representations of multi-scale ShapeSentences for class label prediction.
In summary, the main contributions of our work are as follows:
* We propose an effective and interpretable scheme named ShapeWordNet tailored to the physiological signal classification task, which integrates the representation learning strengths of deep neural networks with the interpretability advantages of time series shapelets.
* We design a ShapeWord Discretization strategy to deal with the intractable data properties of physiological signals and devise a cross-scale contrastive learning mechanism to alleviate the label sparsity issues. To the best of our knowledge, this is the first work to utilize shapelets for explainable sequence discretization and deep learning's representational ability promotion.
* We conduct extensive experiments on three real-world datasets to investigate the effectiveness of ShapeWordNet. The comparative results validate the model's outperformance over four categories of TSC methods and the visualization analysis illustrates the good interpretability of the sequence discretization idea based on shapelets.
## 2 Preliminaries
### Problem Formulation
Given a group of physiological signals \(\mathcal{T}=\{T_{1},T_{2},...,T_{m}\}\in\mathcal{R}^{m\times d\times n}\) and the corresponding label set \(\mathcal{Y}=\{y_{1},y_{2},...,y_{m}\}\in\mathcal{R}^{m}\), where each sample \(T_{i}\in\mathcal{R}^{d\times n}\) is a \(d\)-dimensional sequence of \(n\) time steps associated with a label \(y_{i}\), the goal of physiological signal classification is to train a model \(f_{\Theta}:T\mapsto y\) to predict the class label for a target instance.
### Definitions
**Definition 1: Shapelet.** A shapelet \(\tilde{S}\in\mathcal{R}^{l}\) (\(1\leq l\leq n\)) is a type of subsequence that well discriminates classes [33]. A good shapelet is supposed to have small \(sDist\), i.e. the shapelet distance [4], to instances of one class and have large \(sDist\) to those of another. The \(sDist\) is defined as the minimum euclidean distance between \(\tilde{S}\) and any subseries \(w\in W^{l}\) of a given time series \(T\in\mathcal{R}^{n}\) :
\[sDist(\tilde{S},T)=\min_{w\in W^{l}}(dist(\tilde{S},w)). \tag{1}\]
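To make the \(sDist\) computation concrete, the following minimal Python sketch implements Eq. 1 for the univariate case (the function names and the use of NumPy are our own; the paper does not prescribe an implementation):

```python
import numpy as np

def s_dist(shapelet: np.ndarray, series: np.ndarray) -> float:
    """Shapelet distance (Eq. 1): the minimum Euclidean distance between
    the shapelet and every same-length sliding window of the series."""
    l, n = len(shapelet), len(series)
    return min(
        float(np.linalg.norm(series[i:i + l] - shapelet))
        for i in range(n - l + 1)
    )
```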
**Definition 2: ShapeWord.** A ShapeWord is defined as the cluster centroid of a set of similar shapelets, which represents the abstract prototype [20] of their shared local pattern and can be referred to by the cluster label token \(SW\):
\[ShapeWord =ClusterCentroid(\tilde{S}_{1},...,\tilde{S}_{v}), \tag{2}\] \[SW =ClusterLabel(\tilde{S}_{1},...,\tilde{S}_{v}).\]
**Definition 3: ShapeSentence.** A ShapeSentence \(SS\) is the discretized token sequence of the continuous raw series \(T\), where each token \(SW_{i}\) refers to a ShapeWord and \(s\) is the length of this ShapeSentence:
\[T=[t_{1},...,t_{n}]\in\mathcal{R}^{d\times n}\to SS=[SW_{1},...,SW_{s}]\in \mathcal{R}^{d\times s}. \tag{3}\]
## 3 ShapeWordNet
The overall architecture of our proposed ShapeWordNet is shown in Figure 1, which consists of two stages: the **ShapeWord Discretization** stage and the **Cross-scale Contrastive Learning Assisted Classification** stage.
### ShapeWord Discretization
The first stage includes three steps: (1) **Shapelet Selection**, (2) **ShapeWord Generation** and (3) **Muti-scale ShapeSentence Transformation**.
**Shapelet Selection.** Shapelets are discriminative subsequences that can offer explanatory insights into the problem domain [33]. In this paper, we seize on these advantages of shapelets to extract interpretable and prominent local patterns. The traditional way of selecting shapelets is to evaluate the maximum information gain among all possible splits for each shapelet candidate, which would be extremely time-consuming in the physiological signal classification (PSC) scenario. To select shapelets quickly, we combine the single-scan shapelet discovery algorithm [23] with a random sampling strategy. Specifically, we first select 10 samples from each class at random. Then we generate shapelet candidates with a sliding window and evaluate each candidate's discrimination ability with the F-statistic measure that assesses the mean \(sDist\) distribution differences between classes:
\[F\left(S\right)=\frac{\sum_{v=1}^{V}\left(\bar{d}_{S,v}-\bar{d}_{S}\right)^{2} \Big{/}(V-1)}{\sum_{v=1}^{V}\sum_{j=1}^{N_{v}}\left(d_{S,v,j}-\bar{d}_{S,v} \right)^{2}\Big{/}(N-V)}, \tag{4}\]
where \(V\) is the class number, \(N\) is the total sample number, \(N_{v}\) is the number of class \(v\), \(S\) is the shapelet candidate to be assessed, \(\bar{d}_{S}\) is the mean value of its \(sDist\) vector \(D_{S}\), and \(d_{S,v,j}\) is its \(sDist\) with the \(j\)-th sample of class \(v\).
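As an illustration, Eq. 4 can be computed from a candidate's vector of \(sDist\) values as in the hedged sketch below; following Eq. 4 as written, the between-class term is not weighted by class size (all names here are ours):

```python
def f_statistic(sdists: np.ndarray, labels: np.ndarray) -> float:
    """F-statistic quality of one shapelet candidate (Eq. 4).
    sdists[i] = sDist(S, T_i); labels[i] is the class of sample i."""
    classes = np.unique(labels)
    V, N = len(classes), len(sdists)
    grand_mean = sdists.mean()
    between = sum((sdists[labels == v].mean() - grand_mean) ** 2
                  for v in classes) / (V - 1)
    within = sum(((sdists[labels == v] - sdists[labels == v].mean()) ** 2).sum()
                 for v in classes) / (N - V)
    return between / within
```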
Figure 1: An overview of the ShapeWordNet model.
**ShapeWord Generation.** Although plenty of shapelets can easily be found, many of them are similar to each other, which brings about feature redundancy and increases computational complexity. To alleviate this issue, we propose to generate the prototypes [20] that contain the key information shared by similar shapelets as the elementary units for sequence discretization. Toward this end, we cluster the selected shapelets with K-means and define the cluster centroids as their prototypes [28], as is suggested in Equation 2. Those prototypes are named ShapeWords and assigned numeric cluster label tokens for reference. For the multivariate case, we simply repeat the aforementioned algorithm and generate ShapeWords for each variable. In doing so, we establish a _vocabulary_ of ShapeWords which encompasses the significant local patterns over all variable dimensions, as is illustrated in Figure 1.
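A minimal sketch of this step, assuming scikit-learn's K-means and a matrix of equal-length shapelets from one variable (the paper does not name its clustering implementation):

```python
from sklearn.cluster import KMeans

def generate_shapewords(shapelets: np.ndarray, n_shapewords: int) -> np.ndarray:
    """Cluster similar shapelets; the centroids are the ShapeWords (Eq. 2),
    and cluster label i serves as the reference token SW_i."""
    km = KMeans(n_clusters=n_shapewords, n_init=10).fit(shapelets)
    return km.cluster_centers_  # row i is the ShapeWord referred to by token i
```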
Moreover, to validate the discriminative advantage of ShapeWords, we conduct a comparison experiment between the selected shapelets and the generated ShapeWords on the Sleep dataset [19]. In our experiment, we first generate shapelet candidates of different lengths from 5 to 200. For each scale, we select the top-100 shapelets to produce corresponding ShapeWords via K-means, where the cluster number is set equal to the class number, i.e. \(N_{SW}=8\). Then, we establish a validation set containing 10 random samples of each class to compare the average F-statistic qualities of the ShapeWords with those of the top-100 shapelets. The results in Figure 2 show that the mean F-statistic scores of the ShapeWords are competitively higher regardless of scale, which concretely demonstrates the effectiveness of ShapeWords in representing the prototypes of similar shapelets.
**Multi-scale ShapeSentence Transformation.** Traditional shapelet-based methods focus on extracting local patterns for classification, while in the PSC scenario, global contextual information such as the periodicity and variation of local patterns is also of critical importance. For example, sinus arrhythmia can be more effectively diagnosed from a periodic perspective. Hence, based on the extracted local patterns represented by ShapeWords, we discretize the entire sequence to further distill the global contexts. Firstly, we segment the original signals into non-overlapping subsequences via a sliding window of the
Figure 2: Results of the average F-statistic quality comparison between ShapeWords and the top-100 Shapelets w.r.t. the shapelet lengths, which is the higher the better.
ShapeWord size. Then, we assign each segment the cluster label token that refers to its nearest ShapeWord according to the Euclidean distance. In doing so, a long and complex point-wise time series can be interpretably compressed into a much shorter and simpler word-wise token sequence, which preserves the key features both locally and globally and is more robust to noise disturbance [30]. We call such a token sequence a ShapeSentence to suggest it is like a meaningful sentence in natural language, where the whole sentence and its constituent words contain information at different semantic levels.
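For a single scale, the discretization of one univariate series can be sketched as follows (a simplified illustration of Eq. 3; the function name is ours):

```python
def to_shapesentence(series: np.ndarray, shapewords: np.ndarray) -> np.ndarray:
    """Cut the series into non-overlapping windows of the ShapeWord length
    and map each window to the token of its nearest ShapeWord."""
    l = shapewords.shape[1]
    tokens = [
        int(np.argmin(np.linalg.norm(shapewords - series[i:i + l], axis=1)))
        for i in range(0, len(series) - l + 1, l)
    ]
    return np.array(tokens)
```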
In addition, the ShapeSentence is quite scalable and can be adaptively extended as a data augmentation strategy to mitigate the data sparsity issue. To be specific, we make the best of the large shapelet candidate space to generate different sizes of ShapeWords and transform each sample into multiple scales of ShapeSentences, which are further utilized to construct self-supervised signals in the contrastive learning mechanism. We dub this data augmentation technique the **M**ulti-scale **S**hapeSentence **T**ransformation (MST) strategy and summarize its procedure in Algorithm 1. When \(max=min+1\), it reduces to the simplest single-scale ShapeSentence transformation described above.
```
1:  Input: sample set \(T\), ShapeWord scale range \([min,max]\), ShapeWord vocabulary \(SWList\)
2:  \(AugmentedData\leftarrow\emptyset\)
3:  for \(T_{i}\) in \(T\) do
4:      \(MShapeSentences\leftarrow\emptyset\)
5:      for \(l\gets min\) to \(max\) do
6:          \(ShapeSentence\leftarrow\emptyset\)
7:          \(W_{i}^{l}\gets segmentSequences\left(T_{i},l\right)\)
8:          for subsequence \(S\) in \(W_{i}^{l}\) do
9:              \(SW_{h}\gets findClosest\left(S,SWList\right)\)
10:             \(ShapeSentence.append\left(SW_{h}\right)\)
11:         end for
12:         \(MShapeSentences.append\left(ShapeSentence\right)\)
13:     end for
14:     \(AugmentedData.append\left(MShapeSentences\right)\)
15: end for
16: return \(AugmentedData\)
```
**Algorithm 1** **M**ulti-scale **S**hapeSentence **T**ransformation (MST)
### Cross-scale Contrastive Learning Assisted Classification
The second stage adopts the paradigm of multi-task learning, where the model training is assisted by the **Cross-scale Contrastive Learning** and the learnt representations are fused through **Scale-aware Feature Integration** before final classification.
**Cross-scale Contrastive Learning.** Self-supervised learning has emerged as an alternative paradigm to overcome deep learning's heavy dependence on manual labels by leveraging the input data itself as supervision [25]. In recent years, great breakthroughs in this field have been achieved by contrastive learning [7], which aims at "learning to compare" through the Noise Contrastive
Estimation (NCE) [15] or the InfoNCE objectives [27]. In this work, we adaptively design a cross-scale contrastive learning mechanism to alleviate the label sparsity issues of PSC by constructing self-supervised signals based on the multi-scale transformed ShapeSentences. Since a pair of large-scale and small-scale ShapeSentences of the same sample can be regarded as its observations from multi-scale perspectives, it is safe to hypothesize that there exists latent invariance behind them, which the feature encoders can be trained to capture [16]. With this intuition, we encode each scale of ShapeSentence into fixed-size representations and compute the InfoNCE loss by comparing their similarities.
As is illustrated in Figure 1, for instance, \(h\) scales of sample \(i\)'s ShapeSentences, i.e. \(SS_{i}^{1}\), \(SS_{i}^{2}\), \(...\), \(SS_{i}^{h}\), are first fed into a set of encoders \(En^{1}\left(\cdot\right),...,En^{h}\left(\cdot\right)\) to obtain their representations, i.e. \(e_{i}^{1}=En^{1}\left(SS_{i}^{1}\right),...,e_{i}^{h}=En^{h}\left(SS_{i}^{h}\right)\) (one encoder corresponds to one scale). Then, in order to make the encoders capable of capturing the invariance shared by different scales of ShapeSentences, we define the cross-scale contrastive loss \(L_{sc}^{h}\) as:
\[L_{sc}^{h}=\frac{1}{\binom{h}{2}}\sum\limits_{u=1}^{h-1}\sum\limits_{v=u+1}^{ h}L_{u,v}^{sc}, \tag{5}\]
\[L_{u,v}^{sc}=E_{\left(e_{i}^{u},e_{i}^{v}\right)\sim P_{u,v}^{i}}\left[- \log\frac{f\left(e_{i}^{u},e_{i}^{v}\right)}{f\left(e_{i}^{u},e_{i}^{v}\right) +\sum\limits_{i\neq j}f\left(e_{i}^{u},e_{j}^{v}\right)}\right], \tag{6}\]
\[f\left(e_{i}^{u},e_{i}^{v}\right)=\left(e_{i}^{u}\right)^{T}e_{i}^{v}\Big{/} \tau, \tag{7}\]
where \(L_{u,v}^{sc}\) is the contrastive loss between the representations of the \(u\)-scale ShapeSentences and the \(v\)-scale ShapeSentences, with \(P_{u,v}^{i}\) as their joint sample distribution and \(f\left(\cdot\right)\) being the representation similarity measure. In this article, we simply apply the vector inner product to compute representation similarities, view the representation pair \(\left(e_{i}^{u},e_{i}^{v}\right)\) from the same sample as positive, and randomly select \(N-1\) different samples from the marginal distributions of other samples within the same mini-batch as negatives, e.g. the negative \(e_{j}^{v}\) from the \(v\)-scale marginal distribution of another sample \(j\), as is suggested by InfoNCE [27].
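The losses of Eqs. 5-7 can be sketched in PyTorch as below. Note one deliberate deviation: we use the standard softmax (exponentiated) form of InfoNCE, whereas Eq. 6 as printed applies \(f\) without an exponential; the framework choice and these function names are our assumptions:

```python
import torch
import torch.nn.functional as F

def pair_loss(e_u: torch.Tensor, e_v: torch.Tensor, tau: float = 1.0):
    """InfoNCE between two scales (Eqs. 6-7). e_u, e_v: (batch, dim);
    row i of each encodes the same sample, so positives are the diagonal."""
    sim = (e_u @ e_v.T) / tau                 # f(e_i^u, e_j^v) for all pairs
    targets = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, targets)      # softmax form of Eq. 6

def cross_scale_loss(reps, tau: float = 1.0):
    """Eq. 5: average pair_loss over all C(h, 2) scale pairs."""
    h = len(reps)
    pairs = [(u, v) for u in range(h - 1) for v in range(u + 1, h)]
    return sum(pair_loss(reps[u], reps[v], tau) for u, v in pairs) / len(pairs)
```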
In our scheme, we choose the deep dilated causal convolutional neural network [12] as the encoder backbone given its high efficiency and outstanding ability to capture long-range dependencies [5]. Besides, to learn multivariate interactions, we input each variable's ShapeSentence into a different channel of the encoder to obtain the representation feature vectors [35].
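A hedged sketch of one such encoder, using the hyperparameters reported in Section 4.1 (\(layer\_depth=3\), \(kernel\_size=3\), \(out\_channels=50\)); the activations, padding scheme and pooling are our assumptions rather than the paper's specification:

```python
import torch.nn as nn

class DilatedCausalEncoder(nn.Module):
    """Dilated causal Conv1d stack: one input channel per variable's
    ShapeSentence, global average pooling to a fixed-size representation."""
    def __init__(self, in_channels: int, hidden: int = 50, depth: int = 3, k: int = 3):
        super().__init__()
        layers = []
        for i in range(depth):
            d = 2 ** i                                       # dilation doubles per layer
            layers += [nn.ConstantPad1d(((k - 1) * d, 0), 0.0),  # left pad => causal
                       nn.Conv1d(in_channels if i == 0 else hidden, hidden,
                                 k, dilation=d),
                       nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):               # x: (batch, vars, tokens), tokens as floats
        return self.net(x).mean(dim=-1)  # (batch, hidden) representation e_i
```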
**Scale-aware Feature Integration for Classification.** Before classification, we need to integrate the learned representations from different scales of ShapeSentences at first. In terms of multi-scale feature fusion, general methods like average pooling, maxpooling and direct concatenation [8] are all model-agnostic that do not take the domain particularities into consideration. To adaptively make full use of the multi-scale information in our method, we regard each scale's representation as complementary to the raw data's invariant features and
concatenate them by channel. We then input the concatenated representation tensor into the **S**cale-aware **F**eature **I**ntegrator (SFI) for feature fusion:
\[C_{i}=SFI\left(E_{i}\right), \tag{8}\]
where SFI is a single one-dimensional convolution layer with different channels catering to different scales of ShapeSentence representations, \(E_{i}=\left[e_{i}^{1},...,e_{i}^{h}\right]\in\mathcal{R}^{p\times h}\) is sample \(i\)'s concatenated representation tensor, and \(C_{i}\in\mathcal{R}^{q}\) is the integrated feature vector.
Finally, we input the fused feature representation \(C_{i}\) into a single linear classifier to obtain the classification outcome \(\hat{y}_{i}\):
\[\hat{y}_{i}={W_{c}}^{\top}C_{i}+W_{0}, \tag{9}\]
where \(W_{c}\in\mathcal{R}^{c\times q}\) and \(W_{0}\in\mathcal{R}^{c}\) are learnable parameters.
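One plausible reading of Eqs. 8-9, treating each scale's representation as a channel of a single Conv1d whose kernel spans the feature axis (the kernel size is not stated in the paper, so collapsing the \(p\)-length axis in one step is our assumption):

```python
class SFIClassifier(nn.Module):
    """Scale-aware Feature Integrator (Eq. 8) + linear classifier (Eq. 9)."""
    def __init__(self, p: int, h: int, q: int, n_classes: int):
        super().__init__()
        self.sfi = nn.Conv1d(in_channels=h, out_channels=q, kernel_size=p)
        self.fc = nn.Linear(q, n_classes)

    def forward(self, E):                # E: (batch, h, p), one channel per scale
        C = self.sfi(E).squeeze(-1)      # (batch, q) fused feature vector C_i
        return self.fc(C)                # class logits \hat{y}_i
```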
**Multi-task Optimization.** To optimize the whole network, we combine the cross-entropy loss for classification with the cross-scale contrastive loss for self-supervision as a multi-task goal [29] for joint training:
\[\begin{split} L&=L_{ce}+\lambda L_{sc}^{h}\\ &=-\sum_{c}^{|C|}y_{c}\log\left(\hat{y}_{c}\right)+\frac{\lambda }{\binom{h}{2}}\sum_{u=1}^{h-1}\sum_{v=u+1}^{h}L_{u,v}^{sc},\end{split} \tag{10}\]
where \(\lambda\) is used to balance different losses and \(h\) is the number of scales.
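Putting the pieces together, the joint objective of Eq. 10 reduces to a one-liner on top of the sketches above (\(\lambda=0.5\) is the default reported in Section 4.1):

```python
def total_loss(logits, y, reps, lam: float = 0.5, tau: float = 1.0):
    """Eq. 10: classification cross-entropy plus the weighted cross-scale
    contrastive term (cross_scale_loss from the sketch above)."""
    return F.cross_entropy(logits, y) + lam * cross_scale_loss(reps, tau)
```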
## 4 Experiments
### Experimental Setup
**Datasets.** We conduct experiments on three real-world public datasets from PhysioNet [13], where two datasets are ECG signals used for cardiac disease classification in the 2020 Physionet/Computing Cardiology Challenge [2] and one dataset contains EEG signals popular in sleep-stage classification [19]. In our experiment, we pick out part of the single-label samples and randomly split them into 80%-20% train-test datasets. The statistics of these datasets are summarized in Table 1, where Trainsize/Testsize means the sample number of the training/testing dataset, Dim/Len refers to the variable number and time steps, and Ratio stands for the ratio of one class number to all.
**Baselines and Variants.** We compare three variants of our ShapeWordNet (SWN) scheme with four categories of time series classification (TSC) baselines:
**(1) Shapelet-based:** We employ the Shapelet Transformation (**ST**) [23] and Learning Shapelets (**LS**) [14] for the first baseline group. ST searches the shapelets and transforms the original data into distance vectors, while LS learns the shapelets directly by optimizing the goal function of classification.
**(2) Dictionary-based:** We pick out the classical **SAX-VSM**[32] and the recent **WEASEL+MUSE**[31] for sequence discretization comparison, which respectively utilize the SAX words and SFA words for time series discretization and build classifiers based on their frequency patterns.
**(3) SOTA:** This group contains two state-of-the-art TSC methods, including the non-DL model MiniRocket [9] and the DL model TapNet [34].
**(4) CNN-based:** Considering the noticeable achievements of convolutional neural networks (CNN) in sequence modeling [5], we adopt four different architectures of CNN-based TSC approaches as the deep learning baselines, which are **MCNN**[8], **LSTM-FCN**[18], **ResCNN**[36] and **TCN**[5].
**(5) Variants:** We put forward three variants of our method for the ablation study: (1) **SWN w/o SD** stands for ShapeWordNet without ShapeWord Discretization, i.e. the bare encoder backbone, which coincides with the **TCN** baseline, (2) **SWN w/o CCLM** represents ShapeWordNet without the Cross-scale Contrastive Learning Mechanism, and (3) **SWN w/o SFI** means ShapeWordNet without the Scale-aware Feature Integrator.
**Implementation Details.** We set \(layer\_depth=3\), \(kernel\_size=3\), and \(out\_channels=50\) as the default parameters for each of the dilated causal convolutional encoders. In terms of the ShapeWord/shapelet/word number parameters in the three SWN variants and the Dictionary-based and Shapelet-based baselines, we set them equal to the task class number times the variable dimensions of each dataset, i.e. 8*2 for Sleep and 9*12 for CPSC and Georgia. As for the ShapeWord/shapelet/word lengths, we set 10 for SWN w/o CCLM, SAX-VSM, WEASEL+MUSE, ST and LS, and set [10,25,50] for SWN w/o SFI and SWN. In SWN w/o SFI and SWN, the \(\lambda\) in Equation 10 that balances the losses is set to 0.5. Besides, we set \(batch\_size=30\), \(training\_epochs=50\) and use the Adam optimizer with \(learning\_rate=0.001\) for all methods.
We evaluate the classification performance with two metrics: **Accuracy (ACC)** and **Macro F1-score (MAF1)**.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Property & CPSC & Georgia & Sleep \\
\hline
Trainsize & 5,123 & 2,676 & 12,787 \\
Testsize & 1,268 & 668 & 1,421 \\
Dim/Len & 12/5,000 & 12/5,000 & 2/3,000 \\
Category & ECG & ECG & EEG \\
\hline
Multi-class Ratio (\%) & [14.07, 24.24, 15.42, 9.53, 12.26, 10.85, 3.03, 2.69, 7.91] & [52.73, 12.97, 7.47, 6.50, 6.69, 3.81, 3.77, 3.51, 2.54] & [48.42, 3.65, 21.40, 4.25, 4.40, 9.93, 0.08, 7.86] \\
\hline
Binary Ratio (\%) & [14.07, 85.93] & [52.73, 47.27] & [48.42, 51.58] \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Descriptive statistics of three datasets.
ACC measures the overall performance by calculating how many samples are correctly classified in total, while MAF1 avoids the measurement bias caused by class imbalance and assesses a model's discrimination ability more fairly.
We have implemented the proposed method in Python 3.7 and run all the experiments on a machine running CentOS 7.9.2009 with 4 Tesla V100 GPUs and 2 Intel Xeon Gold 5218 CPUs @2.30GHz.
### Performance Comparison
To comprehensively evaluate the performance of our method, we conduct two experimental tasks of binary classification (BC) and multi-class classification (MC) on each dataset, where all the labels in MC other than the positive ones constitute the negative labels in BC. According to Table 1, the BC class ratios of Georgia and Sleep are more balanced than their MC class ratios, which enables the BC tasks to serve as control experiments concerning the label sparsity issue. Although the BC class ratio of CPSC is less balanced than its MC class ratio, CPSC has a more balanced MC ratio than Georgia and Sleep, making it a control dataset to indicate the influence of class balance on model performance. Table 2 reports the ACC and MAF1 of different methods on the two tasks across the three datasets and denotes the best ACC and MAF1 for each task with boldface. Consistent with our intuition, the major observations are summarized as follows:
(1) We can see that our SWN variants significantly outperform four categories of TSC methods on three datasets with an average improvement of 9.28% on BC tasks and 31.40% on MC tasks. These results strongly testify to
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{CPSC} & \multicolumn{4}{c}{Georgia} & \multicolumn{4}{c}{Sleep} \\
\cline{2-13}
Method & \multicolumn{2}{c}{2 Classes} & \multicolumn{2}{c}{9 Classes} & \multicolumn{2}{c}{2 Classes} & \multicolumn{2}{c}{9 Classes} & \multicolumn{2}{c}{2 Classes} & \multicolumn{2}{c}{8 Classes} \\
\cline{2-13}
 & ACC & MAF1 & ACC & MAF1 & ACC & MAF1 & ACC & MAF1 & ACC & MAF1 & ACC & MAF1 \\
\hline
LS & 0.8400 & 0.4579 & 0.1554 & 0.0299 & 0.4970 & 0.4970 & 0.2814 & 0.1112 & 0.7847 & 0.7843 & 0.6946 & 0.3244 \\
ST & 0.6500 & 0.5911 & 0.6900 & 0.1331 & 0.5100 & 0.4516 & 0.2500 & 0.2319 & 0.8300 & 0.8311 & 0.5600 & 0.2839 \\
\hline
SAX-VSM & 0.1617 & 0.1426 & 0.1483 & 0.1141 & 0.5973 & 0.5904 & 0.1272 & 0.1106 & 0.6868 & 0.6666 & 0.5771 & 0.2994 \\
WEASEL+MUSE & 0.8303 & 0.4987 & 0.2505 & 0.3850 & 0.6102 & 0.5561 & 0.5653 & 0.1567 & 0.7136 & 0.7126 & 0.5517 & 0.3334 \\
\hline
MiniRocket & 0.8450 & 0.4646 & 0.0935 & 0.0220 & 0.5308 & 0.3802 & 0.0793 & 0.0164 & 0.9369 & 0.9363 & 0.6864 & 0.4211 \\
TapNet & 0.7167 & 0.5366 & 0.1372 & 0.1148 & 0.5132 & 0.5123 & 0.2695 & 0.1180 & 0.7910 & 0.7904 & 0.5742 & 0.2775 \\
\hline
MCNN & 0.8564 & 0.4613 & 0.1044 & 0.0210 & 0.4850 & 0.3266 & 0.4850 & 0.0726 & 0.5011 & 0.3338 & 0.5039 & 0.0838 \\
LSTM-FCN & 0.8967 & 0.7865 & 0.6727 & 0.6134 & 0.5150 & 0.3399 & 0.1617 & 0.0607 & 0.7607 & 0.7566 & 0.6144 & 0.2489 \\
ResCNN & 0.8431 & 0.4574 & 0.1300 & 0.0489 & 0.5225 & 0.4976 & 0.1766 & 0.0417 & 0.7741 & 0.7575 & 0.1323 & 0.0535 \\
TCN (SWN w/o SD) & 0.8904 & 0.7734 & 0.6491 & 0.5423 & 0.5928 & 0.5883 & 0.5045 & 0.1740 & 0.8951 & 0.8946 & 0.7241 & 0.4645 \\
\hline
SWN w/o CCLM & 0.9014 & 0.7893 & 0.6924 & 0.6395 & **0.7590** & **0.7590** & 0.7246 & 0.3650 & 0.9198 & 0.9192 & 0.76425 & 0.5210 \\
SWN w/o SFI & 0.9085 & 0.8059 & 0.7397 & 0.6778 & 0.7350 & 0.7325 & 0.7156 & **0.4538** & 0.9346 & 0.9343 & 0.8023 & 0.5638 \\
SWN & **0.9101** & **0.8212** & **0.7516** & **0.7156** & 0.7096 & 0.7094 & **0.7365** & 0.4256 & **0.9374** & **0.9370** & **0.8093** & **0.5645** \\
\hline
Improv (\%) & 1.49 & 4.41 & 8.93 & 16.66 & 19.70 & 20.58 & 45.99 & 83.53 & 4.73 & 4.74 & 11.77 & 21.53 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The performance comparison between different methods on BC and MC tasks of three datasets with ACC and MAF1 metrics. The best performing methods are boldfaced and the best baselines are underlined. Improv (%) measures the relative improvements of SWN variants over the best baselines. Note that the improvements are statistically significant with a two-sided t-test \(p\)\(<\)0.01.
the effectiveness of our method in dealing with the PSC problem, especially under severe label sparsity.
(2) Our proposed ShapeWordNet surpasses the classical Shapelet-based ST and LS methods by a large margin, which underlines the significance of utilizing deep learning models to capture multivariate interaction and distilling both the local and global information for time series classification.
(3) Compared with SAX-VSM and WEASEL+MUSE, our method wins overwhelmingly on all tasks. We believe this phenomenon concretely demonstrates the advantages of ShapeWords in representing discriminative local patterns over the SAX words and SFA words. Besides, it also indicates learning abstract feature representations is more effective than relying on discrete statistical patterns.
(4) In contrast to the CNN-based methods, which rank second in 5 tasks (3 for LSTM-FCN and 2 for TCN), our method performs particularly better on the three MC tasks, obtaining an average rise of 19.09% in MAF1. Such a noticeable improvement soundly validates the effectiveness of leveraging ShapeWord Discretization and Cross-scale Contrastive Learning for label sparsity mitigation and invariant feature extraction.
### Ablation Study
To evaluate the effectiveness of each component, we investigate the performance of the three variants. Firstly, we observe that SWN w/o CCLM substantially surpasses SWN w/o SD by approximately 17.92% on CPSC MC, 12.16% on Sleep MC and 109.77% on Georgia MC in MAF1, which substantiates the effectiveness of ShapeWord Discretization in reducing noise disturbance and distilling prominent features. In addition, the fact that SWN and SWN w/o SFI defeat SWN w/o CCLM on 10 tasks with a rise of at least 5.99% in MC MAF1 convincingly verifies the remarkable progress made by the CCLM component in relieving label sparsity. Moreover, the slight edges of SWN over SWN w/o SFI on 9 tasks indicate that SFI contributes modestly to performance.
### ShapeWord Discretization Analysis
As the results in Table 2 imply, ShapeWord Discretization plays the most critical role in boosting the model's generalization. Therefore, it is necessary to explain how it really takes effect. To investigate the interpretability and effectiveness of ShapeWord Discretization, we take a close look at its characteristics through two visualization experiments.
First, we conduct a case study to intuitively explore the interpretability of the ShapeWord Discretization. Figure 3 illustrates a case of two atrial fibrillation (AF) samples and two sinus rhythm (NSR) samples, where the four raw signals in blue lines are discretized into four ShapeSentences (\([1,1,8,8,1]\), \([8,8,1,1,1]\), \([1,1,1,3,1]\) and \([1,1,3,1,1]\)) in red lines respectively. It can be noticed that the pattern of AF, i.e. the disease of sustained tachyarrhythmia commonly seen in clinical practice, seems to be captured by the violently fluctuating ShapeWord \(SW_{8}\) presented below. In contrast, the peculiarities of NSR, i.e. the normal state,
seem to be represented by the combination of the ShapeWord \(SW_{1}\) and \(SW_{3}\). Hence, it is reasonable to believe that both the discriminative local patterns and their coherence can be well preserved via ShapeWord Discretization.
Second, to verify the contributions of ShapeWord Discretization to the DL model's representation learning, we conduct a t-SNE analysis [26] to compare the representations output by SWN w/o CCLM and SWN w/o SD respectively on the Georgia BC task. In Figure 4(a) and 4(b), the sample representations output by SWN w/o CCLM (i.e. the variant with ShapeWord Discretization) are more closely grouped, with a clearer clustering boundary, than those output by SWN w/o SD, which convincingly demonstrates the effect of ShapeWord Discretization in relieving the influence of data noise and promoting the representational ability of deep neural networks.
### Hyper-parameter Sensitivity
In this section, we discuss the impact of several key hyper-parameters on the performance of our method, which include the Scale Number \(N_{s}\), the Loss Balance Factor \(\lambda\), the ShapeWord Number \(N_{SW}\) and the ShapeWord Length \(L_{SW}\).
Figure 3: A case of two AF samples and two NSR samples transformed by ShapeWord Discretization with \(window\_size=50\). The raw signals are in blue lines and their ShapeSentences are in red ones, where each token refers to a ShapeWord shown below.
**Performance w.r.t. Scale Number.** We display the model performance w.r.t. the scale number \(N_{s}\) in Figure 5, where the scales for MST are successively picked from the range [5, 10, 25, 50, 100], e.g. 2 for [5, 10] and 3 for [5, 10, 25]. Figure 5(a) and Figure 5(b) show that for different datasets the impact of scale number is different and an ideal interval of this parameter shared by three datasets seems to be around [2, 3]. Based on this observation, we implement 3 scales of MST for SWN in our experiments.
**Performance w.r.t. Loss Balance Factor.** The parameter \(\lambda\) in Equation 10 is to balance the loss between classification training and contrastive learning. As is illustrated in Figure 5(c) and 5(d), the performance of SWN seems more likely to be affected by \(\lambda\) on MC tasks than on BC tasks, and the overlapping optimal interval of \(\lambda\) for three datasets is suggested to be \([0.3,0.7]\). In our experiment, we adopt \(\lambda=0.5\) as default given the fact that SWN obtains the best performance on two datasets under this condition.
**Performance w.r.t. ShapeWord Number.** Since ShapeWord is defined as the centroid of a cluster of similar shapelets, its number hence depends on the cluster number. In this article, we leverage K-means to generate ShapeWords. To exclude the impact of contrastive learning, we adopt the variant SWN w/o
Figure 4: T-SNE visualization of representations produced by (a) SWN w/o SD and (b) SWN w/o CCLM on Georgia BC task (0 indicates positive and 1 means negative).
Figure 5: SWN’s performance of MAF1 w.r.t. the number of scales \(N_{s}\) and the loss balance factor \(\lambda\) on three datasets over BC task and MC task.
CCLM to conduct the sensitivity experiment regarding the parameter of ShapeWord Number \(N_{SW}\). Graphs in Figure 6 show that despite the influence of ShapeWord Length \(L_{SW}\), the overlapping optimal interval of three datasets for \(N_{SW}\) is approximately \([5,12]\). In our experiment, we choose the class number as this parameter's default setting for each dataset (i.e. 8 for Sleep, 9 for CPSC and Georgia), which is also consistent with the original definition of shapelets.
**Performance w.r.t. ShapeWord Length.** It can be observed from Figure 6 that the performance curve of each dataset shows a trend of first moving up and then going down as \(L_{SW}\) increases, which indicates short ShapeWords are more suitable for physiological signal discretization. And the overlapping optimal interval of ShapeWord Length among three datasets is about \([10,50]\), which is why we choose the three scales \([10,25,50]\) for SWN.
## 5 Conclusion
In this paper, we proposed ShapeWordNet to deal with physiological signal classification. The uniqueness of our model lies in generating prototypes of discriminative local patterns via shapelets and discretizing the point-wise raw signals into token sequences of subseries. Given the label sparsity issue, we designed a cross-scale contrastive learning mechanism to assist model optimization, where a multi-scale ShapeSentence transformation strategy was adaptively utilized to augment the data. The experimental results demonstrated both the effectiveness and the interpretability of our method, paving the way for its extension to general time series analysis in the future.
**Acknowledgement.** This research was partially supported by grant from the National Natural Science Foundation of China (Grant No. 61922073). This work also thanks the support of fundings MAI2022C007 and WK5290000003.
Figure 6: Performance of SWN w/o CCLM w.r.t. ShapeWord Number in MC tasks under five different settings of ShapeWord Length. |
2303.11341 | What does it take to catch a Chinchilla? Verifying Rules on Large-Scale
Neural Network Training via Compute Monitoring | As advanced machine learning systems' capabilities begin to play a
significant role in geopolitics and societal order, it may become imperative
that (1) governments be able to enforce rules on the development of advanced ML
systems within their borders, and (2) countries be able to verify each other's
compliance with potential future international agreements on advanced ML
development. This work analyzes one mechanism to achieve this, by monitoring
the computing hardware used for large-scale NN training. The framework's
primary goal is to provide governments high confidence that no actor uses large
quantities of specialized ML chips to execute a training run in violation of
agreed rules. At the same time, the system does not curtail the use of consumer
computing devices, and maintains the privacy and confidentiality of ML
practitioners' models, data, and hyperparameters. The system consists of
interventions at three stages: (1) using on-chip firmware to occasionally save
snapshots of the neural network weights stored in device memory, in a form
that an inspector could later retrieve; (2) saving sufficient information about
each training run to prove to inspectors the details of the training run that
had resulted in the snapshotted weights; and (3) monitoring the chip supply
chain to ensure that no actor can avoid discovery by amassing a large quantity
of un-tracked chips. The proposed design decomposes the ML training rule
verification problem into a series of narrow technical challenges, including a
new variant of the Proof-of-Learning problem [Jia et al. '21]. | Yonadav Shavit | 2023-03-20T13:50:05Z | http://arxiv.org/abs/2303.11341v2 | # What does it take to catch a Chinchilla?
###### Abstract
As advanced machine learning systems' capabilities begin to play a significant role in geopolitics and societal order, it may become imperative that (1) governments be able to enforce rules on the development of advanced ML systems within their borders, and (2) countries be able to verify each other's compliance with potential future international agreements on advanced ML development. This work analyzes one mechanism to achieve this, by monitoring the computing hardware used for large-scale NN training. The framework's primary goal is to provide governments high confidence that no actor uses large quantities of specialized ML chips to execute a training run in violation of agreed rules. At the same time, the system does not curtail the use of consumer computing devices, and maintains the privacy and confidentiality of ML practitioners' models, data, and hyperparameters. The system consists of interventions at three stages: (1) using on-chip firmware to occasionally save snapshots of the neural network weights stored in device memory, in a form that an inspector could later retrieve; (2) saving sufficient information about each training run to prove to inspectors the details of the training run that had resulted in the snapshotted weights; and (3) monitoring the chip supply chain to ensure that no actor can avoid discovery by amassing a large quantity of un-tracked chips. The proposed design decomposes the ML training rule verification problem into a series of narrow technical challenges, including a new variant of the Proof-of-Learning problem [Jia et al. '21].
## 1 Introduction
Many of the remarkable advances of the past 5 years in deep learning have been driven by a continuous increase in the quantity of _training compute_ used to develop cutting-edge models [25, 21, 54]. Such large-scale training has been made possible through the concurrent use of hundreds or thousands of specialized accelerators with high inter-chip communication bandwidth (such as Google TPUs, NVIDIA A100 and H100 GPUs, or AMD MI250 GPUs), employed for a span of weeks or months to compute thousands or millions of gradient updates. We refer to these specialized accelerators as _ML chips_, which we distinguish from consumer-oriented GPUs with lower interconnect bandwidth (e.g., the NVIDIA RTX 4090, used in gaming computers).
This compute scaling trend has yielded models with ever more useful capabilities. However, these advanced capabilities also bring with them greater dangers from misuse [7]. For instance, it is increasingly plausible that criminals may soon be able to leverage heavily-trained code-generation-and-execution models to autonomously identify and exploit cyber-vulnerabilities, enabling ransomware attacks on an unprecedented scale. 1 Even absent malicious intent, rival companies or countries trapped in an AI "race dynamic" may face substantial pressure to cut corners on testing and risk-mitigation, in order to deploy high-capability ML systems in the hopes of outmaneuvering their competitors economically or militarily. The edge-case behaviors of deep learning models are notoriously difficult to debug [20], and without thorough testing and mitigation, such bugs in increasingly capable systems may have increasingly severe consequences. Even when rival parties would all prefer to individually do more testing and risk-mitigation, or even
forgo developing particularly dangerous types of ML models entirely [60], they may have no way to verify whether their competitors are matching their levels of caution.
In the event that such risks do emerge, governments may wish to enforce limits on the large-scale development of ML models. While law-abiding companies will comply, criminal actors, negligent companies, and rival governments may not, especially if they believe their rule-violations will go unnoticed. It would therefore be useful for governments to have methods for reliably _verifying_ that large-scale ML training runs comply with agreed rules.
These training runs' current need for large quantities of specialized chips leaves a large physical and logistical footprint, meaning that such activities are generally undertaken by sizable organizations (e.g., corporate or governmental data-center operators) well-equipped to comply with potential regulations. Yet even if the relevant facilities are known, there is no easily-observable difference between training a model for social benefit, and training a model for criminal misuse -- they require the same hardware, and at most differ in the code and data they use. Given the substantial promise of deep learning technologies to benefit society, it would be unfortunate if governments, in a reasonable attempt to curtail harmful use-cases but unable to distinguish the development of harmful ML models, ended up repressing the development of beneficial applications of ML as well. Such dynamics are already appearing: the US Department of Commerce's rationale for its October 2022 export controls denying the sale of high-performance chips to the People's Republic of China, while not specific to ML, was based in part on concern that those chips might be used to develop weapons against the United States or commit human rights abuses [5]. If the US and Chinese governments could reach an agreement on a set of permissible beneficial use-cases for export-controlled chips, and had a way to verify Chinese companies' compliance with that agreement, it may be possible to prevent or reverse future restrictions.
Such a system of verification-based checks and balances, distinguishing between "safe" and "dangerous" ML model training, might seem infeasible. Yet a similar system has been created before. At the dawn of the nuclear age, nations faced an analogous problem: reactor-grade uranium (used for energy) and weapons-grade uranium (used to build nuclear bombs) could be produced using the same types of centrifuges, just run for longer and in a different configuration. In response, in 1970 the nations of the world adopted the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and empowered the International Atomic Energy Agency (IAEA) to verify countries' commitments to limiting the spread of nuclear weapons, while still harnessing the benefits of nuclear power. This verification framework has helped the world avoid nuclear conflict for over 50 years, and helped limit nuclear weapons proliferation to just 9 countries while spreading the benefits of safe nuclear power to 33 [40]. If future progress in machine learning creates the domestic or international political will for enacting rules on large-scale ML development, it is important that the ML community is ready with technical means for verifying such rules.
### Contributions
In this paper, we propose a monitoring framework for enforcing rules on the _training_ of ML2 models using large quantities of specialized ML chips. Its goal is to enable governments to verify that companies and other governments have complied with agreed guardrails on the development of ML models that would otherwise pose a danger to society or to international stability. The objective of this work is to lay out a possible system design, analyze its technical and logistical feasibility, and highlight important unsolved challenges that must be addressed to make it work.
Footnote 2: Throughout the text, we use “ML” to refer to deep-learning-based machine learning, which has been responsible for much of the progress of recent years.
The proposed solution has three parts:
1. To prove compliance, an ML chip owner employs firmware that logs limited information about that chip's activity, with their employment of that firmware attested via hardware features. We propose an activity logging strategy that is both lightweight, and maintains the confidentiality of the chip-owner's trade secrets and private data, based on the NN weights present in the device's high-bandwidth memory.
2. By inspecting and analyzing the logs of a sufficient subset of the chips, inspectors can provably determine whether the chip-owner executed a rules-violating training run in the past few months, with high probability.
3. Compute-producing countries leverage supply-chain monitoring to ensure that each chip is accounted for, so that actors can't secretly acquire more ML chips and then underclaim their total to hide from inspectors.
The system is compatible with many different rules on training runs (see Section 2.1), including those based on the total chip-hours used to train a model, the type of data and algorithms used, and whether the produced model exceeds a performance threshold on selected benchmarks. To serve as a foundation for meaningful international coordination, the framework aspires to reliably detect violations of ML training rules _even in the face of nation-state hackers attempting to circumvent it_. At the same time, the system does not force ML developers to disclose their confidential training data
or models. Also, as its focus is restricted to specialized data-center chips, the system does not affect individuals' use of their personal computing devices.
Section 2 introduces the problem of verifying rules on large-scale ML training. Section 3 provides an overview of the solution, and describes the occasional inspections needed to validate compliance. Sections 4, 5, and 6 discuss the interventions at the chip-level, data-center-level, and supply-chain respectively. Section 7 concludes with a discussion of the proposal's benefits for different stakeholders, and lays out near-term next steps.
### Limitations
The proposed system's usefulness depends on the continued importance of large-scale training to produce the most advanced (and thus most dangerous) ML models, a topic of uncertainty and ongoing disagreement within the ML community. The framework's focus is also restricted only to training runs executed on specialized data-center accelerators, which are today effectively necessary to complete the largest-scale training runs without a large efficiency penalty. In Appendix A, we discuss whether these two trends are likely to continue. Additionally, hundreds of thousands of ML chips have already been sold, many of which do not have the hardware security features required by the framework, and may not be retrofittable or even locatable by governments. These older chips' importance may gradually decrease with Moore's Law. But combined with the possibility of less-efficient training using non-specialized chips, these unmonitored compute sources present an implicit lower bound on the minimum training run size that can be verifiably detected by the proposed system. Still, it may be the case that frontier training runs, which result in models with new emergent capabilities to which society most needs time to adapt, are more likely to require large quantities of monitorable compute.
More generally, the framework does not apply to small-scale ML training, which can often be done with small quantities of consumer GPUs. We acknowledge that the training of smaller models (or fine-tuning of existing large models) can be used to cause substantial societal harm (e.g., computer vision models for autonomous terrorism drones [44]). Separately, if a model is produced by a large-scale training run in violation of a future law or agreement, that model's weights may from then on be copied undetectably, and it can be deployed using consumer GPUs [55] (as ML inference requires far lower inter-chip communication bandwidth). Preventing the proliferation of dangerous trained models is itself a major challenge, and beyond the scope of this work. More broadly, society is likely to need laws and regulations to limit the harms from bad actors' misusing such ML models. However, exhaustively _enforcing_ such rules at the hardware-level would require surveilling and policing individual citizens' use of their personal computers, which would be highly unacceptable on ethical grounds. This work instead focuses attention upstream, regulating whether and how the most dangerous models _are created in the first place_.
Lastly, rather than proposing a comprehensive shovel-ready solution, this work provides a high-level solution design. Its contribution is in isolating a set of open problems whose solution would be sufficient to enable a system that achieves the policy goal. If these problems prove unsolvable, the system's design will need to be modified, or its guarantees scaled back. We hope that by providing a specific proposal to which the community can respond, we will initiate a cycle of feedback, iteration, and counter-proposals that eventually culminates in an efficient and effective method for verifying compliance with large-scale ML training rules.
### Related Work
This paper joins an existing literature examining the role that compute may play in the governance of AI. Early work by Hwang [23] highlighted the potential of computing power to shape the social impact of ML. Concurrent work by Sastry et al. [51] identifies attributes of compute that make it a uniquely useful lever for governance, and provides an overview of policy options. Closely-related work by Baker [4] draws lessons from nuclear arms control for the compute-based verification of international agreements on large-scale ML.
Rather than focusing on specific policies, this work proposes a technical platform for verifying many possible regulations and agreements on ML development. Already, the EU AI Act has proposed establishing risk-based regulations on AI products [61], while US senators have proposed an "Algorithmic Accountability Act" to oversee algorithms used in critical decisions [11], and the Cyberspace Administration of China (CAC) has established an "algorithm registry" for overseeing recommender systems [43]. Internationally, many previous works have discussed the general feasibility and desirability of AI arms control [47, 12, 37], with [52] highlighting the importance of verification measures to the success of potential AI arms control regimes. Past work has also explored the benefits of international coordination on non-military AI regulation [13].
The proposed solution involves proving that a rule-violating ML training run was _not_ done, in part by proving which other training runs _were_ done. The analysis of the latter problem is heavily inspired by the literature on Proof-of-Learning [24; 15] (discussed further in Section 5). Other works have used tools from cryptography to train NN models securely across multiple parties [63], and to securely prove the correctness of NN inference [30]. However, these approaches suffer large efficiency penalties and cannot yet be scaled to cutting-edge model training, rendering them nonviable as a method for verifying rules on large-scale training runs.
## 2 The Problem: Detecting Violations of Large-Scale ML Training Rules
We focus on the setting in which one party (the "Verifier") seeks to verify that a given set of ML training rules is being followed, and another party (the "Prover") is developing the ML system and wants to prove to the Verifier that it is complying with those rules. The Verifier can request that the Prover take actions, such as disclosing information on training runs, in order to help the Verifier determine the Prover's compliance. The Prover is a "covert adversary" [2] - they may benefit from _violating_ the ML training rule, but will only seek to violate the rule _if they can still appear compliant_ to the Verifier. There are two real-world Prover-Verifier relationships we are particularly interested in:
* _Domestic Oversight_: Governments have a clear interest that the ML systems developed by companies operating within their borders comply with certain rules. Regulators can level both civil and criminal penalties on organizations caught violating rules, and often require organizations to maintain records that prove regulatory compliance (e.g., financial transaction record-keeping requirements).
* _International Oversight_: The most significant types of ML training rules may be those enforced internationally (on companies and governments in multiple countries), and verified by other governments or international bodies. These include enforcing globally-beneficial rules (e.g., combatting disinformation), and verifying arms control agreements (e.g., limiting the development of autonomous code-generating cyberweapons). There is precedent for countries abiding by international agreements with strict monitoring regimes when they stand to benefit, such as Russia's historically allowing random U.S. inspections of its missiles as a part of the START treaties, in exchange for certainty that the U.S. was abiding by the same missile limits [53].
Thus, the problem we address is: what minimal set of verifiable actions can the Verifier require the Prover to take that would enable the Verifier to detect, with high probability, whether the Prover violated any training rules?
### What types of rules can we enforce by monitoring ML training?
It is important that standards and agreements on ML training focus on preventing concrete harm, and otherwise leave society free to realize the broad benefits of highly-capable ML systems. Indeed, there are many types of ML models that should not only be legal to train, but that should be open-sourced so that all of society can benefit from them [58]. The proposed framework focuses only on enforcing rules on the training of those more dangerous models whose creation and distribution would substantially harm society or international security. Indeed, as mentioned in Section 1.2, this framework _could not_ prevent smaller-scale training of ML models, and thus limits the risk of overreach by authoritarian Verifiers. Below are some informative properties that a Verifier could determine by monitoring the training process of an ML model:
* _Total training compute_, which has proven to be an indicator for ML models' capabilities [25; 59].
* _Properties of the training data_, such as whether a language model's text dataset contains code for cybersecurity exploits.
* _Properties of the hyperparameters_, such as the fraction of steps trained via reinforcement learning.
* _The resulting model's performance on benchmarks designed to elicit its capabilities_, including whether the model's capabilities exceed agreed-on thresholds, and including interactive benchmarks (e.g. finetuning the model on a particular task).
* Combinations of the above -- for example, "if a model was trained on RL-for-code-generation for greater than \(X\) FLOPs, then it should not be trained beyond \(Y\) performance on \(Z\) benchmarks."
Ultimately, these rule thresholds should be selected based on the model capabilities that would result. Current "scaling law" extrapolations are not yet able to reliably predict ML models' downstream capabilities [16], so finding principled methods for deciding on rule-thresholds that achieve desired policy outcomes is an important area for future work.
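To make the combination rules above concrete, the sketch below shows how a Verifier-side checker might encode the compound rule from the last bullet as a predicate over reported training-run properties. This is purely illustrative: the field names, thresholds, and benchmark identifier are hypothetical placeholders, not part of any proposed standard.

```python
# Hypothetical rule-checker sketch. Encodes: "if a model was trained on
# RL-for-code-generation for > X FLOPs, it must not exceed Y performance
# on benchmark Z". All names and thresholds below are illustrative.

X_FLOPS = 1e24                 # hypothetical RL-for-code compute threshold
Y_SCORE = 0.80                 # hypothetical benchmark score ceiling
Z_BENCH = "code-gen-eval"      # hypothetical benchmark identifier

def violates_rule(run: dict) -> bool:
    """Return True if the reported run properties violate the compound rule."""
    rl_code_flops = run.get("rl_code_generation_flops", 0.0)
    bench_score = run.get("benchmark_scores", {}).get(Z_BENCH, 0.0)
    return rl_code_flops > X_FLOPS and bench_score > Y_SCORE

# Example with a fabricated report: exceeds both thresholds, so it violates.
report = {"rl_code_generation_flops": 3e24,
          "benchmark_scores": {"code-gen-eval": 0.85}}
assert violates_rule(report)
```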
If a Verifier can reliably detect the aforementioned training run properties, that would allow them to mandate several types of rules, such as:
* _Reporting requirements_ on large training runs, to make domestic regulators aware of new capabilities or as a confidence-building measure between companies/competitors [22].
* _Bans or approval-requirements_ for training runs considered overly likely to result in models that would threaten society or international stability. Approval could be conditioned on meeting additional requirements (e.g., willingness to comply with downstream regulations on model use, increased security to prevent model-theft, greater access for auditors).
* _Requiring that any trained model be modified to include post-hoc safety mitigations_ if the unmodified model could be expected to pose a severe accident risk absent those mitigations. Such safety assessments and mitigations (such as "Helpful and Harmless" finetuning [3]) may involve a prohibitive upfront cost that companies/governments would otherwise avoid. However, once they have been forced to make the investment and built a less accident-prone model, they may then prefer to use the safer version. Such rules allow all parties to coordinate spending more resources on safe and responsible innovation, without fearing that their competitors may secretly undercut them by rushing ahead without addressing negative externalities.
### Other Practical Requirements
There are several other considerations for such a monitoring system to be practical. Its cost should be limited, both by limiting changes to current hardware, and by minimizing the ongoing compliance costs to the Prover and enforcement costs to the Verifier. The system should also not pose a high risk of leaking the Prover's proprietary information, including model weights, training data, or hyperparameters. Most importantly, the system must be robust to cheating attempts, even by highly-resourced adversaries such as government hacking groups, who may be willing to employ sophisticated hardware, software, and even supply-chain attacks.
## 3 Solution Overview
In this section, we outline a high-level technical plan, illustrated in Figure 1, for Verifiers to monitor Provers' ML chips for evidence that a large rule-violating training run occurred. The framework revolves around chip inspections: the Verifier will inspect a sufficient random sample of the Prover's chips (Section 3.2), and confirm that none of these chips contributed to a rule-violating training run. For the Verifier to ascertain compliance from simply inspecting a chip, we will need interventions at three stages: on the chip, at the Prover's data-center, and in the supply chain.
Figure 1: Overview of the proposed monitoring framework.

* _On the chip_ (Section 4): When the Verifier gets access to a Prover's chip, they need to be able to confirm whether or not that chip was involved in a rule-violating training run. Given that rule violation depends only on the code that was run, our solution will necessitate that ML chips log infrequent traces of their activity, with logging done via hardware-backed firmware. We suggest that ML chips' firmware occasionally log a copy of the current state of the chip's high-bandwidth memory to long-term storage, and in particular, that it logs the shard of the NN's weights stored in memory. These _weight-snapshots_ can serve as a fingerprint of the NN training that took place on each chip.
* _At the data-center_ (Section 5): The Verifier needs a way to interpret the chips' logs, and determine whether or not they are evidence for a rule-violating training run. To that end, the Prover, who is training the model, will be required to store a transcript of the training process -- including training data, hyperparameters, and intermediate weight checkpoints -- for each model they train. Using protocols similar to "Proof-of-Learning" [24], these training transcripts may serve as provenance for the logged weight-snapshots, which are themselves the result of the same training process. In practice, for each (hash of a) weight-snapshot logged by a chip, the Prover provides the Verifier with (the hashed version of) the matching training transcript. Then the Prover and Verifier jointly and securely verify that, with high probability, retraining using the training transcript would have indeed resulted in the logged weight-snapshot (and that no other valid training transcript could have resulted in that snapshot). Finally, now that the Verifier knows an approximate training transcript of the training run that had been executed on that chip at that time, they can examine properties of the training transcript to confirm that the Prover has complied with the agreed-upon rules.
* _At the supply chain_ (Section 6): The Verifier needs to know which ML chips the Prover owns, so that the Verifier can randomly inspect a representative sample of those chips, to confirm their ownership and that their logging mechanisms are reporting correctly. Without this chip-ownership verification step, a Prover might covertly acquire a large quantity of chips and use them for training without ever notifying the Verifier. Keeping track of chip-ownership is viable because the cutting-edge data-center chip supply chain is highly concentrated, meaning that chips originate from a few initial chokepoints and can be monitored thereafter.
These steps, put together, enable a chain of guarantees.
* When any organization wants to train a large rule-violating ML model, they must do so using chips that the Verifier is aware of.
* These chips will occasionally log weight-snapshots. Each time a chip logs a weight-snapshot, the Prover must report the log to the Verifier, along with (hashes of) training transcripts that establish the provenance of that weight-snapshot.
* If the Prover tries to hide the logs from every chip involved in the training run, the Verifier will eventually find out, because it will sample and physically inspect at least one of those chips with high probability.
* Also, the hashed training transcripts that the Prover provides along with the logged weight-snapshot need to be authentic, reflecting the training run that was run on the chip. If they are not authentic, they will be caught by the Verifier's transcript verification procedure.
* If the training transcript _is_ authentic, and the run violated the rules, then the Verifier can tell, and will catch the Prover.
Thus, so long as the Prover complies with the Verifier's steps, the Verifier will detect the Prover's rule-violation with high probability. Just as in financial audits, a Prover's refusal to comply with the verification steps would itself represent an indication of guilt.
### Chip inspections
Ideally, chips could remotely report their logs, with on-chip firmware and remote attestation being sufficient to guarantee that those logs were truthfully reported (see Section 4). Such guarantees may be sufficient to deter most Provers from misbehavior. Unfortunately, these measures may not restrain well-resourced adversarial Provers: it is generally understood within the hardware security world that, given unlimited physical access to a chip (which the Prover certainly would have), a chip's attestation and signed-firmware-enforcement mechanisms can be undermined [56]. However, by physically inspecting chips after the fact, it may be possible to reliably detect such tampering. This is because most hardware attacks require physically damaging the chip in ways that would be nearly-impossible to hide from a Verifier's subsequent inspection, especially if ML chips are supplemented with tamper-evident packaging measures [64]. Luckily, rather than needing to inspect every chip, we only need to inspect enough to catch at least one chip from each large training run (Section 3.2).
A chip inspection can be done in one of two ways. The Verifier can send an inspector to the facility in which a sampled chip is stored, which may be cost-effective if the facility contains multiple chips selected for sampling. Alternatively,
the ML chips can be _transported to a neutral inspection site_, as these chips are generally housed in easily-removable servers. (Sensitive components like storage can be removed prior to transport.) The transporting option may be more attractive to security-conscious Provers who would prefer not to allow Verifier inspectors into their data-centers.
A chip inspection involves several steps. First, the Verifier confirms that the chip's serial number (often burned into chips and accessible both digitally and physically) matches the serial number they requested from the Prover. This confirms that this is the correct randomly-sampled chip, and also that the Prover still owns the chip and has not secretly resold it to an untrusted party. Second, the Verifier checks the on-chip logs, from which they extract the weight snapshot hashes that they will check against the Prover's reported training transcripts (Section 5). Finally, the Verifier checks that the chip's logging mechanisms, such as its secure firmware authentication mechanism (Section 4), have not been physically tampered with.
### How many ML chips does the Verifier need to inspect?
Sections 4, 5, and 6 will together guarantee that if the Verifier observes even a single ML chip that logged a weight snapshot hash during a rules-violating training run, then the Verifier can use that weight snapshot to prove that the Prover violated the rules. For the Verifier to make sure they'd observe at least one such snapshot from a large training run with high probability (or alternatively, catch a chip that'd been tampered with), they would need to inspect at least \(s\) of the Prover's \(C\) chips every \(T_{m}\) days. We now seek to calculate \(s\).
We define a large-scale training run as any training run using more than \(H\) FLOPs. We only seek to verify rules on training runs using \(>H\) FLOPs, meaning that any weight-snapshots from the first \(H\) FLOPs are not incriminating (as the Prover could have stopped the training run after that snapshot). Catching a chip that happened to save a weight-snapshot at time \(H+\epsilon\) is vanishingly unlikely for small \(\epsilon\), so we instead provide guarantees on detecting the case where the Prover's training run is executed for \(\geq 2H\) FLOPs, substantially exceeding \(H\).3 This means that the training run was executed for an additional second period of at least \(H\) FLOPs, during which any weight snapshot would capture evidence of an ML model that had already been trained for \(>H\) FLOPs. From hereon, when we describe detecting a snapshot of a "large-scale training run", we refer to this second stage of the training run. If a Verifier wanted to catch a Prover _before_ they had completed an \(H\)-FLOP rule-violating training run, they can simply shrink their target threshold to a fraction of \(H\), and sample chips on a rolling basis.
Footnote 3: We can always keep the detection threshold the same by cutting \(H\) in half. Also, the literature on neural scaling laws [25, 49, 62] suggests that model loss decreases logarithmically at best given additional compute, so failing to catch a less-than-2x violation may have limited consequences.
The Verifier, whether due to specific suspicions or routine due diligence, wants to audit a particular Prover's \(C\) chips. Let \(a\) be the FLOPs per day for those chips. We will conservatively assume that NN training can be perfectly parallelized and utilize all the FLOPs on each chip. Let \(f\) be the expected number of weight-snapshots saved by each chip per day (Section 4); we assume snapshots are Poisson distributed. The Verifier wants to catch a training run within at most \(T_{m}\) days of its concluding, though the training run itself may have lasted for a longer period4 \(T>T_{m}\). This implies that the Prover must have used a minimum of \(\frac{H}{aT}\) chips. One might expect that by increasing \(T\), the Prover could use fewer chips and therefore evade detection, but this turns out not to meaningfully improve the Prover's evasion probability (a \(<5\%\) increase), because each chip is sampled \(T/T_{m}\) times as often. (See Appendix C for details.) Thus, for simplicity, we assume the Verifier will choose \(s\) as though \(T\) is fixed.
Footnote 4: For simplicity, we assume \(T\) is divisible by \(T_{m}\).
To detect at least one chip with a snapshot from a rule-violating training run with probability \(p\), the Verifier needs to sample \(s\) chips each monitoring period \(T_{m}\), where:
\[s\geq\frac{\log\left(1-p\right)}{\log\left(1-\frac{H}{aTC}\left(1-e^{-fT_{m}}\right)\right)}\cdot\frac{T_{m}}{T} \tag{1}\]
We provide a derivation for Equation 1 in Appendix B.
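As a sanity check on Equation 1, the following Python sketch (our own, not from the paper's appendix) computes the required sample count numerically. Parameter values follow the Table 1 setup; exact outputs may differ slightly from Table 1 depending on rounding and approximation conventions.

```python
import math

def required_samples(p, H, a, T, C, f, T_m):
    """Chips to sample per monitoring period T_m, per Equation 1.

    p: detection probability, H: FLOP threshold, a: FLOPs/chip/day,
    T: run duration (days), C: Prover's chips, f: snapshots/chip/day,
    T_m: monitoring period (days).
    """
    frac = (H / (a * T * C)) * (1.0 - math.exp(-f * T_m))
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - frac) * (T_m / T))

# GPT-3-scale run, H100-like chips, Table 1 settings (C = 10^5 chips).
a = 1e15 * 24 * 3600                       # FLOPs per chip per day
s = required_samples(p=0.9, H=3.14e23, a=a, T=30, C=100_000, f=0.1, T_m=30)
print(s, "per period ->", 365 * s / 30, "samples/yr")  # ~2.4e4, as in Table 1
```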
In Table 1, we provide the total required samples-per-year \(365\cdot s/T_{m}\) to catch several past and projected training runs, for Provers who own different quantities of chips \(C\). As the table shows, the number of inspected chips remains approximately at a fixed fraction of the full stockpile, decreasing with training run size \(H\).
An additional difficulty is the possibility that multiple Provers collude and each contribute a small fraction of their chips to a shared training run, such that if only one of the Provers is audited, the fraction of their \(C\) chips they used for the training run is much smaller than \(\frac{H}{aTC}\). This is particularly relevant in the International Oversight setting, where a government Prover might force the participation of multiple firms. One solution to this is for an international Verifier to do background monitoring of _all_ Provers, in effect treating them all as one large Prover with access to the global
chip stockpile. Predictably, this would require a much greater number of samples, and is likely to only be feasible for regulating the largest-scale training runs (which can be caught with fewer samples). Yet, for training runs at the scale of \(10^{25}\) FLOPs or greater, monitoring could be done with a bureaucracy similar in size to the IAEA.5
Footnote 5: We want to estimate the number of inspectors needed to catch a Chinchilla-280B-sized training run, with \(10^{25}\) FLOPs, given several more years of hardware progress and global production. Given \(C=10^{7}\) worldwide chips (\(>5\times\) global stocks as of 2022), each of which can output \(a=3\cdot 10^{15}\cdot 86400\) FLOPs per day (\(3\times\) more FLOP/s than the NVIDIA H100), detecting a Chinchilla-280B-sized run within \(T=30\) days of its completion anywhere on earth with 90% probability would require roughly 232,000 worldwide chip samples per year. A single inspector might be expected to verify at least 1000 chips a year, especially if those chips are brought to a central location (see Section 3.1). This would require \(\approx 232\) inspectors, slightly smaller than the 280 active IAEA inspectors as of 2021. [39].
As individual chips become more powerful (\(a\) grows) and algorithmic efficiency increases (\(H\) shrinks), the number of required samples would need to grow in order to verify rules on a fixed-capability training run. Still, it may be that the training runs of greatest societal concern are those _near the frontier_ of scaling, and those are likely to remain detectable with few samples for the foreseeable future, even if Provers attempt to undermine them with physical tampering.
## 4 On the chip
In an NN training cluster, a large number of ML devices (each containing an ML chip, and connected to a "host" CPU chip) collaborate to simultaneously compute gradient updates. Large NNs have too many weights to fit in the high-bandwidth memory of a single ML device, so efficiently training a model requires splitting the model's weights into _weight shards_, and placing a shard on each chip [57]. Each such shard is a vector of millions or billions of values. Shards are updated after each gradient step so that new gradients can always be computed with respect to the latest weight values. This means that at every timestep, the chip contains an up-to-date result from the training run, which we will show in Section 5 can serve as a type of checksum for the training run.
In this section, we will describe how it may be possible to have ML chips log the values of these weight-shard vectors, for Verifiers to retroactively inspect. We will assume the presence of certain hardware features6 on the ML chip, though the details of the approach may need to be modified depending on the specifics of the device. For example, if the ML chip does not have the hardware required for remote firmware attestation, the chip may need to be permanently mated to a host CPU that does have such hardware, with the CPU then orchestrating the logging operations.
Footnote 6: These include standard components of a hardware security module: firmware verification and attestation, firmware anti-rollback protection, and the ability to sign, authenticate, and encrypt messages. It also includes the ability for firmware to raise interrupts, read device memory, and (to efficiently hash weight snapshots) execute code on the chip’s processor.
A chip's weight-shards can be captured by _snapshotting_ the chip's high-bandwidth memory, which contains the weights. These snapshots must be saved at random times, as otherwise the Prover can swap out the true weights for a fake set of weights right before the scheduled logging. With some small probability at every clock cycle, the firmware present on each of the Prover's chips should raise an interrupt and briefly freeze the chip's operations. The chip should then send a copy of the weight shard regions \(\omega\) to its host CPU to save in Prover-controlled long-term storage. The firmware also computes a hash \(h=\mathcal{H}(\omega)\) of the relevant memory regions, 7 using a publicly-agreed cryptographic hash function \(\mathcal{H}\).
Footnote 7: Such hashing can be done cheaply so long as the firmware can leverage the ML chip’s processing power.
| **Model** | **Training FLOPs \(H\)** | **H100-days \(H/a\)** | **H100s to train in 1 yr** | **Samples/yr if \(C=10^{3}\)** | **Samples/yr if \(C=10^{5}\)** | **Samples/yr if \(C=10^{7}\)** |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-3 [6] | 3.14e+23 | 3.64e+3 | 10 | 243 | 2.43e+4 | 2.43e+6 |
| Chinchilla [21] | 5.76e+23 | 6.67e+3 | 19 | 132 | 1.33e+4 | 1.33e+6 |
| PaLM [10] | 2.56e+24 | 2.96e+4 | 82 | 29 | 2.98e+3 | 2.99e+5 |
| Chinchilla-280B\({}^{\dagger}\) | 9.9e+24 | 1.15e+5 | 314 | 7 | 771 | 7.72e+4 |
| Chinchilla-1T\({}^{\dagger}\) | 1.27e+26 | 1.47e+6 | 4.03e+3 | — | 60 | 6.02e+3 |
| Chinchilla-10T\({}^{\dagger}\) | 1.3e+28 | 1.5e+8 | 4.12e+5 | — | — | 58 |

Table 1: Example numbers of required total _annual_ samples \(365\cdot s/T_{m}\) to catch a chip from every large-scale training run within \(T=30\) days, given \(a=10^{15}\cdot 24\cdot 3600\) (the daily 16-bit Tensor Core FLOPs of an NVIDIA H100 SXM GPU [42]), \(f=0.1\) weight snapshots per day (see Section 4), and the Verifier’s desired probability of catching a rule-violating training run \(p=0.9\). Models marked with \({}^{\dagger}\) are projections for future training requirements [21].
This hash \(h\) is then saved for the Verifier to retrieve later, either by having the firmware store it on-chip8 in nonvolatile memory writable only by the firmware, or by having the chip sign the hash with its private key and then send it to a Verifier-trusted central server (which must return an acknowledgement of receipt before the chip can resume operating). Later, a Verifier can use \(h\) to ask the Prover to provide them with the original weight shard \(\omega\), as this is the only input to \(\mathcal{H}\) whose hash matches \(\mathcal{H}(\omega)=h\).
Footnote 8: Keeping the hash in local NVRAM is useful if the ML chip is in a secure data-center and is restricted from communication with the outside world, such as via air-gapping.
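To illustrate the commitment property of the snapshot hash, here is a minimal sketch. Real firmware would hash device memory directly and sign the digest with a hardware-protected key; the NumPy serialization below is our own simplification.

```python
import hashlib
import numpy as np

def snapshot_hash(weight_shard: np.ndarray) -> str:
    """Firmware-side: hash the raw bytes of the in-memory weight shard."""
    return hashlib.sha256(weight_shard.tobytes()).hexdigest()

# The firmware logs h; the Prover keeps omega in long-term storage. Since
# only omega hashes to h, producing a matching preimage later binds the
# Prover to this exact shard.
omega = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)
h = snapshot_hash(omega)
assert snapshot_hash(omega) == h
```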
More frequent snapshots mean more performance overhead for the Prover, especially if all the other synchronized chips in the training run must hang until the single chip's snapshotting concludes. 9 The frequency of snapshots should therefore be set as low as possible while still ensuring that used chips reliably contain a relevant weight snapshot if they were employed in a large-scale training run (see Eq. (1)).
Footnote 9: A possible alternative would be to orchestrate less-frequent snapshots via a Verifier-trusted central server, though this would require a solution to the problem of the Prover swapping out the chip’s weights before propagating the Verifier’s signal.
Though the Prover may claim to have used firmware that logged snapshots as described above, the Verifier may not trust the Prover to have indeed used the appropriate logging firmware. For this reason, ML chips also need hardware mechanisms for enforcing that firmware be signed, and in turn, chipmakers will need to only sign firmware if it implements appropriate logging mechanisms. 10, 11, 12
Footnote 10: A similar firmware mechanism was used by NVIDIA to enforce cryptocurrency mining limits on its GPUs [17].
Footnote 11: Assuming that logging-free versions have been signed in the past, ML chips may need to include anti-rollback functionality [28]. The Prover might also delay updating the chip’s firmware until shortly before inspection. To avoid this, ML chips may need to be made to remotely attest at an earlier time that their firmware has been updated.
An obstacle to logging all weight-shards stored in ML device memory is that different ML training code will store a model's weights in different regions of memory. The chip's firmware must be able to determine which region of memory the weights are stored in. It may be possible to determine the weight-regions retroactively, by logging the regions of memory claimed to correspond to the weights, along with a copy of the compiled on-device code, which can then be matched to Prover-provided source code and its memory allocation pattern analyzed. 13, 14 As a more invasive alternative, the Prover could proactively certify that its chip-level ML training code stores the weights in a specific memory region, by having its chip-code verified and then signed by a Verifier-trusted server before it can be executed by the firmware. 15
Footnote 12: Additional challenges arise if the Verifier suspects that the chipmaker (who authors the firmware) and the Prover have colluded to create firmware with backdoors that disable logging. Increasing Verifiers' confidence in the firmware may be an important consideration when verifying the operations of powerful nation-state Provers.
Footnote 13: It may even be possible to modify standard libraries for generating chip-level ML training code (e.g., PyTorch-generated CUDA) to make their memory allocation processes more easily checkable by a subsequent Verifier.
Footnote 14: Revealing the Prover’s source code to the Verifier directly may be unacceptable, demanding a more complicated verification procedure like that described in Section 5.2.
Footnote 15: The iOS App Store uses a similar method to ensure Apple devices only run signed programs [1].
A more challenging problem is that "ML chips" are general-purpose hardware accelerators, and thus are frequently used for other high-performance computing workloads like climate modeling. There is no straightforward way to determine whether an ML chip is running a neural network training job (and therefore should log weight snapshots), or an unrelated type of job exempt from Verifier oversight. 16 Relatedly, it would also be desirable to be able to exempt a fraction of chips from monitoring, for use in education or small-scale R&D where the overhead of compliance would be prohibitive. 17 Until we find methods for reliably distinguishing large-scale ML training, some fraction of ML chips must be exempted from using weight-logging firmware. The larger system's guarantee would then be that _for those of the Provers' chips that the Prover allows the Verifier to monitor_, no rule-violating training occurred. The Verifier may then use their leverage to incentivize Provers into opting-in a larger and larger fraction of their chips over time.
Footnote 17: This might be addressable by having these ML chips’ interconnect topology restricted to prevent their use in large-scale training. However, methods for Verifiers to retroactively confirm the topology that ML chips were configured in at a data-center are beyond the scope of this work.
## 5 At the data-center
From Sections 3.2 and 4, we know how a Verifier can get access to at least one weight shard \(\omega\) snapshotted during a large-scale training run. In this section, we will show how the Verifier can use this weight shard to determine whether the associated training run followed the agreed-upon rules.
On its own, knowing \(\omega\) is useless to the Verifier, since the Verifier does not know the values of the other weight-shards comprising the NN, or even how many other weight shards there are. Yet \(\omega\) holds the key to the training run's history. Any given weight-shard vector is incredibly unlikely to be yielded by the training process (there are \(\gg 2^{10^{6}}\) possibilities even if we round to 1-bit precision). But this particular \(\omega\) is the nearly-deterministic result of the training run's hyperparameters \(\mathbb{M}\) (including its random seed) and sequence of training data \(\mathbb{D}\). We refer to this information, along with information yielded throughout the training run \(\mathbb{W}\) (such as intermediate weight snapshots), as the model's _training transcript_ \(\mathbb{T}=\{\mathbb{M},\mathbb{D},\mathbb{W}\}\). It is plausible that it would be very difficult for the Prover to find a second "legitimate" training transcript \(\mathbb{T}^{\prime}=\{\mathbb{M}^{\prime},\mathbb{D}^{\prime},\mathbb{W}^{\prime}\}\neq\mathbb{T}\) that would exactly yield a full set of weights that included \(\omega\) as a contiguous weight-shard. It is also plausible that, if the Prover tried to lie and provide \(\mathbb{W}^{\prime}\), \(\mathbb{D}^{\prime}\), and \(\mathbb{M}^{\prime}\) that would _not_ result in \(\omega\) but which they _claimed_ would result in \(\omega\), then the Verifier might find a suite of tests that would detect such a faked transcript reliably. If the Verifier could reliably detect such fakes, then by asking the Prover to provide a training transcript that would yield each logged snapshot \(\omega\), the Verifier could force the Prover to reveal the true data \(\mathbb{D}\) and hyperparameters \(\mathbb{M}\) used in every large-scale training run (along with the full weights of the resulting model, not just \(\omega\)). This would be sufficient information to check that the training run complied with any of the training rule types listed in Section 2.1.
Unfortunately, having the Verifier simply re-run the full training process to verify a transcript's correctness is unworkable for several reasons. First, the Prover would likely not be willing to reveal their training data, model weights, and hyperparameters, so the Verifier must do any verification without direct access to the data. (We address this in Section 5.2.) Second, the compute cost to fully re-run the training transcript would be massive, as large as the original training run itself. Third, the training run would likely not be perfectly reproducible: due to hardware-level noise, even two repetitions of the same sequence of training updates would gradually diverge. Fourth, the Prover _may_ be able to construct a second "spoof" training transcript that yields an exact match for \(\omega\) but differs from the original training run that yielded \(\omega\) in the first place.18
Footnote 18: As a trivial example, the Prover could claim that \(\omega\) was simply the value of the random initialization, and no training had happened at the time of the snapshot.
Thankfully, a close variant of this problem has already been studied in the literature, known as "Proof of Learning" [24]. The goal of a Proof-of-Learning (PoL) schema is to establish proof of ownership over a model \(W_{t}\) (e.g., to corroborate IP claims) by having the model-trainer save the training transcript \(\mathbb{T}\) (including hyperparameters \(\mathbb{M}\), data sequence \(\mathbb{D}\), and a series of intermediate full-model weight checkpoints19 \(\mathbb{W}=\{W_{0},W_{k},W_{2k}\dots\}\)) which only the original model trainer would know. Jia et al. [24] propose a verification procedure that makes it difficult for any third party to construct a spoofed transcript \(\mathbb{T}^{\prime}\), if they only have access to \(W_{t}\) and the unordered dataset.
Footnote 19: We use “weight checkpoints” as shorthand, but if using an optimizer like Adam [26], the optimizer state should also be included.
The solution of [24] is as follows: once a Prover reports a training transcript \(\mathbb{T}\), the Verifier checks that the initialization appears random, and then chooses a number of pairs of adjacent weight snapshots that are \(k\) gradient steps apart \((W_{i},W_{i+k}),\dots,(W_{j},W_{j+k})\). Then, rather than re-running all of training, the Verifier only reruns the training of these specific segments, starting at \(W_{i}\) and progressing through the appropriate data batches \(D_{i}\dots D_{i+k}\) to yield a \(W^{\prime}_{i+k}\). The Verifier then confirms that the resulting checkpoint is approximately correct: \(\|W_{i+k}-W^{\prime}_{i+k}\|_{2}<\epsilon\) for some small \(\epsilon\) that depends on the reproduction noise and the number of steps \(k\). The original paper proposed a heuristic for choosing which segments to verify, though subsequent papers have demonstrated spoofing attacks that evade the original verification approach [70; 15]. Thus far, the literature on PoL has been heuristic-based, and discovering PoL protocols with higher degrees of reliability is the subject of ongoing work [15].
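The segment-checking step of [24] can be sketched as follows. This is a toy rendering, not the authors' implementation: `fake_gradient` stands in for a real loss gradient, and the tolerance `eps` is a free protocol parameter.

```python
import numpy as np

def fake_gradient(W, batch):
    # Stand-in for a real loss gradient, only to make the sketch executable.
    return 0.01 * W + 0.001 * batch.mean()

def retrain_segment(W_i, data_batches, hyperparams):
    """Re-execute the k gradient steps of one transcript segment from W_i.
    A real Verifier would run the Prover's actual training code here."""
    W = W_i.copy()
    for batch in data_batches:
        W = W - hyperparams["lr"] * fake_gradient(W, batch)
    return W

def verify_segment(W_i, W_i_plus_k, data_batches, hyperparams, eps):
    """Accept iff retraining lands within eps of the reported checkpoint:
    ||W_{i+k} - W'_{i+k}||_2 < eps."""
    W_prime = retrain_segment(W_i, data_batches, hyperparams)
    return float(np.linalg.norm(W_i_plus_k - W_prime)) < eps

# Toy run: the "reported" checkpoint was produced by the same procedure,
# so verification succeeds up to numerical noise.
rng = np.random.default_rng(1)
W0 = rng.standard_normal(10)
batches = [rng.standard_normal(4) for _ in range(5)]
hp = {"lr": 0.1}
Wk = retrain_segment(W0, batches, hp)
assert verify_segment(W0, Wk, batches, hp, eps=1e-8)
```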
### Proof-of-Training-Transcript
Our requirement, which we will define as "Proof-of-Training-Transcript" (PoTT), differs from Proof-of-Learning because our Prover knows a true training transcript that leads to the final weight-snapshot (since they executed the original training run), and the Verifier wants to verify that the Prover has reported the _original_ transcript that yielded \(\omega\) (or at least one very close to the original). Further, unlike PoL, we allow that an adversarial Prover might be willing to spend _more than the original training compute_\(H\) in order to construct a spoof (e.g., an adversary may be willing to pay double the cost in order to execute a rule-violating training run without being detected). Thus, we must hope to push the cost to construct a spoof to be as high as possible, so as to make evasion impractical.
**Definition 1**.: _A PoTT protocol \((R,V)\) is defined by a Prover training and reporting protocol \(R\) and a Verifier verification protocol \(V\). A target weight vector \(W_{t}\) is produced as the true result of a training run which complies with a training and reporting protocol \(R\), and which can be summarized via a training transcript \(\mathbb{T}\in\mathcal{T}\), where \(\mathbb{T}=\{\mathbb{M},\mathbb{D},\mathbb{W}\}\), \(\mathbb{M}\) is the metadata required to reproduce the training run (including hyperparameters, random seed, and loss function), \(\mathbb{D}\) is an ordered sequence of sets of training data batches, and \(\mathbb{W}\) is a collection of intermediate weight checkpoints resulting from the training procedure. The verification protocol \(V\) should accept any such true training transcript with high probability, \(\Pr[V(\mathbb{T},W_{t})=\text{accept}]>1-\delta_{1}\) for some small \(\delta_{1}\). A "spoofed" training transcript \(\mathbb{T}^{\prime}=\{\mathbb{M}^{\prime},\mathbb{D}^{\prime},\mathbb{W}^{ \prime}\}\) is a transcript, which may not correspond to any valid training run, and which is substantially different from the original transcript \(\mathbb{T}\) in its data or hyperparameters: \(d_{1}(\mathbb{D},\mathbb{D}^{\prime})\geq\delta_{3}\) for some edit distance \(d_{1}\) quantifying the number of data point insertions/deletions, and/or \(d_{2}(\mathbb{M},\mathbb{M}^{\prime})\geq\delta_{4}\) for some hyperparameter distance \(d_{2}\). A reporting/verification protocol pair \((R,V)\) is \(J\)-efficient and \(F\)-hard if \(V\) runs in at most \(J\) time, and there does not exist any spoof-generation algorithm \(A\in\mathcal{A}:\mathcal{T}\rightarrow\mathcal{T}\) such that \(\Pr[V(A(\mathbb{T}),W_{t})=\text{accept}]>1-\delta_{2}\) where \(A\) runs in less than \(F\) time._
Colloquially, we want a Prover training and reporting protocol and Verifier verification protocol such that the Verifier only accepts _original_ training transcripts that would result in a final weight checkpoint which contains a shard matching our on-chip weight-shard snapshot \(\omega\). We leave the problem of finding provably secure, efficient methods for PoTT as an important avenue for future work, but we discuss a few promising directions below.
PoTT appears to be strictly harder than PoL, as it requires robustness to a better-resourced adversary that has additional information (i.e., they know the true transcript \(\mathbb{T}\)) and has more compute-time to construct a spoof. Given that existing PoL schemes are still heuristic-based and not yet provably secure, there may be a long way to go until we have PoTT methods that are both efficient and hard to spoof. Still, one dynamic favoring the Verifier is that the Prover must _commit_ to a training transcript without knowing the Verifier's verification strategies. Thus, Verifiers can amass secret collections of verification heuristics, much as the IAEA does not disclose all its methods for detecting nuclear activity. Even if PoTTs are only ever heuristic-based, the presence of this dynamic may dissuade Provers from taking the risk of being detected by an unexpected test.
Defining conditions on the types of legitimate training runs is another useful source of leverage. For example, one Prover cheating strategy could be for the Prover to report one long training run as many shorter training runs, each claimed to be "initialized" where the previous training run ended. A simple prevention would be for the training-and-reporting protocol \(R\) to require the Prover to initialize every training run's weights via a known pseudorandom generator and a short seed. This means that the randomness of the initial weights can later be confirmed by the Verifier.
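A minimal sketch of this seeded-initialization check, assuming the parties have agreed on a specific pseudorandom generator (the NumPy generator here is only a stand-in):

```python
import numpy as np

def seeded_init(seed: int, shape) -> np.ndarray:
    """Agreed-upon pseudorandom initializer, reproducible from a short seed."""
    return np.random.default_rng(seed).standard_normal(shape)

# The Prover reports (seed, W_0); the Verifier recomputes the initialization
# and rejects any transcript whose claimed W_0 is not the seed's output,
# blocking warm starts disguised as fresh initializations.
seed, shape = 42, (1000,)
W0_reported = seeded_init(seed, shape)
assert np.array_equal(seeded_init(seed, shape), W0_reported)
```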
Another promising strategy may be to require the Prover to _pre-commit_ to portions of its training transcript (e.g., the hyperparameters \(\mathbb{M}\) and training batches \(\mathbb{D}\)) at the start of training. This could be done by having the ML chip firmware log a hash of this precommitment, which would prove that the precommitment preceded the chip's snapshot \(\omega\). At the time of precommitment, the Prover does not know what trajectory the training run will follow or at what time it will be snapshotted, as the training has not yet been performed. The Prover would be unable to construct a spoofed training transcript that would end at \(\omega\) and precommit to it, because \(\omega\) isn't known yet. However, it is not obvious how to extend this approach to online learning settings like online RL, where the training data cannot be known ahead of time.
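The precommitment idea can likewise be sketched in a few lines; the JSON serialization and field layout here are our own illustrative choices, not a proposed format.

```python
import hashlib
import json

def precommitment(hyperparams: dict, data_batch_hashes: list) -> str:
    """Hash of the planned run (M and D), logged by chip firmware at the
    start of training, so it provably predates any later snapshot omega."""
    blob = json.dumps({"M": hyperparams, "D": data_batch_hashes},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# At training start the firmware logs c; when a snapshot is later audited,
# the transcript the Prover reveals must hash back to the same c.
c = precommitment({"lr": 3e-4, "seed": 42},
                  ["hash-of-batch-0", "hash-of-batch-1"])
print(c)
```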
A final complication of our setting derives from the fact that the Verifier observes only a shard of the weights \(\omega\), and not the full weight vector \(W_{t}\). It could be easier to construct a spoofed training transcript for some \(\hat{W}\) which contains a shard matching \(\omega\), but which differs from the true original weights \(W_{t}\neq\hat{W}\) on the portion of the weight vector outside the shard. We briefly describe an intuition for why this is likely to be as hard as general PoTT-spoofing. Assuming \(\omega\) must contain weights from more than a single linear layer, any Prover must at minimum construct a valid PoTT for this smaller NN represented by \(\omega\), except without any original training transcript to start from (making it similarly hard to the original Proof of Learning problem). Alternatively, if the Prover tries to reuse the original training transcript, it will require them to synthesize fake data and labels to exactly reproduce the true inputs/gradients, and these must correspond to a reasonable loss function that is actually decreasing over time. If the Prover alternatively attempts to fake the surrounding layers' weights, then the weights in these surrounding layers must also have their own valid training trajectories as part of the training transcript _and_ must generate the appropriate inputs/gradients to yield \(\omega\), simultaneously.
### Real-world implementation
As the literature uncovers new methods for verifying training transcripts, the Verifier can run these protocols to check that, for each of the logged weight snapshots on the Prover's chips that the Verifier inspects, the Prover has provided a
legitimate corresponding training transcript.20 The Verifier can then examine the data, hyperparameters, and resulting models as reported by the training transcript, and thereby detect whether the Prover complied with the training rules.
Footnote 20: Note that this requires the Prover to save and report training transcripts for all training runs corresponding to chips sampled by the Verifier, not just the largest-scale ones. This is because, without a matching training transcript, it’s impossible for the Verifier to determine whether a given weight-shard was part of a large-scale training run or a small-scale one. Alternate methods for proving that a chip was only involved in a short training run are an important direction for future work.
A key design constraint is that the Prover does not disclose the raw transcripts (which likely contain confidential data that the Prover would prefer the Verifier did not see). We propose a method for the Verifier to verify the Prover's previously-reported transcript's correctness without ever observing the raw data. First, the Prover and Verifier utilize a "trusted cluster": a set of chips that both parties trust, which is air-gapped to prevent the Prover's sensitive data from leaking, and which is equipped to execute small amounts of ML inference and training (so as to segment-wise approximately reproduce the Prover's original training process). 21 Second, instead of reporting the full transcript, the Prover initially only provides the Verifier with hashes of the training transcript \(\mathbb{T}\). Then, at a later verification stage, the Prover supplies the true \(\mathbb{M},\mathbb{D}\), and \(\mathbb{W}\) within the trusted cluster (which must match the reported hashes), executes the verification protocol, and reports whether both steps succeeded. Finally, the Verifier can use this same trusted cluster to run programs checking that the Prover's training transcript complies with all agreed ML training rules (e.g., measuring the performance of the final model weights on agreed benchmarks). We provide more details on a possible procedure in Appendix D.
Footnote 21: Maintaining such compatible training clusters may prove quite challenging. One desirable direction for future work is in verification methods by which the Verifier does not need to directly reexecute model training. For example, it may be possible for the Verifier to interactively query the Prover for additional information on segments of the training run beyond what was included in the training transcript. There may be queries that have valid answers _only if_ the original training transcript was authentic (e.g., a series of weight sub-checkpoints between two checkpoints, each with progressively lower loss), and the Prover could dynamically recompute answers to these queries using their own data-center. While some properties of the verification would still need to be confirmed using a neutral cluster to maintain the confidentiality of the query-responses, such clusters may not need to be equipped for large-scale training, and thus be much easier to maintain.
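The hash-then-reveal flow above can be summarized with the following sketch, in which `run_transcript_checks` is a placeholder for the full transcript verification and rule checks, and the data layout is our own illustration.

```python
import hashlib
import pickle

def transcript_hash(transcript: dict) -> str:
    """Hash of the full training transcript T = {M, D, W}."""
    return hashlib.sha256(pickle.dumps(transcript)).hexdigest()

def run_transcript_checks(transcript: dict) -> bool:
    """Placeholder for PoTT verification plus the agreed rule checks."""
    return True

def cluster_verify(raw_transcript: dict, reported_hash: str) -> bool:
    # Reject if the transcript revealed inside the cluster differs from
    # the hash the Prover committed to earlier, outside the cluster.
    if transcript_hash(raw_transcript) != reported_hash:
        return False
    return run_transcript_checks(raw_transcript)

# Toy flow: commit outside the air-gapped cluster, reveal and verify inside.
T = {"M": {"lr": 3e-4}, "D": ["batch0", "batch1"], "W": ["ckpt0", "ckpt1"]}
assert cluster_verify(T, transcript_hash(T))
```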
When evaluating which training transcript verification protocols to pursue, Verifiers should prioritize the verification strategies that achieve the highest detection probability at the lowest cost. Beyond the upfront costs of building trusted clusters or modifying chip hardware, the system has three ongoing operating costs: the efficiency loss from pausing to save weight checkpoints and the weight-shard snapshots (as described in Section 4), the storage costs for maintaining training transcripts (and in particular the weight-checkpoints, each of which may require terabytes) until the Verifier inspects them, and the compute costs to execute the training-transcript verification protocols on the trusted clusters. These costs seem likely to scale linearly with the total compute used by the Prover, and will ultimately depend on the efficiency with which training transcripts can be verified. Even though governments could in principle pressure Provers into paying the costs of compliance, a 1% overhead for each dollar spent on training compute would be much easier for Provers to comply with than a 20% overhead. Indeed, for International Verifiers, the history of arms control suggests that maximally-stringent verification measures may have limited utility, as they may reduce the likelihood of compliance [46]. One important avenue for future work is finding cheaper, lower-fidelity alternatives to NN-retraining-based verification, which need only establish limited properties of the weight-shard's corresponding training run, and which could prompt more expensive verification methods if needed.
## 6 At the supply chain
We need supply-chain monitoring to accomplish two goals: to construct a "chip directory" of who owns each ML chip, for the purposes of sampling; and to ensure that each chip has the hardware features needed to provably log its training activity as in Section 4. Unlike the chip and data-center interventions (Sections 4 and 5), monitoring the international ML chip supply chain cannot be done by a single Verifier. Instead, an international consortium of governments may need to implement these interventions on behalf of other Verifiers (much as the IAEA runs inspections on behalf of member states).
### Creating a chip-owner directory
For a Verifier to be confident that a Prover is reporting the activity of all the Prover's ML chips, they need to know both which ML chips the Prover owns, and that there are no secret stockpiles of chips beyond the Verifier's knowledge. Such ownership monitoring would represent a natural extension of existing supply chain management practices, such as those used to enforce U.S. export controls on ML chips. It may be relatively straightforward to reliably determine the total number of cutting-edge ML chips produced worldwide, by monitoring the production lines at high-end chip fabrication facilities. The modern high-end chip fabrication supply chain is extremely concentrated, and as of 2023 there are fewer than two dozen facilities worldwide capable of producing chips at a node size of 14nm or lower [32], the size used for
efficient ML training chips. As [4] shows, the high-end chip production process may be monitorable using a similar approach to the oversight of nuclear fuel production (e.g., continuous video monitoring of key machines).
As long as each country's new fab can be detected by other countries (e.g., by monitoring the supply chain of lithography equipment), an international monitoring consortium can require the implementation of verification measures at each fab, to provide assurances for all Verifiers. After processing, each wafer produced at a fab is then sent onward for dicing and packaging. Since the facilities required for postprocessing wafers are less concentrated, it is important for the wafers (and later the dies) to be securely and verifiably transported at each step. If these chip precursors ever go missing, responsibility for the violation would lie with the most recent holder. This chain of custody continues until the chip reaches its final owner, at which point the chip's unique ID is associated with that owner in a _chip owner directory_ trusted by all potential Verifiers and Provers. This ownership directory must thereafter be kept up-to-date, e.g., when chips are resold or damaged.22 The continued accuracy of this registry can be validated as part of the same random sampling procedure discussed in Section 3.1. As a second layer of assurance, chips could also be discovered by inspecting datacenters, if those datacenters are detectable via other signals [4].
Footnote 22: In the rare scenario where a large number of chips owned by the same Prover are lost or destroyed beyond recognition, the Verifier or international consortium can launch an investigation to determine whether the Prover is lying to evade oversight.
Given the high prices and large power and cooling requirements of these ML chips, they are largely purchased by data-center operators. These organizations are well-suited to tracking and reporting transfers of their ML chips, and to complying with occasional inspections. Though some data-center ML chip purchases are made by individuals, so long as these remain a small fraction of all chips they may be exempted from the overall monitoring framework.
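To make the chain-of-custody bookkeeping of the chip-owner directory concrete, here is a minimal sketch; the data model and function names are our own illustration, not a proposed standard.

```python
# A trusted registry maps each chip ID to its current holder; every
# verified transfer updates the record, so a missing chip implicates
# its most recent registered holder.
directory = {}  # chip_id -> current holder

def register_chip(chip_id: str, fab: str) -> None:
    directory[chip_id] = fab  # custody starts at the fabrication plant

def transfer_chip(chip_id: str, sender: str, receiver: str) -> None:
    if directory.get(chip_id) != sender:
        raise ValueError(f"{sender} is not the registered holder of {chip_id}")
    directory[chip_id] = receiver

register_chip("chip-0001", "fab-A")
transfer_chip("chip-0001", "fab-A", "packager-B")
transfer_chip("chip-0001", "packager-B", "datacenter-C")
assert directory["chip-0001"] == "datacenter-C"
```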
### Trusting secure hardware
We require in Section 4 that each ML chip produced by the semiconductor supply chain is able to provably log traces of its usage. The second goal of supply-chain monitoring is to provide Verifiers with high confidence in the reliability of these on-chip activity-logging mechanisms. This requires ML chip designers to integrate security features into their hardware and firmware designs, especially in ways that make them externally-legible to Verifiers that may not trust the chip-designer. Key priorities include the immutability of the chip's burned-in ID, the integrity of the hardware-backed mechanism for only booting signed firmware, and the resilience of the on-chip hardware-roots-of-trust to side-channel attacks that could steal the chip's encryption keys [27; 9] and thus fake its logs.
A concern for Verifiers checking the conduct of powerful Provers (e.g., states verifying each others' ML training runs) is the possibility of supply-chain attacks [48], which could enable a Prover to undetectably disable or spoof the ML chips' logging functionality. Fully mitigating the threat of supply-chain attacks is a major global issue and beyond the scope of this paper. However, one particularly useful step for building trust in ML chip mechanisms' integrity would be for ML chip designers to use open-source Hardware-Roots-of-Trust. This transparency means that chips' designs can be validated by untrusting Verifiers to confirm there are no backdoors. For example, Google's Project OpenTitan has produced such an HRoT [31], and many major ML chip designers (Google, Microsoft, NVIDIA, and AMD) have agreed to integrate the Open Compute Project's "Caliptra" Root of Trust [45].
## 7 Discussion
The described additions to the production and operation of ML training chips, if successfully implemented, would enable untrusting parties (like a government and its domestic companies, or the US and Chinese governments) to verify rules and commitments on advanced ML development using these chips. There are many useful measures that governments and companies could begin taking today to enable future implementation of such a framework if it proved necessary, and that would simultaneously further businesses' and regulators' other objectives.
* Chipmakers can include improved hardware security features in their data-center ML chips, as many of these are already hardware security best practices (and may already be present in some ML chips [42]). These features are likely to be independently in-demand as the costs of model training increase, and the risk of model theft becomes a major consideration for companies or governments debating whether to train an expensive model that might simply be stolen.
* Similarly, many of the security measures required for this system (firmware and code attestation, encryption/decryption modules, verification of produced models without disclosing training code) would also be useful for "cloud ML training providers", who wish to prove to security-conscious clients that the clients' data did not leave the chips, and that the clients' models did not have backdoors inserted by a third party [34]. Procurement programs like the US's FedRAMP could encourage such standards for government contracts, and
thereby incentivize cloud providers and chipmakers to build out technical infrastructure that could later be repurposed for oversight.
* Individual companies and governments can publicly commit to rules on ML development that they would like to abide by, if only they could have confidence that their competitors would follow suit.
* Responsible companies can log and publicly disclose (hashed) training transcripts for their large training runs, and assist other companies in verifying these transcripts using simple heuristics. This would not prove the companies _hadn't also_ trained undisclosed models, but the process would prove technical feasibility and create momentum around an industry standard for (secure) training run disclosure.
* Companies and governments can build trusted neutral clusters of the sort described in Section 5.2. These would be useful for many other regulatory priorities, such as enabling third-party auditors to analyze companies' models without leaking the model weights. 23

Footnote 23: For similar reasons, the US Census Bureau operates secured Federal Statistical Research Data Centers to securely provide researchers access to sensitive data [8].
* Governments can improve tracking of ML chip flows via supply-chain monitoring, to identify end-users who own significant quantities of ML chips. In the West, such supply-chain oversight is already likely to be a necessary measure for enforcing US-allied export controls.
* Responsible companies can work with nonprofits and government bodies to practice the physical inspection of ML chips in datacenters. This could help stakeholders create best practices for inspections and gain experience implementing them, while improving estimates of implementation costs.
* Researchers can investigate more efficient and robust methods for detecting spoofed training transcripts, which may also be useful for proving that no backdoors were inserted into ML models.
For the hardware interventions, the sooner such measures are put in place, the more ML chips they can apply to, and the more useful any verification framework will be. Starting on these measures early will also allow more cycles to catch any security vulnerabilities in the software and hardware, which often require multiple iterations to get right.
### Politics of Implementation
Given the substantial complexity and cost of a monitoring and verification regime for large-scale ML training runs, it will only become a reality if it benefits the key stakeholders required to implement it. In this last section, we discuss the benefits of this proposal among each of the required stakeholders.
* _The global public_: Ordinary citizens should worry about the concentration of power associated with private companies possessing large quantities of ML chips, without any meaningful oversight by the public. Training run monitoring is a way to make powerful companies' advanced ML development accountable to the public, and not just the free market. Most importantly, ordinary people benefit from the security and stability enabled by laws and agreements that limit the most harmful applications of large-scale ML systems.
* _Chipmakers and cloud providers_: Absent mechanisms for verifying whether ML chips are used for rule-violating training runs, governments may increasingly resort to banning the sale of chips (or even cloud-computing access to those chips) to untrusted actors [5]. By enabling provable monitoring of large-scale ML training runs, chipmakers may reverse this trend and may even be able to resume sales to affected markets.
* _AI companies_: Responsible AI companies may themselves prefer not to develop a particular capability into their products, but may feel they have no choice due to competitive pressure exerted by less-scrupulous rivals. Verifying training runs would allow responsible AI companies to be recognized for the limits they impose on themselves, and would facilitate industry-wide enforcement of best practices on responsible ML development.
* _Governments and militaries_: Governments' and militaries' overarching objective is to ensure the security and prosperity of their country. The inability to coordinate with rivals on limits to the development of highly-capable ML systems is a threat to their own national security. There would be massive benefit to a system that enabled (even a subset of) countries to verify each others' adherence with ML training agreements, and thus to maintain an equilibrium of responsible ML development.
Even if only a subset of responsible companies and governments comply with the framework, they still benefit from verifiably demonstrating their compliance with self-imposed rules by increasing their rivals' and allies' confidence in their behavior [22] (and thus reducing their rivals' uncertainty and incentive towards recklessness).
Finally, we highlight that the discussed verification framework requires continuous participation and consent by the Prover. This makes the framework fundamentally non-coercive, and respects national sovereignty much as nuclear
nonproliferation and arms control agreements respect national sovereignty. Indeed, the ongoing success of such a system relies on all parties' self-interest in continuing to live in a world where no one - neither they, nor their rivals - violates agreed guardrails on advanced ML development.
## Acknowledgements
The author would like to thank Tim Fist, Miles Brundage, William Moses, Gabriel Kaptchuk, Cynthia Dwork, Lennart Heim, Shahar Avin, Mauricio Baker, Jacob Austin, Lucy Lim, Andy Jones, Cullen O'Keefe, Helen Toner, Julian Hazell, Richard Ngo, Jade Leung, Jess Whittlestone, Ariel Procaccia, Jordan Schneider, and Rachel Cummings Shavit for their helpful feedback and advice in the writing of this work.
|
2302.02941 | On Over-Squashing in Message Passing Neural Networks: The Impact of
Width, Depth, and Topology | Message Passing Neural Networks (MPNNs) are instances of Graph Neural
Networks that leverage the graph to send messages over the edges. This
inductive bias leads to a phenomenon known as over-squashing, where a node
feature is insensitive to information contained at distant nodes. Despite
recent methods introduced to mitigate this issue, an understanding of the
causes for over-squashing and of possible solutions are lacking. In this
theoretical work, we prove that: (i) Neural network width can mitigate
over-squashing, but at the cost of making the whole network more sensitive;
(ii) Conversely, depth cannot help mitigate over-squashing: increasing the
number of layers leads to over-squashing being dominated by vanishing
gradients; (iii) The graph topology plays the greatest role, since
over-squashing occurs between nodes at high commute (access) time. Our analysis
provides a unified framework to study different recent methods introduced to
cope with over-squashing and serves as a justification for a class of methods
that fall under graph rewiring. | Francesco Di Giovanni, Lorenzo Giusti, Federico Barbero, Giulia Luise, Pietro Lio', Michael Bronstein | 2023-02-06T17:16:42Z | http://arxiv.org/abs/2302.02941v3 | # On Over-Squashing in Message Passing Neural Networks:
###### Abstract
Message Passing Neural Networks (MPNNs) are instances of Graph Neural Networks that leverage the graph to send messages over the edges. This inductive bias leads to a phenomenon known as over-squashing, where a node feature is insensitive to information contained at distant nodes. Despite recent methods introduced to mitigate this issue, an understanding of the causes for over-squashing and of possible solutions are lacking. In this theoretical work, we prove that: (i) Neural network width can mitigate over-squashing, but at the cost of making the whole network more sensitive; (ii) Conversely, depth cannot help mitigate over-squashing: increasing the number of layers leads to over-squashing being dominated by vanishing gradients; (iii) The graph topology plays the greatest role, since over-squashing occurs between nodes at high commute time. Our analysis provides a unified framework to study different recent methods introduced to cope with over-squashing and serves as a justification for a class of methods that fall under 'graph rewiring'.
Machine Learning, Deep Learning, Neural Networks, Graph Neural Networks
## 1 Introduction
Learning on graphs with Graph Neural Networks (GNNs) (Sperduti, 1993; Goller & Kuchler, 1996; Gori et al., 2005; Scarselli et al., 2008; Bruna et al., 2014; Defferrard et al., 2016) has become an increasingly flourishing area of machine learning. Typically, GNNs operate in the _message-passing paradigm_ by exchanging information between nearby nodes (Gilmer et al., 2017), giving rise to the class of Message-Passing Neural Networks (MPNNs). While message-passing has proven to be a useful inductive bias, it has also been shown that the paradigm has some fundamental flaws, from expressivity (Xu et al., 2019; Morris et al., 2019), to over-smoothing (Nt & Maehara, 2019; Cai & Wang, 2020; Bodnar et al., 2022; Rusch et al., 2022; Di Giovanni et al., 2022; Zhao et al., 2022) and over-squashing. The first two limitations have been thoroughly investigated; however, _less is known about over-squashing_.
Alon & Yahav (2021) described over-squashing as an issue emerging when MPNNs propagate messages across distant nodes, with the exponential expansion of the receptive field of a node leading to many messages being 'squashed' into fixed-size vectors. Topping et al. (2022) formally justified this phenomenon via a sensitivity analysis on the Jacobian of node features and, partly, linked it to the existence of edges with high negative curvature. However, some important **questions are left open** from the analysis in Topping et al. (2022): (i) What is the impact of _width_ in mitigating over-squashing? (ii) Can over-squashing be avoided by sufficiently _deep_ models? (iii) How does over-squashing relate to the graph-spectrum and the underlying _topology_ beyond curvature bounds that only apply to 2-hop propagation? The last point is particularly relevant due to recent works trying to combat over-squashing via methods that depend on the graph spectrum (Arnaiz-Rodriguez et al., 2022; Deac et al., 2022; Karhadkar et al., 2022). However, it is yet to be clarified if and why these works alleviate over-squashing.
In this work, we aim to address all the questions that are left open in Topping et al. (2022) to provide a better theoretical understanding on the causes of over-squashing as well as on what can and cannot fix it.
Contributions and outline. An MPNN generally consists of two main parts: a choice of architecture, and an underlying graph over which it operates. In this work, we investigate how these factors participate in the over-squashing phenomenon. We focus on the width and depth of the MPNN, as well as on the graph-topology.
* In Section 3, we prove that the _width_ can mitigate over-squashing (Theorem 3.2), albeit at the potential cost of generalization. We also verify this with experiments.
* In Section 4, we show that depth may not be able to alleviate over-squashing. We identify two regimes. In
the first one, the number of layers is comparable to the graph diameter, and we prove that over-squashing is likely to occur among distant nodes (Theorem 4.1). In fact, the distance at which over-squashing happens is strongly dependent on the graph topology - as we validate experimentally. In the second regime, we consider an arbitrary (large) number of layers. We prove that at this stage the MPNN is, generally, dominated by vanishing gradients (Theorem 4.2). This result is of independent interest, since it characterizes analytically conditions of vanishing gradients of the loss for a large class of MPNNs that also include residual connections.
* In Section 5 we show that the _topology_ of the graph has the greatest impact on over-squashing. In fact, we show that over-squashing happens among nodes with high commute time (Theorem 5.5) and we validate this empirically. This provides a unified framework to explain why all spatial and spectral _rewiring_ approaches (discussed in Section 2.3) do mitigate over-squashing.
## 2 Background and related work
### The message-passing paradigm
Let \(\mathsf{G}\) be a graph with nodes \(\mathsf{V}\) and edges \(\mathsf{E}\). The connectivity is encoded in the adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), with \(n\) the number of nodes. We assume that \(\mathsf{G}\) is undirected, connected, and that there are features \(\{\mathbf{h}_{v}^{(0)}\}_{v\in\mathsf{V}}\subset\mathbb{R}^{p}\). Graph Neural Networks (GNNs) are functions of the form \(\mathsf{GNN}_{\theta}:(\mathsf{G},\{\mathbf{h}_{v}^{(0)}\})\mapsto y_{ \mathsf{G}}\), with parameters \(\theta\) estimated via training and whose output \(y_{\mathsf{G}}\) is either a node-level or graph-level prediction. The most studied class of GNNs, known as the Message Passing Neural Network (MPNN) (Gilmer et al., 2017), compute node representations by stacking \(m\) layers of the form:
\[\mathbf{h}_{v}^{(t)}=\mathsf{com}^{(t)}(\mathbf{h}_{v}^{(t-1)},\mathsf{agg}^{ (t)}(\{\mathbf{h}_{u}^{(t-1)}:(v,u)\in\mathsf{E}\})),\]
for \(t=1,\ldots,m\), where \(\mathsf{agg}^{(t)}\) is some _aggregation_ function invariant to node permutation, while \(\mathsf{com}^{(t)}\) _combines_ the node's current state with messages from its neighbours. In this work, we usually assume \(\mathsf{agg}\) to be of the form
\[\mathsf{agg}^{(t)}(\{\mathbf{h}_{u}^{(t-1)}:(v,u)\in\mathsf{E}\})=\sum_{u} \mathbf{A}_{vu}\mathbf{h}_{u}^{(t-1)}, \tag{1}\]
where \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is a **Graph Shift Operator (GSO)**, meaning that \(\mathbf{A}_{vu}\neq 0\) if and only if \((v,u)\in\mathsf{E}\). Typically, \(\mathbf{A}\) is a (normalized) adjacency matrix that we also refer to as message-passing matrix. While instances of MPNN differ based on the choices of \(\mathbf{A}\) and \(\mathsf{com}\), they all aggregate messages over the neighbours, such that in a layer, only nodes connected via an edge exchange messages. This presents two advantages: MPNNs (i) can capture graph-induced 'short-range' dependencies well, and (ii) are efficient, since they can leverage the sparsity of the graph. Nonetheless, MPNNs have been shown to suffer from a few drawbacks, including _limited expressive power_ and _over-squashing_. The problem of expressive power stems from the equivalence of MPNNs to the Weisfeiler-Leman graph isomorphism test (Xu et al., 2019; Morris et al., 2019). This framework has been studied extensively (Jegelka, 2022). On the other hand, the phenomenon of over-squashing, which is the main focus of this work, is more elusive and less understood. We review what is currently known about it in the next subsection.
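To make the paradigm concrete, the following minimal numpy sketch (our own illustration, not code from the paper) implements one message-passing layer, using the GCN-style symmetric-normalised adjacency as graph shift operator for the aggregation and a simple linear-plus-ReLU combine; the function names and toy graph are assumptions.

```python
import numpy as np

def sym_norm_adjacency(A):
    """GCN-style graph shift operator D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def mpnn_layer(H, A_hat, W_r, W_a):
    """One layer: h_v <- ReLU(W_r h_v + W_a sum_u A_vu h_u), cf. Eq. (1)."""
    messages = A_hat @ H                                 # agg: GSO-weighted sum over neighbours
    return np.maximum(H @ W_r + messages @ W_a, 0.0)     # com: combine with own state, apply ReLU

# toy run: a 4-cycle with feature dimension p = 8
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
H1 = mpnn_layer(H, sym_norm_adjacency(A), rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```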
### The problem: introducing over-squashing
Since in an MPNN the information is aggregated over the neighbours, for a node \(v\) to be affected by features at distance \(r\), an MPNN needs at least \(r\) layers (Barcelo et al., 2019). It has been observed though that due to the expansion of the receptive field of a node, MPNNs may end up sending a number of messages growing exponentially with the distance \(r\), leading to a potential loss of information known as _over-squashing_ (Alon and Yahav, 2021). Topping et al. (2022) showed that for an MPNN with message-passing matrix \(\mathbf{A}\) as in Eq. (1) and _scalar_ features, given nodes \(v,u\) at distance \(r\), we have \(|\partial h_{v}^{(r)}/\partial h_{u}^{(0)}|\leq c\cdot(\mathbf{A}^{r})_{vu}\), with \(c\) a constant depending on the Lipschitz regularity of the model. If \((\mathbf{A}^{r})_{vu}\) decays exponentially with \(r\), then the feature of \(v\) is insensitive to the information contained at \(u\). Moreover, Topping et al. (2022) showed that over-squashing is related to the existence of edges with _high negative curvature_. Such characterization though only applies to propagation of information up to 2 hops.
### Related work
Multiple solutions to mitigate over-squashing have already been proposed. We classify them below; in Section 5, we provide a unified framework that encompasses all such solutions.
Figure 1: Effect of different rewirings \(\mathcal{R}\) on the graph connectivity. The colouring denotes Commute Time – defined in Section 5 – w.r.t. the star node. From left to right, the graphs shown are: the base, spatially rewired and spectrally rewired. The added edges significantly reduce the Commute Time and hence mitigate over-squashing in light of Theorem 5.5.
We first introduce the following notion:
**Definition 2.1**.: Consider an \(\mathsf{MPNN}\), a graph \(\mathsf{G}\) with adjacency \(\mathbf{A}\), and a map \(\mathcal{R}:\mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}\). We say that \(\mathsf{G}\) has been **rewired** by \(\mathcal{R}\), if the messages are exchanged on \(\mathcal{R}(\mathsf{G})\) instead of \(\mathsf{G}\), with \(\mathcal{R}(\mathsf{G})\) the graph with adjacency \(\mathcal{R}(\mathbf{A})\).
Recent approaches to combat over-squashing share a common idea: replace the graph \(\mathsf{G}\) with a rewired graph \(\mathcal{R}(\mathsf{G})\) enjoying better connectivity (Figure 1). We then distinguish these works based on the choice of the rewiring \(\mathcal{R}\).
Spatial methods. Since \(\mathsf{MPNN}\)s fail to propagate information to distant nodes, a solution consists in replacing \(\mathsf{G}\) with \(\mathcal{R}(\mathsf{G})\) such that \(\operatorname{diam}(\mathcal{R}(\mathsf{G}))\ll\operatorname{diam}(\mathsf{G})\). Typically, this is achieved by either explicitly adding edges (possibly attributed) between distant nodes (Bruel-Gabrielsson et al., 2022; Abboud et al., 2022) or by allowing distant nodes to communicate through higher-order structures (e.g., cellular or simplicial complexes (Bodnar et al., 2021a;b), which requires additional domain knowledge and incurs a computational overhead). _Graph-Transformers_ can be seen as an extreme example of rewiring, where \(\mathcal{R}(\mathsf{G})\) is a _complete graph_ with edges weighted via attention (Kreuzer et al., 2021; Mialon et al., 2021; Ying et al., 2021; Rampasek et al., 2022). While these methods do alleviate over-squashing, since they _bring all pairs of nodes closer_, they come at the expense of making the graph \(\mathcal{R}(\mathsf{G})\) much denser. In turn, this has an impact on computational complexity and introduces the risk of mixing local and non-local interactions.
We include in this group Topping et al. (2022) and Banerjee et al. (2022), where the rewiring is _surgical_ - but requires specific pre-processing - in the sense that \(\mathsf{G}\) is replaced by \(\mathcal{R}(\mathsf{G})\) where edges have only been added to 'mitigate' bottlenecks as identified, for example, by negative curvature (Ollivier, 2007; Di Giovanni et al., 2022).
We finally mention that spatial rewiring, intended as accessing information beyond the 1-hop when updating node features, is common to many existing frameworks (Abu-El-Haija et al., 2019; Klicpera et al., 2019; Chen et al., 2020; Ma et al., 2020; Wang et al., 2020; Nikolentzos et al., 2020). However, this is usually done via powers of the adjacency matrix, which is the main culprit for over-squashing (Topping et al., 2022). Accordingly, although the diffusion operators \(\mathbf{A}^{k}\) allow to aggregate information over non-local hops, they are not suited to mitigate over-squashing.
Spectral methods. The connectedness of a graph \(\mathsf{G}\) can be measured via a quantity known as the _Cheeger constant_, defined as follows (Chung and Graham, 1997):
**Definition 2.2**.: For a graph \(\mathsf{G}\), the Cheeger constant is
\[\mathsf{h}_{\mathsf{Cheeg}}=\min_{\mathsf{U}\subset\mathsf{V}}\frac{|\{(u,v)\in \mathsf{E}:u\in\mathsf{U},v\in\mathsf{V}\setminus\mathsf{U}\}|}{\min(\mathrm{ vol}(\mathsf{U}),\mathrm{vol}(\mathsf{V}\setminus\mathsf{U}))},\]
where \(\mathrm{vol}(\mathsf{U})=\sum_{u\in\mathsf{U}}d_{u}\), with \(d_{u}\) the degree of node \(u\).
The Cheeger constant \(\mathsf{h}_{\mathsf{Cheeg}}\) represents the energy required to disconnect \(\mathsf{G}\) into two communities. A small \(\mathsf{h}_{\mathsf{Cheeg}}\) means that \(\mathsf{G}\) generally has two communities separated by only a few edges - over-squashing is then expected to occur here _if_ information needs to travel from one community to the other. While \(\mathsf{h}_{\mathsf{Cheeg}}\) is generally intractable to compute, thanks to the Cheeger inequality we know that \(\mathsf{h}_{\mathsf{Cheeg}}\sim\lambda_{1}\), where \(\lambda_{1}\) is the smallest positive eigenvalue of the graph Laplacian. Accordingly, a few new approaches have suggested to choose a rewiring that depends on the spectrum of \(\mathsf{G}\) and yields a new graph satisfying \(\mathsf{h}_{\mathsf{Cheeg}}(\mathcal{R}(\mathsf{G}))>\mathsf{h}_{\mathsf{Cheeg}}(\mathsf{G})\). This strategy includes Arnaiz-Rodriguez et al. (2022); Deac et al. (2022); Karhadkar et al. (2022). It is claimed that sending messages over such a graph \(\mathcal{R}(\mathsf{G})\) alleviates over-squashing; however, this has not yet been shown analytically.
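As a quick numerical illustration of the Cheeger inequality \(\mathsf{h}_{\mathsf{Cheeg}}\sim\lambda_{1}\), the sketch below (our own; the toy graph is an assumption) compares the spectral gap of the normalised Laplacian before and after adding a shortcut edge across a bottleneck, which is precisely the effect spectral rewiring aims for.

```python
import numpy as np

def spectral_gap(A):
    """Smallest positive eigenvalue of the normalised Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(len(A)) - A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(L)[1]   # eigenvalue 0 is ~0 for a connected graph

# two triangles joined by a single bridge edge: a clear bottleneck
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
print(spectral_gap(A))        # small gap: bottlenecked graph
A[0, 5] = A[5, 0] = 1.0       # spectral rewiring: add a shortcut across the bridge
print(spectral_gap(A))        # gap increases: better connectivity
```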
The goal of this work. The analysis of Topping et al. (2022), which represents our current theoretical understanding of the over-squashing problem, leaves some important open questions which we address in this work: (i) We study the role of the **width** in mitigating over-squashing; (ii) We analyse what happens when the **depth** exceeds the distance among two nodes of interest; (iii) We prove how over-squashing is related to the graph structure (beyond local curvature-bounds) and its **spectrum**. As a consequence of (iii), we _provide a unified framework to explain how spatial and spectral approaches alleviate over-squashing_. We reference here the concurrent work of Black et al. (2023), who, similarly to us, drew a strong connection between over-squashing and Effective Resistance (see Section 5).
Notations and conventions to improve readability. In the following, to prioritize readability, we often defer sharper and more explicit bounds to the Appendix. From now on, \(p\) always denotes the width (hidden dimension) while \(m\) is the depth (i.e. number of layers). The feature of node \(v\) at layer \(t\) is written as \(\mathbf{h}_{v}^{(t)}\). Finally, we write \([\ell]=\{1,\ldots,\ell\}\) for any integer \(\ell\). All proofs can be found in the Appendix.
## 3 The impact of width
In this Section we assess whether the width of the underlying \(\mathsf{MPNN}\) can mitigate over-squashing and the extent to which this is possible. In order to do that, we extend the sensitivity analysis in Topping et al. (2022) to higher-dimensional node features. We consider a class of \(\mathsf{MPNN}\)s parameterised by neural networks, of the form:
\[\mathbf{h}_{v}^{(t+1)}=\sigma\left(c_{\mathsf{r}}\mathbf{W}_{\mathsf{r}}^{(t)} \mathbf{h}_{v}^{(t)}+c_{\mathsf{a}}\mathbf{W}_{\mathsf{a}}^{(t)}\sum_{u} \boldsymbol{\mathsf{A}}_{vu}\mathbf{h}_{u}^{(t)}\right), \tag{2}\]
where \(\sigma\) is a pointwise nonlinearity, \(\mathbf{W}_{\mathsf{r}}^{(t)},\mathbf{W}_{\mathsf{a}}^{(t)}\in\mathbb{R}^{p\times p}\) are learnable weight matrices and \(\boldsymbol{\mathsf{A}}\) is a graph shift operator. Note that Eq. (2) includes common \(\mathsf{MPNN}\)s such as \(\mathsf{GCN}\) (Kipf and Welling, 2017), \(\mathsf{SAGE}\) (Hamilton et al., 2017), and \(\mathsf{GIN}\) (Xu et al., 2019), where \(\boldsymbol{\mathsf{A}}\) is one of \(\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), \(\mathbf{D}^{-1}\mathbf{A}\) and \(\mathbf{A}\), respectively, with \(\mathbf{D}\) the diagonal degree matrix. In Appendix B, we extend the analysis to a more general class of \(\mathsf{MPNN}\)s (see Theorem B.1), which includes stacking multiple nonlinearities. We also emphasize that the positive scalars \(c_{\mathsf{r}},c_{\mathsf{a}}\) represent the weighted contribution of the residual term and of the aggregation term, respectively. To ease the notations, we introduce a family of message-passing matrices that depend on \(c_{\mathsf{r}},c_{\mathsf{a}}\).
**Definition 3.1**.: For a graph shift operator \(\mathbf{A}\) and constants \(c_{r},c_{a}>0\), we define \(\mathbf{S}_{r,a}:=c_{r}\mathbf{I}+c_{a}\mathbf{A}\in\mathbb{R}^{n\times n}\) to be the message-passing matrix adopted by the \(\mathsf{MPNN}\).
As in Xu et al. (2018) and Topping et al. (2022), we study the propagation of information in the \(\mathsf{MPNN}\) via the Jacobian of node features after \(m\) layers.
**Theorem 3.2** (**Sensitivity bounds)**.: _Consider an \(\mathsf{MPNN}\) as in Eq. (2) for \(m\) layers, with \(c_{\sigma}\) the Lipschitz constant of the nonlinearity \(\sigma\) and \(w\) the maximal entry-value over all weight matrices. For \(v,u\in\mathsf{V}\) and width \(p\), we have_
\[\left|\left|\frac{\partial\mathbf{h}_{v}^{(m)}}{\partial\mathbf{h}_{u}^{(0)}} \right|\right|_{L_{1}}\leq\underbrace{(c_{\sigma}wp)^{m}}_{\mathrm{model}}\, \underbrace{(\mathbf{S}_{\mathsf{r},\mathsf{a}}^{m})_{vu}}_{\mathrm{topology}}, \tag{3}\]
_with \(\mathbf{S}_{r,a}^{m}\) the \(m^{th}\)-power of \(\mathbf{S}_{r,a}\) introduced in Definition 3.1._
Over-squashing occurs if the right hand side of Eq. (3) is too small - this will be related to the distance among \(v\) and \(u\) in Section 4.1. A small derivative of \(\mathbf{h}_{v}^{(m)}\) with respect to \(\mathbf{h}_{u}^{(0)}\) means that after \(m\) layers, _the feature at \(v\) is mostly insensitive to the information initially contained at \(u\)_, and hence that messages have not been propagated effectively. Theorem 3.2 clarifies how the model can impact over-squashing through (i) its Lipschitz regularity \(c_{\sigma},w\) and (ii) its width \(p\). In fact, given a graph \(\mathsf{G}\) such that \((\mathbf{S}_{r,a}^{m})_{vu}\) decays exponentially with \(m\), the \(\mathsf{MPNN}\) can compensate by increasing the width \(p\) and the magnitude of \(w\) and \(c_{\sigma}\). This confirms analytically the discussion in Alon and Yahav (2021): **a larger hidden dimension \(p\) does mitigate over-squashing**. However, this is not an optimal solution since increasing the contribution of the model (i.e. the term \(c_{\sigma}wp\)) may lead to over-fitting and poorer generalization (Bartlett et al., 2017). Taking larger values of \(c_{\sigma},w,p\) affects the model _globally_ and does not target the sensitivity of specific node pairs induced by the topology via \(\mathbf{S}_{r,a}\).
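For intuition, the right-hand side of Eq. (3) is cheap to evaluate on small graphs. The following sketch (our own; the path graph and parameter values are arbitrary choices) computes the bound for nodes at the two ends of a path, showing how increasing \(p\) inflates the model term while the topology term \((\mathbf{S}_{\mathsf{r},\mathsf{a}}^{m})_{vu}\) stays fixed.

```python
import numpy as np

def sensitivity_bound(A_hat, v, u, m, p, w, c_sigma=1.0, c_r=1.0, c_a=1.0):
    """Right-hand side of Eq. (3): (c_sigma * w * p)^m * (S_{r,a}^m)_{vu},
    with S_{r,a} = c_r * I + c_a * A_hat as in Definition 3.1."""
    S = c_r * np.eye(len(A_hat)) + c_a * A_hat
    return (c_sigma * w * p) ** m * np.linalg.matrix_power(S, m)[v, u]

# path graph 0-1-...-9 with GCN-normalised adjacency
n = 10
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
d_is = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_is[:, None] * d_is[None, :]
for p in [1, 4, 16]:   # a larger width p raises the bound, mitigating over-squashing
    print(p, sensitivity_bound(A_hat, 0, 9, m=9, p=p, w=0.5))
```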
Validating the theoretical results. We validate empirically the message from Theorem 3.2: if the task presents long-range dependencies, increasing the hidden dimension mitigates over-squashing and therefore has a positive impact on the performance. We consider the following 'graph transfer' task, building upon Bodnar et al. (2021): given a graph, consider source and target nodes, placed at distance \(r\) from each other. We assign a one-hot encoded label to the target and a constant unitary feature vector to all other nodes. The goal is to assign to the source node the feature vector of the target. Partly due to over-squashing, performance is expected to degrade as \(r\) increases.
To validate that this holds irrespective of the graph structure, we evaluate across three graph topologies, called \(\mathsf{CrossedRing}\), \(\mathsf{Ring}\) and \(\mathsf{CliquePath}\) - see Appendix E for further details. While the topology is also expected to affect the performance (as confirmed in Section 4), given a fixed topology, we expect the model to benefit from an increase of hidden dimension.
To verify this behaviour, we evaluate \(\mathsf{GCN}\)(Kipf and Welling, 2017) on the three graph transfer tasks increasing the hidden dimension, but keeping the number of layers equal to the distance between source and target, as shown in Figure 2. The results verify the intuition from the theorem that a higher hidden dimension helps the \(\mathsf{GCN}\) model solve the task to larger distances across the three graph-topologies.
**Message of the Section:**_The Lipschitz regularity, weights, and width of the underlying \(\mathsf{MPNN}\) can help mitigate the effect of over-squashing. However, this is a remedy that comes at the expense of generalization and does not address the real culprit behind over-squashing: the graph-topology_.
Figure 2: Performance of \(\mathsf{GCN}\) on the \(\mathsf{CrossedRing}\), \(\mathsf{Ring}\), and \(\mathsf{CliquePath}\) tasks obtained by varying the hidden dimension. Increasing the hidden dimension helps mitigate the over-squashing effect, in accordance with Theorem 3.2.
## 4 The impact of depth
Consider a graph \(\mathsf{G}\) and a task with 'long-range' dependencies, meaning that there exists (at least) a node \(v\) whose embedding has to account for information contained at some node \(u\) at distance \(r\gg 1\). One natural attempt at resolving over-squashing amounts to increasing the number of layers \(m\) to compensate for the distance. We prove that the depth of the \(\mathsf{MPNN}\) will, generally, not help with over-squashing. We show that: (i) When the depth \(m\) is comparable to the distance, over-squashing is bound to occur among distant nodes - in fact, the distance at which over-squashing happens is strongly dependent on the underlying topology; (ii) If we take a large number of layers to cover the long-range interactions, we rigorously prove under which _exact_ conditions \(\mathsf{MPNNs}\) incur the vanishing gradients problem.
### The shallow-diameter regime: over-squashing occurs among distant nodes
Consider the scenario above, with two nodes \(v,u\), whose interaction is important for the task, at distance \(r\). We first focus on the regime \(m\sim r\). We refer to this as the _shallow-diameter_ regime, since the number of layers \(m\) is comparable to the diameter of the graph.
From now on, we set \(\boldsymbol{\mathsf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where we recall that \(\mathbf{A}\) is the adjacency matrix and \(\mathbf{D}\) is the degree matrix. This is not restrictive, but allows us to derive more explicit bounds and, later, bring into the equation the spectrum of the graph. We note that results can be extended easily to \(\mathbf{D}^{-1}\mathbf{A}\), given that this matrix is similar to \(\boldsymbol{\mathsf{A}}\), and, in expectation, to \(\mathbf{A}\) by normalizing the Jacobian as in Xu et al. (2019) and Section A in the Appendix of Topping et al. (2022).
**Theorem 4.1** (**Over-squashing among distant nodes)**.: _Given an \(\mathsf{MPNN}\) as in Eq. (2), with \(c_{\mathsf{a}}\leq 1\), let \(v,u\in\mathsf{V}\) be at distance \(r\). Let \(c_{\sigma}\) be the Lipschitz constant of \(\sigma\), \(w\) the maximal entry-value over all weight matrices, \(d_{\min}\) the minimal degree of \(\mathsf{G}\), and \(\gamma_{\ell}(v,u)\) the number of walks from \(v\) to \(u\) of maximal length \(\ell\). For any \(0\leq k<r\), there exists \(C_{k}>0\)_**independent** _of \(r\) and of the graph, such that_
\[\left|\left|\frac{\partial\mathbf{h}_{v}^{(r+k)}}{\partial\mathbf{h}_{u}^{(0 )}}\right|\right|_{L_{1}}\leq C_{k}\gamma_{r+k}(v,u)\left(\frac{2c_{\sigma}wp} {d_{\min}}\right)^{r}. \tag{4}\]
To understand the bound above, fix \(k<r\) and assume that nodes \(v,u\) are 'badly' connected, meaning that the number of walks \(\gamma_{r+k}(v,u)\) of length at most \(r+k\), is small. If \(2\,c_{\sigma}wp<d_{\min}\), then the bound on the Jacobian in Eq. (4) _decays exponentially with the distance \(r\)_. Note that the bound above considers \(d_{\min}\) and \(\gamma_{r+k}\) as a worst case scenario. If one has a better understanding of the topology of the graph, sharper bounds can be derived by estimating \((\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu}\). Theorem 4.1 implies that, when the depth \(m\) is comparable to the diameter of \(\mathsf{G}\), _over-squashing becomes an issue if the task depends on the interaction of nodes \(v,u\) at 'large' distance \(r\)_. In fact, Theorem 4.1 shows that the distance at which the Jacobian sensitivity falls below a given threshold, depends on both the model, via \(c_{\sigma},w,p\), and on the graph, through \(d_{\min}\) and \(\gamma_{r+k}(v,u)\). We finally observe that Theorem 4.1 generalizes the analysis in Topping et al. (2022) in multiple ways: (i) it holds for any width \(p>1\); (ii) it includes cases where \(m>r\); (iii) it provides explicit estimates in terms of number of walks and degree information.
**Remark**.: What if \(2c_{\sigma}wp>d_{\min}\)? Taking larger weights and hidden dimension increases the sensitivity of node features. However, this occurs in the same way _everywhere_ in the graph. Accordingly, nodes at shorter distances will, on average, still have sensitivity exponentially larger than nodes at large distance. This is validated in our synthetic experiments below, where we do not have constraints on the weights.
Validating the theoretical results.From Theorem 4.1, we derive a strong indication of the difficulty of a task by calculating an upper bound on the Jacobian. We consider the same graph transfer tasks introduced above, namely \(\mathsf{CrossedRing}\), \(\mathsf{Ring}\), and \(\mathsf{CliquePath}\). For these special cases, we can derive a refined version of the r.h.s in Eq. (4): in particular, \(k=0\) (i.e. the depth coincides with the distance among source and target) and the term \(\gamma_{r}(v,u)(d_{\min})^{-r}\) can be replaced by the exact quantity \((\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu}\). Fixing a distance \(r\) between source \(u\) and target \(v\) then, if we consider for example the \(\mathsf{GCN}\)-case, we have \(\mathbf{S}_{\mathsf{r},\mathsf{a}}=\textbf{A}\) so that the term \((\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu}\) can be computed explicitly:
\[(\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu} =(3/2)^{-(r-1)} \text{for $\mathsf{CrossedRing}$}\] \[(\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu} =2^{-(r-1)} \text{for $\mathsf{Ring}$}\] \[(\mathbf{S}_{\mathsf{r},\mathsf{a}}^{r})_{vu} =2^{-(r-2)}/(r\sqrt{r-2}) \text{for $\mathsf{CliquePath}$}.\]
Given an \(\mathsf{MPNN}\), terms like \(c_{\sigma},w,p\) entering Theorem 4.1 are independent of the graph-topology and hence can be assumed to behave, roughly, the same across different graphs. As a consequence, we can expect over-squashing to be more problematic for \(\mathsf{CliquePath}\), followed by \(\mathsf{Ring}\), and less prevalent comparatively in \(\mathsf{CrossedRing}\). Figure 3 shows the behaviour of \(\mathsf{GIN}\) (Xu et al., 2019), \(\mathsf{SAGE}\) (Hamilton et al., 2017), \(\mathsf{GCN}\) (Kipf & Welling, 2017), and GAT (Velickovic et al., 2018) on the aforementioned tasks. We verify the conjectured difficulty. \(\mathsf{CliquePath}\) is the consistently hardest task, followed by \(\mathsf{Ring}\), and \(\mathsf{CrossedRing}\). Furthermore, the decay of the performance to random guessing for the _same_ architecture across different graph topologies highlights that this drop cannot be simply labelled as 'vanishing gradients' since for certain topologies the same model can, in fact, achieve perfect accuracy. This validates that the underlying topology has a strong impact on the distance at which over-squashing is expected to happen. Moreover, we confirm
that in the regime where the depth \(m\) is comparable to the distance \(r\), over-squashing will occur if \(r\) is large enough.
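The closed-form values above are easy to check numerically. The sketch below (our own) verifies the \(\mathsf{Ring}\) value via matrix powers, under the assumption, consistent with the \(2^{-(r-1)}\) formula, that source and target sit at antipodal positions on a cycle of \(2r\) nodes.

```python
import numpy as np

def ring_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return A

# assuming source (node 0) and target (node r) are antipodal on a 2r-cycle
for r in [3, 5, 8]:
    A_hat = ring_adjacency(2 * r) / 2.0   # all degrees are 2, so D^{-1/2} A D^{-1/2} = A / 2
    print(r, np.linalg.matrix_power(A_hat, r)[0, r], 2.0 ** -(r - 1))   # columns agree
```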
### The deep regime: vanishing gradients dominate
We now focus on the regime where the number of layers \(m\gg r\) is large. We show that in this case, vanishing gradients can occur and make the entire model insensitive. Given a weight \(\theta^{(k)}\) entering a layer \(k\), one can write the gradient of the loss after \(m\) layers as (Pascanu et al., 2013)
\[\frac{\partial\mathcal{L}}{\partial\theta^{(k)}}=\sum_{v,u\in\mathsf{V}}\frac{\partial\mathcal{L}}{\partial\mathbf{h}_{v}^{(m)}}\underbrace{\frac{\partial\mathbf{h}_{v}^{(m)}}{\partial\mathbf{h}_{u}^{(k)}}}_{\text{sensitivity}}\frac{\partial\mathbf{h}_{u}^{(k)}}{\partial\theta^{(k)}} \tag{5}\]
We provide **exact conditions** for MPNNs to incur the vanishing gradient problem, intended as the gradients of the loss decaying exponentially with the number of layers \(m\).
**Theorem 4.2** (Vanishing gradients).: _Consider an_ MPNN _as in Eq. (2) for \(m\) layers with a quadratic loss \(\mathcal{L}\). Assume that (i) \(\sigma\) has Lipschitz constant \(c_{\sigma}\) and \(\sigma(0)=0\), and (ii) weight matrices have spectral norm bounded by \(\mu>0\). Given any weight \(\theta\) entering a layer \(k\), there exists a constant \(C>0\) independent of \(m\), such that_
\[\left|\frac{\partial\mathcal{L}}{\partial\theta}\right|\leq C\left(c_{\sigma} \mu(c_{\mathsf{r}}+c_{\mathsf{a}})\right)^{m-k}\left(1+\left(c_{\sigma}\mu(c_{ \mathsf{r}}+c_{\mathsf{a}})\right)^{m}\right). \tag{6}\]
_In particular, if \(c_{\sigma}\mu(c_{\mathsf{r}}+c_{\mathsf{a}})<1\), then the gradients of the loss decay to zero exponentially fast with \(m\)._
The problem of vanishing gradients for graph convolutional networks has been studied from an empirical perspective (Li et al., 2019, 2021). Theorem 4.2 provides sufficient conditions for the vanishing of gradients to occur in a large class of MPNNs that also include (a form of) residual connections through the contribution of \(c_{\mathsf{r}}\) in Eq. (2). This extends a behaviour studied for Recurrent Neural Networks (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997; Pascanu et al., 2013; Rusch and Mishra, 2021) to the MPNN class. We also mention that some discussion on vanishing gradients for MPNNs can be found in Ruiz et al. (2020) and Rusch et al. (2022). A few final comments are in order. (i) The bound in Theorem 4.2 seems to 'hide' the contribution of the graph. This is, in fact, because the spectral norm of the graph operator \(\mathbf{S}_{\mathsf{r},\mathsf{a}}\) is \(c_{\mathsf{r}}+c_{\mathsf{a}}\) - we reserve the investigation of more general graph shift operators (Dasoulas et al., 2021) to future work. (ii) Theorem 4.1 shows that if the distance \(r\) is large enough and we take the number of layers \(m\) s.t. \(m\sim r\), over-squashing arises among nodes at distance \(r\). Taking the number of layers large enough, though, may incur the vanishing gradient problem (Theorem 4.2). In principle, there might be an intermediate regime where \(m\) is larger than \(r\), but _not_ too large, in which the depth could help with over-squashing before it leads to vanishing gradients. Given a graph \(\mathsf{G}\), and bounds on the Lipschitz regularity and width, we conjecture though that there exists \(\tilde{r}\), depending on the topology of \(\mathsf{G}\), such that if the task has interactions at distance \(r>\tilde{r}\), no number of layers can allow the MPNN class to solve it. This is left for future work.
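The decay factor in Theorem 4.2 is straightforward to estimate for a concrete set of weights; in the sketch below (our own; the 0.02 weight scale is an arbitrary choice that makes the condition hold), a factor below one signals vanishing gradients.

```python
import numpy as np

def vanishing_gradient_factor(weights, c_sigma=1.0, c_r=1.0, c_a=1.0):
    """Decay factor from Theorem 4.2: gradients shrink roughly like
    (c_sigma * mu * (c_r + c_a))^(m - k), with mu the largest spectral norm."""
    mu = max(np.linalg.norm(W, ord=2) for W in weights)   # ord=2: largest singular value
    return c_sigma * mu * (c_r + c_a)

rng = np.random.default_rng(0)
Ws = [rng.normal(scale=0.02, size=(32, 32)) for _ in range(10)]
q = vanishing_gradient_factor(Ws)
print(q, "vanishing" if q < 1 else "no guarantee")
```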
**Message of the Section:**_Increasing the depth \(m\) will, in general, not fix over-squashing. As we increase \(m\),_ MPNNs _transition from over-squashing (Theorem 4.1) to vanishing gradients (Theorem 4.2)._
## 5 The impact of topology
We finally discuss the impact of the graph topology, and in particular of the graph spectrum, on over-squashing. This allows us to draw a unified framework that shows why existing approaches manage to alleviate over-squashing by either spatial or spectral rewiring (Section 2.3).
### On over-squashing and access time
Throughout the section we relate over-squashing to well-known properties of random walks on graphs. To this aim, we first review basic concepts about random walks.
Figure 3: Performance of GIN, SAGE, GCN, and GAT on the CliquePath, Ring, and CrossedRing tasks. In the case where depth and distance are comparable, over-squashing highly depends on the topology of the graph as we increase the distance.

Access and commute time. A Random Walk (RW) on \(\mathsf{G}\) is a Markov chain which, at each step, moves from a node \(v\) to one of its neighbours with probability \(1/d_{v}\). Several properties about RWs have been studied. We are particularly interested in the notion of _access time_ \(\mathsf{t}(v,u)\) and of _commute time_ \(\tau(v,u)\) (see Figure 1). The access time \(\mathsf{t}(v,u)\) (also known as _hitting time_) is the expected number of steps before node \(u\) is visited for a RW starting from node \(v\). The commute time, instead, represents the expected number of steps in a RW starting at \(v\) to reach node \(u\) and _come back_. A high access (commute) time means that nodes \(v,u\) generally struggle to visit each other in a RW - this can happen if nodes are far away, but it is in fact more general and strongly dependent on the topology.
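Access time is easy to estimate by direct simulation of the random walk; the following Monte-Carlo sketch (our own; it assumes a connected graph, otherwise the walk may never terminate) does exactly that.

```python
import numpy as np

def access_time_mc(A, v, u, n_walks=2000, seed=0):
    """Monte-Carlo estimate of the access (hitting) time t(v, u): the expected
    number of RW steps to first reach u, starting from v."""
    rng = np.random.default_rng(seed)
    neighbours = [np.flatnonzero(A[i]) for i in range(len(A))]
    steps = []
    for _ in range(n_walks):
        node, k = v, 0
        while node != u:
            node = rng.choice(neighbours[node])   # uniform step: probability 1/d_node
            k += 1
        steps.append(k)
    return np.mean(steps)

# two triangles joined by a single bridge edge: high access time across the bottleneck
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(access_time_mc(A, 0, 5))
```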
Some connections between over-squashing and the topology have already been derived (Theorem 4.1), but up to this point 'topology' has entered the picture through 'distances' only. In this section, we further link over-squashing to other quantities related to the topology of the graph, such as access time, commute time and the Cheeger constant. We ultimately provide a unified framework to understand how existing approaches manage to mitigate over-squashing via graph-rewiring.
Integrating information across different layers. We consider a family of \(\mathsf{MPNN}\)s of the form
\[\mathbf{h}_{v}^{(t)}=\mathsf{ReLU}\left(\mathbf{W}^{(t)}\left(c_{\mathsf{r}} \mathbf{h}_{v}^{(t-1)}+c_{\mathsf{a}}(\boldsymbol{\mathsf{A}}\mathbf{h}^{(t- 1)})_{v}\right)\right). \tag{7}\]
Similarly to Kawaguchi (2016); Xu et al. (2018), we require the following:
**Assumption 5.1**.: All paths in the computation graph of the model are activated with the same probability of success \(\rho\).
Take two nodes \(v\neq u\) at distance \(r\gg 1\) and imagine we are interested in sending information _from_ \(u\) _to_ \(v\). Given a layer \(k<m\) of the \(\mathsf{MPNN}\), by Theorem 4.1 we expect that \(\mathbf{h}_{v}^{(m)}\) is much more sensitive to the information contained _at the same_ node \(v\) at an earlier layer \(k\), i.e. \(\mathbf{h}_{v}^{(k)}\), rather than to the information contained at a distant node \(u\), i.e. \(\mathbf{h}_{u}^{(k)}\). Accordingly, we introduce the following quantity:
\[\mathbf{J}_{k}^{(m)}(v,u):=\frac{1}{d_{v}}\frac{\partial\mathbf{h}_{v}^{(m)}} {\partial\mathbf{h}_{v}^{(k)}}-\frac{1}{\sqrt{d_{v}d_{u}}}\frac{\partial \mathbf{h}_{v}^{(m)}}{\partial\mathbf{h}_{u}^{(k)}}.\]
We note that the normalization by degree stems from our choice \(\boldsymbol{\mathsf{A}}=\mathbf{D}^{-1/2}\boldsymbol{\mathsf{A}}\mathbf{D}^{ -1/2}\). We provide an intuition for this term. Say that node \(v\) at layer \(m\) of the \(\mathsf{MPNN}\) is mostly insensitive to the information sent from \(u\) at layer \(k\). Then, on average, we expect \(||\partial\mathbf{h}_{v}^{(m)}/\partial\mathbf{h}_{u}^{(k)}||\ll||\partial \mathbf{h}_{v}^{(m)}/\partial\mathbf{h}_{v}^{(k)}||\). In the opposite case instead, we expect, on average, that \(||\partial\mathbf{h}_{v}^{(m)}/\partial\mathbf{h}_{u}^{(k)}||\sim||\partial \mathbf{h}_{v}^{(m)}/\partial\mathbf{h}_{v}^{(k)}||\). Therefore \(||\mathbf{J}_{k}^{(m)}(v,u)||\) will be _larger_ when \(v\) is (roughly) independent of the information contained at \(u\) at layer \(k\). We extend the same argument by accounting for messages sent at each layer \(k\leq m\).
**Definition 5.2**.: The Jacobian obstruction of node \(v\) with respect to node \(u\) after \(m\) layers is \(\mathsf{O}^{(m)}(v,u)=\sum_{k=0}^{m}||\mathbf{J}_{k}^{(m)}(v,u)||\).
As motivated above, a larger \(\mathsf{O}^{(m)}(v,u)\) means that, after \(m\) layers, the representation of node \(v\) is more likely to be insensitive to information contained at \(u\) and conversely, a small \(\mathsf{O}^{(m)}(v,u)\) means that node \(v\) is, on average, able to receive information from \(u\). Differently from the Jacobian bounds of the earlier sections, here we consider the contribution coming from all layers \(k\leq m\) (note the sum over layers \(k\) in Definition 5.2).
**Theorem 5.3** (**Over-squashing and access-time)**.: _Consider an \(\mathsf{MPNN}\) as in Eq. (7) and let Assumption 5.1 hold. If \(\nu\) is the smallest singular value across all weight matrices and \(c_{\mathsf{r}},c_{\mathsf{a}}\) are such that \(\nu(c_{\mathsf{r}}+c_{\mathsf{a}})=1\), then, in expectation, we have_
\[\mathsf{O}^{(m)}(v,u)\geq\frac{\rho}{\nu c_{\mathsf{a}}}\frac{\mathsf{t}(u,v)} {2|\mathsf{E}|}+o(m),\]
_with \(o(m)\to 0\) exponentially fast with \(m\)._
We note that an exact expansion of the term \(o(m)\) is reported in the Appendix. We also observe that more general bounds are possible if \(\nu(c_{\mathsf{r}}+c_{\mathsf{a}})<1\) - however, they will progressively become less informative in the limit \(\nu(c_{\mathsf{r}}+c_{\mathsf{a}})\to 0\). Theorem 5.3 shows that the obstruction is a function of the access time \(\mathsf{t}(u,v)\); **high access time, on average, translates into high obstruction for node \(v\) to receive information from node \(u\) inside the \(\mathsf{MPNN}\)**. This resonates with the intuition that access time is a measure of how easily a 'diffusion' process starting at \(u\) reaches \(v\). We emphasize that the obstruction provided by the access time cannot be fixed by increasing the number of layers and in fact this is independent of the number of layers, further corroborating the analysis in Section 4. Next, we relate over-squashing to commute time, and hence, to effective resistance.
### On over-squashing and commute time
We now restrict our attention to a slightly more special form of over-squashing, where we are interested in nodes \(v,u\) exchanging information both ways - differently from before where we looked at node \(v\) receiving information from node \(u\). Following the same intuition described previously, we introduce the symmetric quantity:
\[\tilde{\mathbf{J}}_{k}^{(m)}(v,u) :=\left(\frac{1}{d_{v}}\frac{\partial\mathbf{h}_{v}^{(m)}}{ \partial\mathbf{h}_{v}^{(k)}}-\frac{1}{\sqrt{d_{v}d_{u}}}\frac{\partial \mathbf{h}_{v}^{(m)}}{\partial\mathbf{h}_{u}^{(k)}}\right)\] \[+\left(\frac{1}{d_{u}}\frac{\partial\mathbf{h}_{u}^{(m)}}{ \partial\mathbf{h}_{u}^{(k)}}-\frac{1}{\sqrt{d_{v}d_{u}}}\frac{\partial \mathbf{h}_{u}^{(m)}}{\partial\mathbf{h}_{v}^{(k)}}\right).\]
Once again, we expect that \(||\tilde{\mathbf{J}}_{k}^{(m)}(v,u)||\) is larger if nodes \(v,u\) are failing to communicate in the \(\mathsf{MPNN}\), and conversely to be smaller whenever the communication is sufficiently robust. Similarly, we integrate the information collected at each layer \(k\leq m\).
**Definition 5.4**.: The symmetric Jacobian obstruction of nodes \(v,u\) after \(m\) layers is \(\tilde{\mathsf{O}}^{(m)}(v,u)=\sum_{k=0}^{m}\lvert\lvert\tilde{\mathbf{J}}_{k}^{ (m)}(v,u)\rvert\rvert\).
The intuition of comparing the sensitivity of a node \(v\) with a different node \(u\) and to itself, and then swapping the roles of \(v\) and \(u\), resembles the concept of commute time \(\tau(v,u)\). In fact, this is not a coincidence:
**Theorem 5.5** (**Over-squashing and commute-time**).: _Consider an \(\mathsf{MPNN}\) as in Eq. (7) with \(\mu\) the maximal spectral norm of the weight matrices and \(\nu\) the minimal singular value. Let Assumption 5.1 hold. If \(\mu(c_{\mathsf{r}}+c_{\mathsf{a}})\leq 1\), then there exists \(\epsilon_{\mathsf{G}}\), independent of nodes \(v,u\), such that in expectation, we have_
\[\epsilon_{\mathsf{G}}(1-o(m))\frac{\rho}{\nu c_{\mathsf{a}}}\frac{\tau(v,u)}{ 2\lvert\mathsf{E}\rvert}\leq\tilde{\mathsf{O}}^{(m)}(v,u)\leq\frac{\rho}{\mu c _{\mathsf{a}}}\frac{\tau(v,u)}{2\lvert\mathsf{E}\rvert},\]
_with \(o(m)\to 0\) exponentially fast with \(m\) increasing._
We note that an explicit expansion of the \(o(m)\)-term is reported in the proof of the Theorem in the Appendix. By the previous discussion, a **smaller** \(\tilde{\mathsf{O}}^{(m)}(v,u)\) means \(v\) is more sensitive to \(u\) in the \(\mathsf{MPNN}\) (and vice versa when \(\tilde{\mathsf{O}}^{(m)}(v,u)\) is large). Therefore, Theorem 5.5 implies that nodes at small commute time will exchange information better in an \(\mathsf{MPNN}\) and conversely for those at high commute time. This has some **important consequences**:
1. When the task only depends on local interactions, the property of \(\mathsf{MPNN}\) of reducing the sensitivity to messages from nodes with high commute time _can_ be beneficial since it decreases harmful redundancy.
2. Over-squashing is an issue when the task depends on the interaction of nodes with high commute time.
3. The commute time represents an obstruction to the sensitivity of an \(\mathsf{MPNN}\) which is _independent of the number of layers_, since the bounds in Theorem 5.5 are independent of \(m\) (up to errors decaying exponentially fast with \(m\)).
We note that the very same comments hold in the case of access time as well if, for example, the task depends on node \(v\) receiving information from node \(u\) but not on \(u\) receiving information from \(v\).
### A unified framework
Why spectral-rewiring works. First, we justify why the spectral approaches discussed in Section 2.3 mitigate over-squashing. This comes as a consequence of Lovasz (1993) and Theorem 5.5:
**Corollary 5.6**.: _Under the assumptions of Theorem 5.5, for any \(v,u\in\mathsf{V}\), we have_
\[\tilde{\mathsf{O}}^{(m)}(v,u)\leq\frac{\rho}{\mu c_{\mathsf{a}}}\frac{4}{ \mathsf{h}_{\mathsf{Cheeg}}^{2}}.\]
Corollary 5.6 essentially tells us that the obstruction among _all_ pairs of nodes decreases (so better information flow) if the \(\mathsf{MPNN}\) operates on a graph \(\mathsf{G}\) with larger Cheeger constant. This rigorously justifies why recent works like Arnaiz-Rodriguez et al. (2022); Deac et al. (2022); Karhadkar et al. (2022) manage to alleviate over-squashing by propagating information on a rewired graph \(\mathcal{R}(\mathsf{G})\) with larger Cheeger constant \(\mathsf{h}_{\mathsf{Cheeg}}\). Our result also highlights why bounded-degree expanders are particularly suited - as leveraged in Deac et al. (2022) - given that their commute time is only \(\mathcal{O}(|\mathsf{E}|)\) (Chandra et al., 1996), making the bound in Theorem 5.5 scale as \(\mathcal{O}(1)\) w.r.t. the size of the graph. In fact, the concurrent work of Black et al. (2023) leverages directly the effective resistance of the graph \(\mathsf{Res}(v,u)=\tau(v,u)/2|\mathsf{E}|\) to guide a rewiring that improves the graph connectivity and hence mitigates over-squashing.
Why spatial-rewiring works. Chandra et al. (1996) proved that the commute time satisfies \(\tau(v,u)=2|\mathsf{E}|\mathsf{Res}(v,u)\), with \(\mathsf{Res}(v,u)\) the **effective resistance** of nodes \(v,u\). \(\mathsf{Res}(v,u)\) measures the voltage difference between nodes \(v,u\) if a unit current flows through the graph from \(v\) to \(u\) and we take each edge to represent a unit resistance (Thomassen, 1990; Dorfler et al., 2018), and has also been used in Velingker et al. (2022) as a form of structural encoding. Therefore, we emphasize that Theorem 5.5 can be _equivalently rephrased as saying that nodes at high effective resistance struggle to exchange information in an \(\mathsf{MPNN}\)_ and vice versa for nodes at low effective resistance. We recall that a result known as Rayleigh's monotonicity principle (Thomassen, 1990) asserts that the _total_ effective resistance \(\mathsf{Res}_{\mathsf{G}}=\sum_{v,u}\mathsf{Res}(v,u)\) decreases when adding new edges - which offers a new interpretation as to why spatial methods help combat over-squashing.
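Both the identity \(\tau(v,u)=2|\mathsf{E}|\mathsf{Res}(v,u)\) and Rayleigh's monotonicity principle can be checked numerically through the Laplacian pseudoinverse; the sketch below (our own) computes \(\mathsf{Res}(v,u)\) on a path graph and shows it dropping once a rewiring edge closes the ring.

```python
import numpy as np

def effective_resistance(A, v, u):
    """Res(v,u) = (e_v - e_u)^T L^+ (e_v - e_u), with L the combinatorial Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    L_pinv = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse
    e = np.zeros(len(A)); e[v], e[u] = 1.0, -1.0
    return e @ L_pinv @ e

# commute time follows as tau(v,u) = 2|E| * Res(v,u), with 2|E| = A.sum()
A = np.zeros((6, 6))
for i in range(5):                      # path graph 0-1-2-3-4-5
    A[i, i + 1] = A[i + 1, i] = 1.0
print(effective_resistance(A, 0, 5))    # 5.0: five unit resistors in series
A[0, 5] = A[5, 0] = 1.0                 # spatial rewiring: close the ring
print(effective_resistance(A, 0, 5))    # drops to 5/6: two parallel paths
```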
What about curvature? Our analysis also sheds further light on the relation between over-squashing and curvature derived in Topping et al. (2022). If the effective resistance is bounded from above, this leads to lower bounds for the resistance curvature introduced in Devriendt & Lambiotte (2022) and hence, under some assumptions, for the Ollivier curvature too (Ollivier, 2007, 2009). Our analysis then recovers why preventing the curvature from being 'too' negative has benefits in terms of reducing over-squashing.
About the Graph Transfer task. We finally note that the results in Figure 3 also validate the theoretical findings of Theorem 5.5. If \(v,u\) represent target and source nodes
on the different graph-transfer topologies, then \(\mathsf{Res}(v,u)\) is highest for \(\mathsf{CliquePath}\) and lowest for the \(\mathsf{CrossedRing}\). Once again, the distance is only partial information. Effective resistance provides a better picture of the impact of topology on over-squashing and hence on the accuracy on the task; in Appendix F we further validate this via a synthetic experiment where we study how the propagation of a signal in an \(\mathsf{MPNN}\) is affected by the effective resistance of \(\mathsf{G}\).
**Message of the Section:**\(\mathsf{MPNN}\)_s struggle to send information among nodes with high commute (access) time (equivalently, effective resistance). This connection between over-squashing and commute (access) time provides a unified framework for explaining why spatial and spectral-rewiring approaches manage to alleviate over-squashing._
## 6 Conclusion and discussion
**What did we do?** In this work, we investigated the role played by width, depth, and topology in the over-squashing phenomenon. We have proved that, while width can partly mitigate this problem, depth is, instead, generally bound to fail since over-squashing spills into vanishing gradients for a large number of layers. In fact, we have shown that the graph-topology plays the biggest role, with the commute (access) time providing a strong indicator for whether over-squashing is likely to happen independently of the number of layers. As a consequence of our analysis, we can draw a unified framework where we can rigorously justify why all recently proposed rewiring methods do alleviate over-squashing.
Limitations. Strictly speaking, the analysis in this work applies to \(\mathsf{MPNN}\)s that weigh each edge contribution the same, up to a degree normalization. In the opposite case, which, for example, includes GAT (Velickovic et al., 2018) and \(\mathsf{GatedGCN}\) (Bresson & Laurent, 2017), over-squashing can be further mitigated by pruning the graph, hence alleviating the dispersion of information. However, the attention (gating) mechanism can fail if it is not able to identify which branches to ignore and can even amplify over-squashing by further reducing 'useful' pathways. In fact, GAT still fails on the Graph Transfer task of Section 4, although it seems to exhibit slightly more robustness. Extending the Jacobian bounds to this case is not hard, but will lead to less transparent formulas: a thorough analysis of this class is left for future work. We also note that determining when the sensitivity is 'too' small is generally also a function of the resolution of the readout, which we have not considered. Finally, Theorem 5.5 holds in expectation over the nonlinearity and, generally, Definition 5.2 encodes an average type of behaviour: a more refined (and exact) analysis is left for future work.
**Where to go from here.** We believe that the relation between over-squashing and vanishing gradient deserves further analysis. In particular, it seems that there is a phase transition that \(\mathsf{MPNN}\)s undergo from over-squashing of information between distant nodes, to vanishing of gradients at the level of the loss. In fact, this connection suggests that traditional methods that have been used in RNNs and GNNs to mitigate vanishing gradients, may also be beneficial for over-squashing. On a different note, this work has not touched on the important problem of over-smoothing; we believe that the theoretical connections we have derived, based on the relation between over-squashing, commute time, and Cheeger constant, suggest a much deeper interplay between these two phenomena. Finally, while this analysis confirms that both spatial and spectral-rewiring methods provably mitigate over-squashing, it does not tell us which method is preferable, when, and why. We hope that the theoretical investigation of over-squashing we have provided here, will also help tackle this important methodological question.
## Acknowledgements
We are grateful to Adrian Arnaiz, Johannes Lutzeyer, and Ismail Ceylan for providing insightful and detailed feedback and suggestions on an earlier version of the manuscript. We are also particularly thankful to Jacob Bamberger for helping us fix a technical assumption in one of our arguments. Finally, we are grateful to the anonymous reviewers for their input. This research was supported in part by ERC Consolidator grant No. 274228 (LEMAN). |
2310.15786 | Amortised Inference in Neural Networks for Small-Scale Probabilistic
Meta-Learning | The global inducing point variational approximation for BNNs is based on
using a set of inducing inputs to construct a series of conditional
distributions that accurately approximate the conditionals of the true
posterior distribution. Our key insight is that these inducing inputs can be
replaced by the actual data, such that the variational distribution consists of
a set of approximate likelihoods for each datapoint. This structure lends
itself to amortised inference, in which the parameters of each approximate
likelihood are obtained by passing each datapoint through a meta-model known as
the inference network. By training this inference network across related
datasets, we can meta-learn Bayesian inference over task-specific BNNs. | Matthew Ashman, Tommy Rochussen, Adrian Weller | 2023-10-24T12:34:25Z | http://arxiv.org/abs/2310.15786v1 | [
###### Abstract
The global inducing point variational approximation for BNNs is based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution. Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint. This structure lends itself to amortised inference, in which the parameters of each approximate likelihood are obtained by passing each datapoint through a meta-model known as the inference network. By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs.
# Amortised Inference in Neural Networks for Small-Scale Probabilistic Meta-Learning

Matthew Ashman\({}^{*}\) [email protected]

Tommy Rochussen\({}^{*}\) [email protected]

Adrian Weller [email protected]
## 1 Introduction
In many machine learning applications, well-calibrated posterior predictive distributions are required for a number of closely-related datasets. Given similarity between datasets, it is natural to wish to develop meta-learning algorithms that utilise other datasets to reduce the computational complexity and / or improve predictive performance when deploying models on newly-seen datasets at test time. There have been a number of significant recent developments in meta-learning for predictive distributions, most notably that of the neural process (NP) family (Garnelo et al., 2018a;b; Foong et al., 2020; Gordon et al., 2018, 2019). Despite the utility of these methods on large-scale meta-datasets, they perform poorly in settings where the number of datasets and the total number of datapoints is small. We argue that this is a result of the large number of shared model parameters overfitting to the meta-dataset. A natural solution is to remove these shared model parameters, and instead train a meta-model to learn to approximate fully Bayesian inference over task-specific model parameters.
Recently, Ober and Aitchison (2021) developed a variational approximation for Bayesian neural networks (BNNs) based on using a set of inducing inputs to construct a series of conditional distributions that accurately approximate the conditionals of the true posterior distribution. Notably, the variational distribution consists of the prior multiplied by a set of approximate likelihoods for each inducing input. Our key insight is that these inducing inputs can be replaced by the actual data, such that the variational distribution consists of a set of approximate likelihoods for each datapoint. This structure lends itself to amortised inference, in which the parameters of each approximate likelihood are obtained by passing each datapoint through a meta-model known as the inference network. By training this inference network across related datasets, we can meta-learn Bayesian inference over task-specific BNNs, addressing the challenge above.
## 2 Related Work
Neural processes. Our work is most similar to the NP family (Garnelo et al., 2018a;b), which seeks to meta-learn predictive distributions either through maximisation of the posterior predictive likelihood or variational inference (VI) (Foong et al., 2020). Similar to our method, NPs utilise an encoder to create embeddings for each datapoint. These embeddings
are then aggregated to form a distribution over a latent variable which is then sampled and passed, together with a test datapoint, through a decoder. Volpp et al. (2020) propose the use of Bayesian aggregation, in which embeddings of individual datapoints take the form of approximate likelihoods which are multiplied together with the prior to form an approximate posterior distribution over the latent variable. Whilst these methods differ from ours in their use of shared model parameters, our method can be reinterpreted as a member of the NP family in which the latent variables are the parameters of the decoder. Through this perspective we can train our model in an identical way to NPs.
**Meta-learning neural networks.** Meta-learning for neural networks has received a significant amount of attention from the research community. Notable examples include MAML (Finn et al., 2017) and its extensions (Yoon et al., 2018; Antoniou et al., 2018), which seek good parameter initialisations, and those which explicitly condition on the dataset to obtain task-specific parameters (Requeima et al., 2019; Gordon et al., 2018). Whilst conceptually similar to our approach, these methods differ in their use of shared model parameters--the task-specific parameters amount to a small subset of the overall model parameters. In addition to requiring a large meta-dataset, this limits these methods to meta-datasets in which the individual datasets are very similar. By contrast, our approach does not use any shared model parameters but rather meta-learns inference in a BNN. We discuss the relationship between our method and NPs in Appendix A, demonstrating that they conceptually differ only in a change of objective functions.
## 3 Background
In this section we review the GI-BNN of Ober and Aitchison (2021) and the NP family (Garnelo et al., 2018). Throughout, let \(\mathbf{\Xi}=\{\mathcal{D}\}\) denote a meta-dataset of \(|\mathbf{\Xi}|\) datasets, and \(\mathcal{D}=\{\mathbf{X},\mathbf{y}\}\) denote a dataset consisting of inputs \(\mathbf{X}\in\mathbb{R}^{N\times D_{0}}\) and outputs \(\mathbf{y}\in\mathbb{R}^{N\times P}\).
### Global Inducing Point Variational Posteriors for BNNs
Let \(\mathbf{W}=\{\mathbf{W}^{\ell}\}_{\ell=1}^{L}\) denote the weights of an \(L\)-layer neural network, such that \(\mathbf{W}^{\ell}\in\mathbb{R}^{D^{\ell-1}\times D^{\ell}}\) where \(D^{\ell}\) denotes the dimension of the \(\ell\)-th hidden layer, and let \(\psi(\cdot)\) denote the element-wise activation function acting between layers. Ober and Aitchison (2021) introduce the global inducing point variational approximation for BNNs, in which the variational approximation to the posterior \(p(\mathbf{W}|\mathcal{D})\) is defined recursively as \(q_{\phi}(\mathbf{W})=\prod_{\ell=1}^{L}q_{\phi}(\mathbf{W}^{\ell}|\{\mathbf{W}^{\ell^{\prime}}\}_{\ell^{\prime}=1}^{\ell-1},\mathbf{U}^{0})\), where
\[q_{\phi}\left(\mathbf{W}^{\ell}|\{\mathbf{W}^{\ell^{\prime}}\}_{\ell^{\prime}=1}^{\ell-1},\mathbf{U}^{0}\right)\propto\prod_{d=1}^{D^{\ell}}p\left(\mathbf{w}_{d}^{\ell}\right)\underbrace{\mathcal{N}\left(\mathbf{v}_{d}^{\ell};\psi(\mathbf{U}^{\ell})\mathbf{w}_{d}^{\ell},\left[\mathbf{\Lambda}_{d}^{\ell}\right]^{-1}\right)}_{t_{d}^{\ell}(\mathbf{w}_{d}^{\ell})}. \tag{1}\]
This mirrors the structure of the true posterior in the sense that it is equivalent to the product of the prior and an _approximate likelihood_, \(t_{d}^{\ell}(\mathbf{w}_{d}^{\ell})\). The variational parameters \(\phi\) of this approximation are the parameters of each approximate likelihood, \(\mathbf{v}_{d}^{\ell}\in\mathbb{R}^{M}\) and \(\mathbf{\Lambda}_{d}^{\ell}\in\mathbb{R}^{M\times M}\)--which themselves can be interpreted as _pseudo observations_--and the _global
inducing locations_, \(\mathbf{U}^{0}\in\mathbb{R}^{M\times D_{0}}\), which are used to define \(\{\mathbf{U}^{\ell}\}_{\ell=1}^{L}\) according to
\[\mathbf{U}^{1}=\mathbf{U}^{0}\mathbf{W}^{1},\quad\mathbf{U}^{\ell}=\psi(\mathbf{U}^{\ell-1})\mathbf{W}^{\ell}\quad\ell=2,\ldots,L. \tag{2}\]
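To make the recursion in Eq. (2) concrete, here is a minimal NumPy sketch of propagating the global inducing locations through one set of sampled weights; the function and variable names, and the choice \(\psi=\tanh\), are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def propagate_inducing(U0, weights, psi=np.tanh):
    """Propagate global inducing locations through sampled weights, Eq. (2).

    U0:      (M, D0) array of global inducing inputs.
    weights: list of weight samples; weights[l] has shape (D^l, D^{l+1}).
    Returns [U^1, ..., U^L].
    """
    Us = [U0 @ weights[0]]              # U^1 = U^0 W^1 (no activation yet)
    for W in weights[1:]:
        Us.append(psi(Us[-1]) @ W)      # U^l = psi(U^{l-1}) W^l
    return Us
```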
Optimisation of \(\phi\) is achieved through maximisation of the evidence lower bound (ELBO):
\[\mathcal{L}_{\mathrm{ELBO}}(\phi;\mathcal{D})=\mathbb{E}_{q_{\phi}(\mathbf{W}) }\left[\log p(\mathbf{y}|\mathbf{W},\mathbf{X})\right]-\mathrm{KL}\left[q_{ \phi}(\mathbf{W})||p(\mathbf{W})\right]. \tag{3}\]
We refer to this variational approximation as pseudo-observation variational inference for BNNs (POVI-BNN).
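In practice, the ELBO in Eq. (3) is estimated by simple Monte Carlo. A hedged sketch follows, assuming callables that sample from \(q_{\phi}\) and evaluate the relevant log-densities; when the KL term has no convenient closed form, it can be estimated from the same samples as \(\mathbb{E}_{q}[\log q-\log p]\).

```python
import numpy as np

def elbo_estimate(sample_q, log_lik, log_prior, log_q, n_samples=16):
    """Monte Carlo estimate of Eq. (3) from n_samples weight draws."""
    vals = []
    for _ in range(n_samples):
        W = sample_q()                       # one draw from q_phi(W)
        kl_term = log_q(W) - log_prior(W)    # single-sample KL estimate
        vals.append(log_lik(W) - kl_term)
    return float(np.mean(vals))
```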
Ober and Aitchison (2021) demonstrate the efficacy of POVI-BNNs relative to mean-field Gaussian variational approximations for BNNs, achieving state-of-the-art performance on a number of regression and classification experiments. Their effectiveness is further demonstrated by Bui (2022), who shows that the estimate of the marginal likelihood provided by the POVI-BNN approximation is close to the true value, indicating the approximation is close to the true posterior.
## 4 Amortising Inference in Bayesian Neural Networks
In this section, we build upon the work of Ober and Aitchison (2021) to develop an effective method of performing amortised inference in BNNs.
Consider the same variational approximation described in Section 3.1, except with diagonal precision matrices \(\boldsymbol{\Lambda}_{d}^{\ell}\) and with the inducing locations \(\mathbf{U}^{0}\) replaced by the training inputs \(\mathbf{X}\), such that
\[q\left(\mathbf{W}^{\ell}|\{\mathbf{W}^{\ell^{\prime}}\}_{\ell^{\prime}=1}^{ \ell-1},\mathcal{D}\right)\propto\prod_{d=1}^{D_{\ell}}p\left(\mathbf{w}_{d}^ {\ell}\right)\prod_{n=1}^{N}\underbrace{\mathcal{N}\left(v_{d,n}^{\ell};x_{d,n }^{\ell},\sigma_{d,n}^{\ell}\right)}_{t_{d,n}^{\ell}(\mathbf{w}_{d}^{\ell})} \tag{4}\]
where
\[\mathbf{x}_{n}^{1}=\mathbf{W}^{1}\mathbf{x}_{n},\quad\mathbf{x}_{n}^{\ell}= \mathbf{W}^{\ell}\psi(\mathbf{x}_{n}^{\ell-1})\quad\forall\ell=2,\ldots,L. \tag{5}\]
This form of variational approximation enables the use of per-datapoint amortised inference. Specifically, rather than treating the variational parameters of each approximate likelihood, the pseudo-observations \(v_{d,n}^{\ell}\) and variances \(\sigma_{d,n}^{\ell}\), as free parameters to be optimised directly, we obtain them as the outputs of an inference network that is shared across datapoints: each datapoint is passed through the inference network, which emits the parameters of that datapoint's approximate likelihood factors.
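Because every factor in Eq. (4) is Gaussian in the weights, the implied per-layer posterior can be formed in closed form as a precision-weighted product of the prior with the \(N\) per-datapoint factors. The sketch below does this for a single output unit; `Phi`, `v` and `sigma2` are assumed names for the post-activation layer inputs and the pseudo-observations and variances emitted by the inference network.

```python
import numpy as np

def unit_posterior(Phi, v, sigma2, prior_var=1.0):
    """q(w) proportional to N(w; 0, prior_var*I) * prod_n N(v_n; phi_n.w, sigma2_n).

    Phi: (N, D) layer inputs, v: (N,) pseudo-observations,
    sigma2: (N,) per-datapoint variances. Returns posterior mean, covariance.
    """
    Lam = np.eye(Phi.shape[1]) / prior_var + Phi.T @ (Phi / sigma2[:, None])
    Sigma = np.linalg.inv(Lam)               # posterior covariance
    mu = Sigma @ (Phi.T @ (v / sigma2))      # posterior mean
    return mu, Sigma
```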
An important limitation of this approach is that performing stochastic optimisation by mini-batching datapoints is not possible, as computing \(q(\mathbf{W}|\mathcal{D})\) requires passing the entire dataset \(\mathcal{D}\) through the inference network. Nonetheless, this limitation is only significant for large datasets--provided the entire dataset can be passed through the network at once (i.e. in the small to medium-sized dataset regime which we consider here), this is not an issue. At test time, we can obtain an approximate posterior \(q(\mathbf{W}|\mathcal{D}_{*})\) with a single pass of the dataset through the inference network.
## 5 Results and Discussion
In this section, we evaluate the performance of our model in the meta-learning setting. We consider a synthetic meta-dataset consisting of samples from a GP with a squared-exponential (SE) kernel. Each dataset consists of between 10 and 20 training datapoints. We compare A-POVI-BNNs to amortised mean-field variational inference for BNNs (A-MFVI-BNN), which we detail in Appendix B, and a ConvCNP (Gordon et al., 2019). All
Figure 1: Posterior predictive distributions for A-POVI-BNN (top), A-MFVI-BNN (middle), and ConvCNP (bottom) after training on a meta-dataset of size \(|\Xi|=1\) (left) and \(|\Xi|=100\) (right). Data points are shown as black dots, the predictive mean is shown as a dark blue line and the 95% confidence interval is shown as shaded blue.
NNs used in the amortised BNN architectures (both the model and inference network) consist of two layers of 50 hidden units and ReLU activation functions. The ConvCNP implementation and architecture are identical to those provided in [https://github.com/cambridge-mlg/convcnp/blob/master/convcnp_regression.ipynb](https://github.com/cambridge-mlg/convcnp/blob/master/convcnp_regression.ipynb).
Figures 1(_a_) to 1(_f_) compare the posterior predictive distributions of each method on an unseen dataset drawn from a GP with the same hyperparameters as those in the meta-dataset. We evaluate the predictions of each method trained on a meta-dataset of size \(|\Xi|=1\) and \(|\Xi|=100\). We see that only the A-POVI-BNN obtains a sensible predictive posterior on the unseen dataset in both cases. The ConvCNP performs poorly for \(|\Xi|=1\), which is unsurprising given its large number of model parameters, which increases its susceptibility to overfitting. The A-MFVI-BNN performs significantly better for \(|\Xi|=100\); yet, in both cases the quality of its predictive posterior is poor relative to both the A-POVI-BNN and the ConvCNP.
Despite these results being very preliminary, they are encouraging and suggest that A-POVI-BNNs may provide a more effective alternative to NPs when meta-datasets are small. We intend to explore the effectiveness of A-POVI-BNNs in more diverse settings, such as image completion, in future work.
|
2309.01690 | Direction-of-arrival estimation with conventional co-prime arrays using
deep learning-based probabilistic Bayesian neural networks | The paper investigates the direction-of-arrival (DOA) estimation of narrow
band signals with conventional co-prime arrays by using probabilistic Bayesian
neural networks (PBNN). A super resolution DOA estimation method based on
Bayesian neural networks and a spatially overcomplete array output formulation
overcomes the pre-assumption dependencies of the model-driven DOA estimation
methods. The proposed DOA estimation method utilizes a PBNN model to capture
both data and model uncertainty. The developed PBNN model is trained to do the
mapping from the pseudo-spectrum to the super resolution spectrum. This
learning-based method enhances the generalization of untrained scenarios, and
it provides robustness to non-ideal conditions, e.g., small angle separation,
data scarcity, and imperfect arrays, etc. Simulation results demonstrate the
loss curves of the PBNN model and deterministic model. Simulations are carried
out to validate the performance of PBNN model compared to a deterministic model
of conventional neural networks (CNN). | Wael Elshennawy | 2023-09-04T16:07:30Z | http://arxiv.org/abs/2309.01690v1 | Direction-of-arrival estimation with conventional co-prime arrays using deep learning-based probabilistic Bayesian neural networks
###### Abstract
The paper investigates the direction-of-arrival (DOA) estimation of narrow band signals with conventional co-prime arrays by using probabilistic Bayesian neural networks (PBNN). A super resolution DOA estimation method based on Bayesian neural networks and a spatially overcomplete array output formulation overcomes the pre-assumption dependencies of the model-driven DOA estimation methods. The proposed DOA estimation method utilizes a PBNN model to capture both data and model uncertainty. The developed PBNN model is trained to do the mapping from the pseudo-spectrum to the super resolution spectrum. This learning-based method enhances the generalization of untrained scenarios, and it provides robustness to non-ideal conditions, e.g., small angle separation, data scarcity, and imperfect arrays, etc. Simulation results demonstrate the loss curves of the PBNN model and deterministic model. Simulations are carried out to validate the performance of PBNN model compared to a deterministic model of conventional neural networks (CNN).
Wael Elshennawy
[email protected]

Keywords: Direction-of-arrival (DOA) estimation, co-prime arrays, Bayesian neural networks, neural networks
## 1 Introduction
The co-prime arrays are a class of sparse arrays that can achieve higher degrees-of-freedom (DOF), which can be exploited in both beamforming and DOA estimation [1]. The co-prime arrangement has been shown to possess the capability of cancelling spatial aliasing [2], though side lobes may still exist in the beampattern, which affects the resolution of a DOA estimation algorithm. Therefore, DOA estimation approaches are needed to further explore the advantages of co-prime arrays. Earlier approaches rely on subspace-based DOA estimation methods such as multiple signal classification (MUSIC) [3, 4]. Meanwhile, these methods require spatial smoothing to restore the rank of the signal covariance matrix [5]. A short and non-exhaustive list of recent works is based on sparse reconstruction so as to use all the unique lags [2, 6]. However, these model-driven methods face great robustness challenges under non-ideal conditions [4].
Another approach that provides robust performance against non-ideal conditions is the use of deep convolutional neural networks [2, 7]. Nevertheless, it is based on deterministic neural networks, so an approach that is robust to adverse environments is still needed. Probabilistic deep learning removes this limitation by quantifying and processing the uncertainty [8]. To further tackle model and data uncertainty, an off-grid DOA estimation method has been proposed from the perspective of variational Bayesian inference [9]. Motivated by the advantages of Bayesian neural networks in [10], this deep probabilistic model is developed based on normalizing flows for Bayesian neural networks to model complex probability distributions [11].
Generally, existing sparsity-inducing DOA estimation methods based on sparse Bayesian learning (SBL) have been demonstrated to achieve enhanced precision [7]. However, the learning process of those methods converges slowly when the SNR is relatively low. To overcome this challenge, co-prime arrays are used in this paper, as they provide high SNRs. The PBNN model offers adaptation to various array imperfections and enhanced generalization to unseen scenarios. In addition, PBNN focuses on marginalization, its estimates are maximum a posteriori (MAP), and it relies on variational inference and normalizing flows to find the optimal values. Its goal is to quantify the model and data uncertainty to explain the trustworthiness of the prediction, thereby avoiding overfitting.
The main contribution of this paper is to consider a probabilistic approach integrated with deep learning that allows accounting for the uncertainty in the DOA estimation of co-prime arrays [11], so that the trained model assigns lower levels of confidence to incorrect DOA predictions. The PBNN model is implemented by using the TensorFlow Probability (TFP) library [12]. Many concepts are used throughout this paper to develop the PBNN model, including latent variables [13], probabilistic layers [14], bijectors [15], evidence lower bound (ELBO) optimization, and Kullback-Leibler (KL) divergence regularizers [16]. The presented deep learning approach tends to bring more reliable DOA estimation, and it has the potential to be applied in real-world environments.
## 2 Signal Model of Co-Prime Arrays
The co-prime arrays are the union of two uniform linear sub-arrays, as illustrated in Fig. 1. One sub-array consists of \(2M\) elements with a spacing of \(N\) units; the other is composed of \(N\) elements with a spacing of \(M\) units. The positions are given by the set \(\mathbb{P}\) in [6] as
\[\mathbb{P}=\{Mnd,0\leq n\leq N-1\}\cup\{Nmd,0\leq m\leq 2M-1\}. \tag{1}\]
where \(M\) and \(N\) are co-prime, and it is assumed that \(M<N\). The zeroth sensor positions are collocated, so the co-prime arrays consist of \(N+2M-1\) elements. The fundamental spacing \(d\) is usually set to a half-wavelength to avoid spatial aliasing. \(K\) independent narrow band sources \(\mathbf{s}(t)=[s_{1}(t)\,s_{2}(t)\,...\,s_{K}(t)]\) impinge on the co-prime arrays from the directions \(\{\theta_{1},...,\theta_{K}\}\). The array output is formulated in [5] as
\[\mathbf{x}(t)=\sum_{k=1}^{K}\mathbf{a}(\theta_{k})s_{k}(t)+\mathbf{n}(t)= \mathbf{A}\mathbf{s}(t)+\mathbf{n}(t), \tag{2}\]
where \(\mathbf{A}=[\mathbf{a}(\theta_{1}),\mathbf{a}(\theta_{2}),...,\mathbf{a}( \theta_{K})]\) denotes the array manifold matrix, and
\[\mathbf{a}(\theta_{k})=[e^{-j2\pi d_{1}/\lambda\sin\theta_{k}},...,e^{-j2\pi d _{N+2M-1}/\lambda\sin\theta_{k}}]^{T} \tag{3}\]
is the steering vector corresponding to \(\theta_{k}\). The \(d_{1},d_{2},...,\)
\(d_{N+2M-1}\) hold the information of the sparse element positions, and \([.]^{T}\) denotes the transpose of a matrix. \(\mathbf{s}(t)\) represents the source signal vector, with \(s_{k}(t)\) distributed as \(\mathcal{CN}(0,\sigma_{k}^{2})\). The source signals are assumed to be temporally uncorrelated. The entries of the noise vector \(\mathbf{n}(t)\) are assumed to be independent and identically distributed (i.i.d.) random variables. Also, \(\mathbf{n}(t)\) follows a complex Gaussian distribution \(\mathcal{CN}(0,\sigma_{n}^{2})\), and its entries are not correlated with the source signals.
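As a quick illustration of Eqs. (1)-(3), the sketch below generates the co-prime element positions and simulates snapshots of the array output; the function names and the circularly-symmetric Gaussian signal model are illustrative assumptions.

```python
import numpy as np

def coprime_positions(M=3, N=5):
    """Element positions of the co-prime arrays in units of d, Eq. (1)."""
    pos = {M * n for n in range(N)} | {N * m for m in range(2 * M)}
    return np.array(sorted(pos))

def array_output(pos, thetas_deg, powers, Q=256, noise_var=1.0):
    """Simulate Q snapshots of x(t) = A s(t) + n(t), Eqs. (2)-(3).

    pos is in units of the half-wavelength spacing d, so d_i/lambda = pos_i/2.
    """
    rng = np.random.default_rng(0)
    theta = np.deg2rad(np.asarray(thetas_deg))
    A = np.exp(-1j * np.pi * np.outer(pos, np.sin(theta)))   # steering matrix
    K, P = len(theta), len(pos)
    s = np.sqrt(np.asarray(powers, float)[:, None] / 2) * (
        rng.standard_normal((K, Q)) + 1j * rng.standard_normal((K, Q)))
    n = np.sqrt(noise_var / 2) * (
        rng.standard_normal((P, Q)) + 1j * rng.standard_normal((P, Q)))
    return A @ s + n

X = array_output(coprime_positions(), [-65, -23, 4, 36], [1, 1, 1, 1])
```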
## 3 Proposed Approach
Many machine learning models, like deep neural networks, are capable of automatically extracting the necessary features from large inputs. However, there are two main problems with this approach: computational complexity and the need for a large amount of training data. As a machine learning model grows, it becomes more complex; it would take a lot of processing power to automatically learn which features are useful for the classifier, and using a feature extractor with high-dimensional inputs is challenging in real time. It is also necessary to store at least that amount of data in memory and to perform mathematical operations on each of those values. Feature extraction sections in such a model can often be several layers deep, adding to the computational complexity required to perform inference.
Thus, it eventually requires an endless supply of training data and an endless amount of time for training. We do not have that luxury in applications like the DOA estimation problem, especially as it will, in the near future, be programmed into embedded systems. One ultimately wants to keep the machine learning model as small and fast as possible. Most machine learning algorithms require a lot of memory and processing power, and those are in limited supply in most embedded systems. Feature extraction fulfills this requirement: it builds valuable information from the raw dataset. The features cycle through reformatting, combining, and transforming primary features into new ones, until they yield a new set of data that can be consumed by the machine learning models to achieve their goals. It also includes filters as a much faster alternative: filters usually do not test any algorithm, but rank the original features according to their relationship with the problem (labels) itself and simply select the top of them. Here, feature extraction for the PBNN model is achieved by employing the preprocessing step outlined in the next subsection.
The proposed DOA estimation method for co-prime arrays is illustrated in Fig. 2. The array output is preprocessed to be used by a Bayesian neural network-based model for classification. The pseudo spectrum is calculated from the observation vector and the extended array manifold matrix of a virtual array. This pseudo spectrum is used as the input vector of the PBNN model, and the corresponding super resolution spectrum is recovered at the output. Thus, this allows us to integrate probabilistic deep learning into a super-resolution DOA estimation method. In addition, this processing fully maintains the virtual array generated from the co-prime arrays and effectively improves the original SNR.
Figure 1: Geometry of co-prime arrays. Adapted from [3].
Figure 2: Architecture of the proposed DOA estimation method. Adapted from [2].
### Preprocessing and Feature Extraction
The covariance matrix \(\mathbf{R}\) is given by [1]
\[\mathbf{R}=\mathbb{E}[\mathbf{x}(t)\mathbf{x}^{H}(t)]=\sum_{k=1}^{K}\mu_{k} \mathbf{a}(\theta_{k})\mathbf{a}^{H}(\theta_{k})+\sigma_{n}^{2}\mathbf{I}, \tag{4}\]
where \(\mathbf{R}\) can only be estimated using \(Q\) snapshots in practical applications, i.e.
\[\mathbf{\hat{R}}=\frac{1}{Q}\sum_{q=1}^{Q}\mathbf{x}(t_{q})\mathbf{x}^{H}(t_{ q})=\mathbf{R}+\Delta\mathbf{R}, \tag{5}\]
where \(\mathbf{\hat{R}}\) is the maximum likelihood estimator of \(\mathbf{R}\) and \(\Delta\mathbf{R}\) is the estimation error of \(\mathbf{R}\)[2]. By vectorizing \(\mathbf{\hat{R}}\), the observation vector of the virtual array is given in [3] as
\[\mathbf{y} =\mathbf{vec}(\mathbf{\hat{R}})=\mathbf{vec}(\mathbf{R})+\mathbf{vec}(\Delta\mathbf{R})\] \[=\mathbf{\tilde{A}}\boldsymbol{\mu}+\sigma_{n}^{2}\mathbf{vec}(\mathbf{I})+\Delta\mathbf{y}, \tag{6}\]
where
\[\mathbf{\tilde{A}} =[\mathbf{a}^{*}(\theta_{1})\otimes\mathbf{a}(\theta_{1}),\mathbf{a}^{*}(\theta_{2})\otimes\mathbf{a}(\theta_{2}),...,\mathbf{a}^{*}(\theta_{K})\otimes\mathbf{a}(\theta_{K})]\] \[=[\mathbf{\tilde{a}}(\theta_{1}),\mathbf{\tilde{a}}(\theta_{2}),...,\mathbf{\tilde{a}}(\theta_{K})], \tag{7}\]
\(\otimes\) represents the Kronecker product and \((.)^{*}\) is the conjugate operation. The signal of interest becomes \(\boldsymbol{\mu}=[\mu_{1},\mu_{2},...,\mu_{K}]^{\mathbf{T}}\), where \(\mu_{k}\) denotes the input signal power of the \(k\)th source, and \(\Delta\mathbf{y}=\mathbf{vec}(\Delta\mathbf{R})\), where \(\Delta\mathbf{y}\) becomes negligible as the number of snapshots \(Q\rightarrow\infty\) under stationary and ergodic assumptions. Note that \(\mathbf{y}\) amounts to the received data from a virtual array with a much larger aperture defined by the virtual steering matrix \(\mathbf{\tilde{A}}\) having the co-array lag locations [5]. Therefore \(\mathbf{\tilde{A}}\) behaves like the manifold of a longer equivalent virtual array [6].
Next, by removing the repeated elements of \(\mathbf{y}\) and sorting the remaining in an increasing order from \(-MNd\) to \(MNd\), the output \(\mathbf{\tilde{y}}\) is extracted without redundancy for a linear model [3]. By extending the corresponding steering vector, the output of the virtual array can be reconstructed in [1] as
\[\mathbf{\tilde{y}} =\mathbf{B}\boldsymbol{\mu}+\sigma_{n}^{2}\mathbf{vec}(\mathbf{I }),\] \[\mathbf{B} =[\mathbf{b}^{*}(\theta_{1})\otimes\mathbf{b}(\theta_{1}), \mathbf{b}^{*}(\theta_{2})\otimes\mathbf{b}(\theta_{2}),...,\mathbf{b}^{*}( \theta_{W})\otimes\mathbf{b}(\theta_{W})], \tag{8}\]
where \(\mathbf{B}\in\mathbb{C}^{(N+2M-1)^{2}\times W}\). \(\boldsymbol{\mu}=[\mu_{1},\mu_{2},...,\mu_{W}]^{\mathbf{T}}\), \(W\geq K\). \([\theta_{1},\theta_{2},...,\theta_{W}]\) is sampled from the spatial spectrum of incident signals with an interval of \(\Delta\theta\). The spatial spectrum \(\boldsymbol{\mu}\) is constructed with \(W\) grids, which has nonzero values at the true signal directions. The pseudo-spectrum is given by [1]
\[\mathbf{\tilde{\mu}}=\mathbf{B}^{\mathbf{H}}\mathbf{\tilde{y}}, \tag{9}\]
as the input of the Bayesian neural network. This strategy maintains the virtual array generated from the co-prime arrays, and it effectively improves the original SNR [7].
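The preprocessing pipeline of Eqs. (4)-(9) can be sketched as follows. This simplified version averages redundant co-array lags rather than merely removing them, and all names are illustrative.

```python
import numpy as np

def pseudo_spectrum(X, pos, grid_deg):
    """Sample covariance -> virtual-array observation -> pseudo-spectrum.

    X: (P, Q) snapshots, pos: element positions in units of d,
    grid_deg: W spectrum grid angles in degrees.
    """
    P, Q = X.shape
    R_hat = X @ X.conj().T / Q                      # Eq. (5)
    lags = (pos[:, None] - pos[None, :]).ravel()    # co-array lag of each entry
    y = R_hat.ravel()
    ulags = np.unique(lags)                         # sorted, -MNd ... MNd
    y_tilde = np.array([y[lags == l].mean() for l in ulags])
    theta = np.deg2rad(np.asarray(grid_deg))
    B = np.exp(-1j * np.pi * np.outer(ulags, np.sin(theta)))  # virtual manifold
    return np.abs(B.conj().T @ y_tilde)             # Eq. (9)
```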
To demonstrate the resolution of the pseudo-spectrum \(\mathbf{\tilde{\mu}}\) of the co-prime arrays, and to show how it helps in training the Bayesian neural network, consider co-prime arrays consisting of 10 physical antenna elements, designed by assuming \(M=3\) and \(N=5\). Suppose two different sets of signal sources are impinging on the array from the direction sets \(A_{s}=\{-65^{o},-23^{o},4^{o},36^{o}\}\) and \(B_{s}=\{-65^{o},-50^{o},-27^{o},-15^{o},-5^{o},5^{o},15^{o},35^{o},47^{o},61^{o}\}\). Clearly, the co-prime arrays can achieve higher DOF and resolution, as illustrated in Fig. 3.
## 4 DOA Estimation Based on PBNN Model
The idea is to create a neural network with weight uncertainty by combining the neural network with Bayesian inference. Usually, there are two categories of uncertainty, aleatoric and epistemic [11], so there is a need for a method of designing a deep learning model that accounts for the uncertainty. In practice, and especially considering that the dataset is finite, there will most likely be many possible parameter values that do a good job of modeling the relationship between the dataset inputs and the target values.
Figure 3: Pseudo spectrum.
If more data is collected, then the model has more information about that relationship, and the likely set of model parameters will probably narrow down. This likely set of parameter values given a dataset is represented as a distribution over all possible parameter values and is called the posterior distribution [16]. Conventionally, the term weights will be used to refer to weights and biases for the remainder of the paper. Here, the PBNN model is based on the use of probabilistic neural networks [15], and the probabilistic layers are implemented by employing the TFP library [17].
### Bayesian Inference and Posterior Probability
The Bayesian approach is usually implemented by using the Bayes by Backprop algorithm [13], which uses variational inference to give an approximation of the posterior distribution over the model weights [9]. Concisely, the true labels and the likelihood function are used to find the best weights of the Bayesian neural network [12]. For instance, the neural network is a function that maps a pseudo-spectrum data point \(\tilde{\mu}_{i}\) to the proper parameters of some distribution. The PBNN model with weights \(\mathbf{W}\) is developed to classify data points \(\tilde{\mu}_{i}\). Hence, the neural network prediction (the feed-forward value) \(\hat{\mu}_{i}\) is defined in [15] as
\[\hat{\mu}_{i}=\text{BNN}(\tilde{\mu}_{i}|\mathbf{W}). \tag{10}\]
Determining \(\mathbf{W}\) amounts to training a model, under the assumption that the prediction \(\hat{\mu}_{i}\) parameterizes a distribution from which the true label is drawn. Let the data be \(\tilde{\mu}_{i}\) with true labels \(\mu_{i}\) for \(i=1,...,N_{s}\), where \(N_{s}\) is the number of training samples. Then the training dataset is given as
\[\textbf{D}=\{(\tilde{\mu}_{1},\mu_{1}),...,(\tilde{\mu}_{N_{s}},\mu_{N_{s}})\}. \tag{11}\]
Each point \(\tilde{\mu}_{i}\) has a corresponding prediction \(\hat{\mu}_{i}\), which specifies a distribution over the true label \(\mu_{i}\). The weights of the trained neural network are then those that minimise the negative log-likelihood loss function, given in [10] as
\[\mathbf{W}^{*} =\underset{\mathbf{W}}{\text{arg min}}\Big(-\sum_{i=1}^{N_{s}}\log L(\mu_{i}|\hat{\mu}_{i})\Big),\] \[=\underset{\mathbf{W}}{\text{arg min}}\Big(-\sum_{i=1}^{N_{s}}\log L(\mu_{i}|\text{BNN}(\tilde{\mu}_{i}|\mathbf{W}))\Big). \tag{12}\]
In practice, determining the true optimum \(\mathbf{W}^{*}\) is not always possible. Instead, an approximate value is sought using optimization algorithms such as root mean squared propagation (RMSProp) or adaptive moment estimation (Adam) [11].
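With a TFP probabilistic output layer, the negative log-likelihood loss of Eq. (12) reduces to a one-liner, since the model output is a distribution object. A sketch, assuming a Keras model whose final layer returns a TFP distribution:

```python
import tensorflow as tf

def negative_log_likelihood(y_true, y_dist):
    # y_dist is the distribution returned by the model's probabilistic head,
    # so the per-example loss is simply -log p(y_true).
    return -y_dist.log_prob(y_true)

# model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.05),
#               loss=negative_log_likelihood)
```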
## 5 Simulation Results
In this section, the implementations of both the deterministic model and the PBNN model are presented. DOA prediction is modeled as a multi-label classification task [17]. The training, validation, and testing datasets are generated by using a Keras generator [17]. The simulations are computed using a Python 3 Google Compute Engine backend with a graphics processing unit (GPU) enabled, in Google Colaboratory notebooks with a mounted drive of size 12.7 GB.
### Simulation Settings and Network Training
Consider co-prime arrays consisting of 10 physical antenna elements, designed by taking \(M=3\), \(N=5\). The unit spacing \(d\) is chosen to be a half-wavelength. The covariance matrices are computed using 256 snapshots. The spectrum grid of interest \([-15^{o},15^{o}]\) is sampled at \(1^{o}\) intervals to form 31 spectrum grid units. The PBNN model and the deterministic model are trained using two signal sources. The simulated signal sources satisfy the far-field narrow-band plane wave conditions. The SNRs of the signal sources are drawn from the range \([-10,10]\) dB with a 1 dB interval.
The models are trained for 10 epochs with a mini-batch size of 32, and the sample set is shuffled at every epoch. The models are fine-tuned using the RMSProp optimizer [17] with a learning rate of 0.05. The total numbers of training and testing dataset samples are 500 and 100, respectively, with 10\(\%\) of the training samples set aside as a validation dataset to evaluate the models after each epoch. The architecture of the deterministic model is illustrated in Fig. 4. The deterministic model is stacked from the following Keras layers: Conv1D, BatchNormalization, AveragePooling1D, Flatten, Dropout, and Dense [11].
To build the PBNN model, the deterministic model is first transformed into a probabilistic model as an intermediate step by setting the output of the model's final layer to a distribution instead of a deterministic tensor. This probabilistic model can then capture the aleatoric uncertainty on the target DOAs; it is implemented by adding a probabilistic layer as the final model layer [14]. Next, this probabilistic model is turned into a PBNN model designed to capture both aleatoric and epistemic uncertainty by changing model layers into reparameterization layers [12], as illustrated in Fig. 4. Epistemic uncertainty is further embedded into the model weights by replacing the Conv1D and Dense layers of the deterministic model with Convolution1DReparameterization and DenseVariational layers [12], respectively.
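A hedged sketch of this layer swap is given below. It is illustrative rather than the exact architecture of Fig. 4: the layer widths are assumed, and `DenseReparameterization` is used as a simpler stand-in for the `DenseVariational` layer mentioned above (which additionally requires user-supplied prior and posterior constructors). The reparameterization layers add their KL terms to the model losses automatically, so compiling with the negative log-likelihood loss yields an ELBO-style objective.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfpl = tfp.layers

def make_pbnn(n_grid=31):
    """Toy PBNN: variational conv/dense layers plus a multi-label head."""
    return tf.keras.Sequential([
        tfpl.Convolution1DReparameterization(
            16, kernel_size=3, activation='relu', input_shape=(n_grid, 1)),
        tf.keras.layers.AveragePooling1D(),
        tf.keras.layers.Flatten(),
        tfpl.DenseReparameterization(
            tfpl.IndependentBernoulli.params_size(n_grid)),
        tfpl.IndependentBernoulli(n_grid),   # one on/off label per grid angle
    ])
```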
These models are trained under the same conditions for comparison purposes. The loss functions, negative log-likelihood and root mean square error (RMSE), are used to measure the DOA estimation performance of each model, as illustrated in Fig. 5. The PBNN model provides faster convergence in the early stages and lower training loss values throughout the whole training procedure, even though the number of trainable variables of the PBNN model is quite large compared to the deterministic model, as tabulated in Table 1. The training and validation loss curves of the PBNN model
are very close, which reveals that the PBNN model generalizes well. Clearly, the validation loss curves level off before 10 epochs. Thus, there is no overfitting in the training phase of the PBNN model.
Also, the training step covers different scenarios, including changing the angle of separation, the number of snapshots, etc. Model uncertainty due to insufficient data availability for the model to learn effectively is mitigated by increasing the size of the training data generated using the Keras data generator, which is based on a data augmentation method for better model regularization. BNNs place a probability distribution on network weights and give a built-in regularization effect, making the proposed PBNN model able to learn well from small datasets without overfitting. By introducing prior and posterior probabilities, it preserves the uncertainty that reflects the instability of statistical inference from a small number of instances of evidence. The two properties, sparsity of the recovery method and stability, are at odds with each other, but variational Bayesian inference introduces an algorithmically stable model.
The PBNN model consists of only 2 hidden network layers, as illustrated in Fig. 4. Therefore, it would be useful to increase the size of the model, e.g., by stacking extra network layers, to reach lower loss and MSE values in the case of a wider angular spectral range. However, the ultimate goal is to develop a simple PBNN model for easy deployment on a microcontroller chip, providing TinyML operating in the realm of edge AI. This can serve as ultra-low-power machine learning at the edge.
Finally, the angular resolution of the trained PBNN model is tested by incrementally changing the angular separation between two closely spaced signal sources over an angular range varying between \(1^{o}\) and \(7^{o}\) with a step size of \(1^{o}\). As illustrated in Fig. 6, the PBNN model indeed learned to predict the DOAs, and it shows robustness. Most importantly, the probabilistic Bayesian neural network gives a built-in regularization effect, making the PBNN model able to learn well from small datasets without overfitting. However, Bayesian estimation is computationally very expensive since it greatly widens the parameter space [14]. The pros and cons must be weighed by the user to determine whether this type of neural network is appropriate for the application at hand. Since the weights of the network are distributions instead of single values, more data is required to accurately estimate the weights.
| Parameter | Deterministic | PBNN |
| --- | --- | --- |
| Training Loss | 0.0441 | 33.610 |
| Validation Loss | 0.0362 | 31.606 |
| Training Time (s) | 1130.0 | 1034.5 |
| Trainable Variables | 1,695 | 3,326 |
| Total Parameters | 1,727 | 3,326 |

Table 1: Performance comparison between the deterministic model and the PBNN model.
Figure 4: Deterministic model and PBNN model architectures.
Figure 5: Training and validation losses versus epoch.
## 6 Conclusion
The paper presents a PBNN-based sparse signal recovery method for DOA estimation with co-prime arrays. The PBNN-based DOA estimation accounts for the modeling of data and model uncertainty. A CNN is combined with probabilistic layers to learn the mapping from the pseudo-spectrum to the spatial spectrum. The input processing fully maintains the DOF and resolution of the virtual array. The PBNN model can achieve faster convergence at the early training stages and lower training losses. Moreover, the PBNN model adapts well to small angular separations. Simulation results demonstrate the performance advantages of the PBNN model over the deterministic model according to multiple evaluation metrics. Notably, the PBNN model has higher computational complexity than the deterministic model. Coarse DOA estimation is obtained by balancing the accuracy and efficiency of parameter estimation using the variational Bayesian-based DOA estimation method. With this PBNN model, the possibility of misclassification is minimized. Thus, the proposed DOA estimation method can achieve spectrum autocalibration under non-ideal conditions for the co-prime arrays. In the future, the goal is to develop the PBNN model for real-time scenarios with limited computational resources, such as embedded machine learning deployed on a hardware accelerator.
|
2305.10460 | Topology Optimization using Neural Networks with Conditioning Field
Initialization for Improved Efficiency | We propose conditioning field initialization for neural network based
topology optimization. In this work, we focus on (1) improving upon existing
neural network based topology optimization, (2) demonstrating that by using a
prior initial field on the unoptimized domain, the efficiency of neural network
based topology optimization can be further improved. Our approach consists of a
topology neural network that is trained on a case by case basis to represent
the geometry for a single topology optimization problem. It takes in domain
coordinates as input to represent the density at each coordinate where the
topology is represented by a continuous density field. The displacement is
solved through a finite element solver. We employ the strain energy field
calculated on the initial design domain as an additional conditioning field
input to the neural network throughout the optimization. The addition of the
strain energy field input improves the convergence speed compared to standalone
neural network based topology optimization. | Hongrui Chen, Aditya Joglekar, Levent Burak Kara | 2023-05-17T07:42:24Z | http://arxiv.org/abs/2305.10460v1 | Topology Optimization using Neural Networks with Conditioning Field Initialization for Improved Efficiency
###### Abstract
We propose conditioning field initialization for neural network based topology optimization. In this work, we focus on (1) improving upon existing neural network based topology optimization, (2) demonstrating that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be further improved. Our approach consists of a topology neural network that is trained on a case by case basis to represent the geometry for a single topology optimization problem. It takes in domain coordinates as input to represent the density at each coordinate where the topology is represented by a continuous density field. The displacement is solved through a finite element solver. We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization. The addition of the strain energy field input improves the convergence speed compared to standalone neural network based topology optimization.
## 1 Introduction
There has been a recent increase in machine learning driven topology optimization approaches, particularly using neural networks for performing topology optimization. Both data-driven and online training based approaches have been explored. Data-driven approaches require large training database generation and a long training time. They perform instant optimal topology generation during inference time. Online training approaches use the neural network to represent the density field of a single to a small subset of designs for better parameterization. The online training approaches require similar or more time compared to conventional topology optimization approaches like SIMP (Solid Isotropic Material with Penalisation) [1, 2]. We find that the results of the online training approaches, particularly the convergence speed, can be improved through insights derived from the mechanical aspects of the problem.
Machine learning driven topology optimization approaches offer the advantage of being easily able to accommodate additional insights in the form of pre-computed fields. The usage of these fields has been explored in data-driven approaches such as TopologyGAN [3], which uses physical fields such as von Mises stress and strain energy density to achieve better results. However, there has been no work incorporating these physical fields in the online training topology optimization setting. In this work, we further improve upon TOuNN (Topology Optimization using Neural Networks), an online training approach proposed by Chandrasekhar and Suresh [4], by adding a strain energy field in addition to the domain coordinates as a conditioning input to the neural network. We show that this improves the convergence speed and can give a better compliance. With the additional strain energy field as a conditioning input, the neural network not only learns a mapping function from the domain coordinates to the density field output but also from the strain energy field to the density field output. Ideally, if the conditioning field were the same as the converged topology, then the neural network would only need to learn the identity function. However, the converged topology is not known at the beginning of the optimization. Thus, the strain energy field is used as a good alternative, since it can be computed through a single function call of Finite Element Analysis (FEA)
prior to the online training of the neural network. We verify the performance increase obtained with this additional conditioning input across parametric experiments with varying boundary conditions and volume fractions.
The code for running the experiments in this paper can be found at: [https://github.com/HongRayChen/Hybrid-TopOpt](https://github.com/HongRayChen/Hybrid-TopOpt)
## 2 Related Work
_Conventional topology optimization_: Bendsoe and Kikuchi [5] introduced the homogenization approach for topology optimization. The SIMP method [1, 2] considers the relative material density in each element of the Finite Element (FE) mesh as design variables, allowing for a simpler interpretation and optimised designs with more clearly defined features. Other common approaches to topology optimization include the level-set method [6, 7] and evolutionary algorithms [8].
All these methods use an iterative process to create a complex mapping from problem characteristics (supports, loads and objective function) to an optimised structure, where each iteration involves an expensive FEA calculation. A more accurate and detailed solution can be obtained with a greater number of elements in the FE mesh; however, this increases the computational cost. Therefore, current developments within the field are strongly motivated by the desire to either limit the number of iterations needed to obtain an optimised structure or the computational cost of completing an iteration [9]. Recent advances in deep learning, particularly for image analysis tasks, have shown potential for removing the expensive FEA iterations required until the convergence of the topology in the conventional topology optimization approaches. Hence, various topology optimization approaches that utilize neural networks have been proposed. Woldseth et al. [9] provide an extensive overview on this topic.
_Data-driven topology optimization_: We refer to data-driven topology optimization methods as those that aim to learn a neural network model from a database of topology optimization results for instant prediction of the optimal topology. Many methods rely on Convolutional Neural Networks (CNN) for their capabilities to learn from a large set of image data. Banga et al. [10] used a 3D encoder-decoder CNN to generate 3D topology results and show that interpolating the final output using the 3D CNN from the initial iterations obtained from the 'TopOpt' [11] solver offers a 40\(\%\) reduction in time over the conventional approach of using the solver alone. Yu et al. [12] use a conditional generative adversarial network (cGAN) in addition to a CNN-based encoder-decoder network. However, the results indicate that disconnections may sometimes be present in the predicted topology, which can drastically affect the compliance values. Nakamura and Suzuki [13] improve on these results with their direct design network and a larger dataset; however, disconnections are still observed in some solutions. Behzadi and Ilies [14] used deep transfer learning with CNN. Zheng et al. [15] used a U-net CNN for 3D topology synthesis. Nie et al. [3] used various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a cGAN and achieved a 3 times
Figure 1: The strain energy field is calculated at the beginning of the optimization based on the boundary condition. The strain energy conditioning field is fixed throughout the training. Domain coordinates and the strain energy value at each coordinate point is used as the input to the neural network. The neural network outputs density \(\rho\) at each coordinate point. By sampling coordinate point across the design domain, we obtain the density field. From the density field, we calculate the current volume fraction and the compliance from a FEA solver. The compliance and volume fraction is then formulated as a loss function which is used in back propagation of the training process until convergence.
reduction in mean square error as compared to a baseline cGAN. Maze and Ahmed [16] show that diffusion models can outperform GANs for this task. They use regressor and classifier guidance to ensure that the generated structures are manufacturable and mechanical compliance has been minimized.
All these data-driven approaches aim to reduce optimal topology prediction time but face difficulties in generalization. Though over the years there have been improvements on the generalization capability, suitable training dataset generation is not trivial, especially for the 3D domain, and satisfactory and reliable results have not been achieved yet for direct use in real-world problems.
_Online training topology optimization_: We refer to online training topology optimization methods as those which do not use any prior data, but rather train a neural network in a self-supervised manner for learning the optimal density distribution/topology. Chandrasekhar and Suresh [4] explored an online approach where the density field is parameterized using a neural network. Fourier projection based neural networks for length scale control [17] and applications to multi-material topology optimization [18] have also been explored. Deng and To [19] propose topology optimization with Deep Representation Learning, with a similar concept of re-parametrization, and demonstrate the effectiveness of the proposed method on minimum compliance and stress-constrained problems. Deng and To [20] also propose a neural network based method for level-set topology optimization, where the implicit function of the level-set is described by a fully connected deep neural network. Zehnder et al. [21] effectively leverage neural representations in the context of mesh-free topology optimization and use multilayer perceptrons to parameterize both density and displacement fields, enabling self-supervised learning of continuous solution spaces for topology optimization problems. Mai et al. [22] develop a similar approach for the optimum design of truss structures. Hoyer et al. [23] use CNNs for density parametrization and directly enforce the constraints in each iteration, reducing the loss function to compliance only. They observe that the CNN solutions are qualitatively different from the baselines and often involve simpler and more effective structures. Zhang et al. [24] adopt a similar strategy and show solutions for different optimization problems including stress-constrained problems and compliant mechanism design.
Generalization is not an issue for these online training topology optimization methods. However, the computational time and cost are similar to traditional topology optimization approaches. An advantage offered is that the density representation is independent of the FE mesh, and because of the analytical density-field representation, sharper structural boundaries can be obtained [4]. We show that by adding an initial conditioning field as an extra input, we can improve the convergence speed and get better results.
## 3 Proposed Method
In our proposed method, the density distribution of the geometry is directly represented by the topology neural network. The strain energy field and the compliance used for backpropagation are calculated from an FE solver. The program is implemented in Python, and backpropagation of the loss function into each module is handled by the machine learning package TensorFlow [25].
Figure 3: For the gamma filtering of the conditioning field, we adjust the gamma based on the volume fraction target of the optimization
Figure 2: Two methods are evaluated for processing the strain energy conditioning field: a gamma filter in (a-c) and a log filter in (d)
### Neural network
The topology network \(T(\textbf{X})\) (Figure 1) learns a density field in a different manner compared to typical topology optimization, which represents the density field on a finite element mesh. The topology neural network takes in domain coordinates \(x,y\), as well as the strain energy value \(e\) at coordinates \(x,y\). The strain energy value is concatenated with the domain coordinates to form the input to the topology network, \(\textbf{X}=[x,y,e]\). The domain coordinates are normalized between \(-0.5\) and \(0.5\) for the longest edge. The network outputs the density value \(\rho\) at each coordinate point. The domain coordinates represent the center of each element in the design domain. During topology optimization, a batch of domain coordinates corresponding to the mesh grid, together with the corresponding strain energy field, is fed into the topology network. The output is then sent to the Finite Element Analysis (FEA) solver. The solver outputs the compliance, which is combined with the volume fraction violation as a loss. The loss is then backpropagated to learn the weights of the topology network.
For the topology network design, we employed a simple architecture that resembles the function expression \(f(x)=\textbf{w}\sin(\textbf{k}x+\textbf{b})\). Similar neural network architectures have been used to control the length scale of geometry in topology optimization [17]. The conditioned domain coordinates are multiplied with a kernel **K**, which regulates the frequency of the sine function. We add a constant value of 1 to break the sine function's rotational symmetry around the origin, and we use a Sigmoid function to guarantee the output is between 0 and 1. The topology network can be formulated as follows:
\[T(\textbf{X})=\sigma(\textbf{W}\sin(\textbf{K}\textbf{X}+1)) \tag{1}\]
where:
\(\textbf{X}\): Domain coordinate input, \(\textbf{X}=(x,y,e)\)
\(\sigma\): Sigmoid activation function
**K**: Trainable frequency kernels, initialized in \([-25,25]\)
**W**: Trainable weights, initialized to 0
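A minimal TensorFlow sketch of Eq. (1) follows; the hidden width `n_freq` is an assumed hyperparameter. Note that with **W** initialized to zero, the network initially outputs a uniform density of 0.5 at every sampled point.

```python
import tensorflow as tf

class TopNet(tf.keras.Model):
    """T(X) = sigmoid(W sin(K X + 1)), Eq. (1)."""
    def __init__(self, in_dim=3, n_freq=128):
        super().__init__()
        # frequency kernels initialized uniformly in [-25, 25]
        self.K = tf.Variable(tf.random.uniform((in_dim, n_freq), -25.0, 25.0))
        self.W = tf.Variable(tf.zeros((n_freq, 1)))   # weights initialized to 0
    def call(self, X):
        # X: (n_points, 3) rows of (x, y, e); returns densities in (0, 1)
        return tf.sigmoid(tf.sin(X @ self.K + 1.0) @ self.W)
```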
We can upsample the 3D coordinate input or only sample specific regions of the density field to manipulate the resolution of the discretized visualization. Due to the strain energy conditioning field computed from the finite element mesh grid, interpolation needs to be used to calculate the intermediate values when upsampling the domain coordinates.
### Strain energy conditioning field
The strain energy conditioning field is used to augment the domain coordinate input. We calculate the conditioning field from the initial homogeneous density domain. In topology optimization, for a 2D problem with n elements of four nodes each, the strain energy field **E** can be calculated as follows:
\[\textbf{E}=\sum\left(\textbf{U}_{e}\times\textbf{S}_{e}\right)\circ\textbf{U} _{e} \tag{2}\]
where:
\(\textbf{U}_{e}\): the displacement matrix, \(n\times 8\)
\(\textbf{S}_{e}\): the element stiffness matrix, \(8\times 8\)
The summation is along the axis containing the values for each element.
In most topology optimization implementations, the compliance is then calculated by summation of the above strain energy for all elements.
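Eq. (2) amounts to the per-element quadratic form \(e_{i}=\mathbf{u}_{i}^{T}\mathbf{S}_{e}\mathbf{u}_{i}\), which is a one-line einsum; a sketch with the array shapes stated above:

```python
import numpy as np

def element_strain_energy(Ue, Se):
    """Per-element strain energy, Eq. (2).

    Ue: (n, 8) element displacement vectors, Se: (8, 8) element stiffness.
    Returns the length-n strain energy field; compliance is its sum.
    """
    return np.einsum('ij,jk,ik->i', Ue, Se, Ue)
```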
The strain energy field can vary greatly in range depending on the problem domain size, boundary conditions, and geometry constraints. Therefore, normalization needs to be done to regulate the value range of the strain energy field. Otherwise, the range of the strain energy field will deviate from the normalized range of the domain coordinates. Furthermore, a simple normalization will not suffice, as the large maximum value of the strain energy field reduces the amplitude of other relevant features and patterns (Figure 2 (a)). We explore gamma and logarithmic filtering to normalize the strain energy field. For the gamma filtering, we clip the strain energy field by using the 99th percentile, \(P_{99}\). After clipping, more details of the field **E**\({}_{c}\) can be seen (Figure 2 (b)). We further adjust the features of the strain energy field by using gamma correction. The gamma value is set to be the complement of the target volume fraction \(V^{*}\) for the optimization (\(\gamma=1-V^{*}\)). The effect of the gamma correction based on the volume fraction is illustrated in Figure 3. As the volume fraction increases, the edge feature in the strain energy field becomes more and more pronounced. Finally,
after the gamma correction step, the strain energy field is normalized between 0 and 0.4 to obtain the processed field \(\mathbf{E}_{p}\). The gamma filtering of the strain energy field can be summarized in the following equations:
\[\mathbf{E}_{c}=min(\mathbf{E},P_{99}) \tag{3}\]
\[\mathbf{E}_{\gamma}=0.4\Big{\{}\frac{\mathbf{E}_{c}-min(\mathbf{E}_{c})}{max( \mathbf{E}_{c})-min(\mathbf{E}_{c})}\Big{\}}^{\gamma} \tag{4}\]
For the logarithmic filtering, we do not clip the values; instead, the log filter is directly applied to the strain energy field, and the result is then normalized between 0 and 0.4. We determined this range empirically to give the best results.
\[\mathbf{E}_{\mathrm{log}}=0.4\frac{\log\mathbf{E}-min(\log\mathbf{E})}{max( \log\mathbf{E})-min(\log\mathbf{E})} \tag{5}\]
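Both filters, Eqs. (3)-(5), are a few lines of NumPy. The small `eps` guard in the log filter is an added assumption to handle zero-strain-energy elements and is not part of Eq. (5).

```python
import numpy as np

def gamma_filter(E, vstar):
    """Eqs. (3)-(4): clip at the 99th percentile, then gamma-correct."""
    Ec = np.minimum(E, np.percentile(E, 99))
    Ec = (Ec - Ec.min()) / (Ec.max() - Ec.min())
    return 0.4 * Ec ** (1.0 - vstar)          # gamma = 1 - V*

def log_filter(E, eps=1e-12):
    """Eq. (5): log-compress, then normalize to [0, 0.4]."""
    L = np.log(E + eps)
    return 0.4 * (L - L.min()) / (L.max() - L.min())
```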
### Online topology optimization with neural network
During optimization, the topology network outputs the density value at the center of each element. These density values are then sent to the finite element solver to calculate the compliance based on the SIMP interpolation.
The finite element solver is treated as a black box within the neural network framework. It takes in the density of each element and outputs the compliance and the sensitivity of each element with respect to the compliance. The variables being optimized are the weights **W** and kernels **K** of the neural network. Adam [26] is used to train the neural network. The constrained optimization problem needs to be transformed into an unconstrained minimization problem for the neural network. We adopt the loss function formulated by Chandrasekhar and Suresh [4], combining compliance minimization and a volume fraction constraint. The combined loss function is
\[L=\frac{c}{c_{0}}+\alpha(\frac{\bar{\rho}}{V^{*}}-1)^{2} \tag{6}\]
Figure 4: Comparison of the convergence history for a beam example with and without the strain energy conditioning field. The result presented uses the gamma filtering. FENN-logCF took 22.5 s while FENN took 22.1 s.
In the optimization, the target volume fraction \(V^{*}\) is an equality constraint and \(\bar{\rho}\) is the volume fraction of the current design. As \(\alpha\) increases to infinity, the equality constraint is enforced exactly. We assign a maximum value of 100 for \(\alpha\), with an initial value of 1, and gradually increase \(\alpha\) every iteration. \(c\) is the current compliance and \(c_{0}\) is the initial compliance calculated on the design domain with the uniform volume fraction \(V^{*}\).
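A sketch of the loss in Eq. (6), together with an assumed linear ramp for \(\alpha\) (the text states only that \(\alpha\) starts at 1, increases every iteration, and is capped at 100):

```python
import tensorflow as tf

def topopt_loss(c, c0, rho, vstar, alpha):
    """Eq. (6): normalized compliance plus penalized volume-fraction violation."""
    rho_bar = tf.reduce_mean(rho)             # current volume fraction
    return c / c0 + alpha * (rho_bar / vstar - 1.0) ** 2

def alpha_at(epoch, total_epochs=1000, a0=1.0, a_max=100.0):
    # assumed schedule: linear growth from a0 to a_max over training
    return min(a_max, a0 + (a_max - a0) * epoch / total_epochs)
```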
## 4 Results and Discussions
The possible combinations of boundary conditions, problem sizes, and configurations are enormous, and it is impossible for us to cover them all. To demonstrate the effectiveness of our proposed approach, we explore both a beam problem and a parametric study in 2D. In the beam problem, we showcase the convergence of the network's output and the convergence history. In the parametric study, problems across different boundary conditions and volume fractions are explored. We report the compliance value, where the subscript FENN represents the Finite Element (FE) compliance solver with a Neural Network (NN) as the topology representation, and FENNCF represents neural topology optimization with a strain energy Conditioning Field (CF). For these two experiments, the problem size is 40\(\times\)20 pixels. All experiments are run on a PC with an i7-12700K processor, 32 GB of RAM, and an Nvidia RTX3080 GPU.
### Beam example
Our first experiment is the beam example. The left side of the domain is fixed, and a downward point load is applied at the center of the right side. The boundary condition illustration and the strain energy conditioning field are shown in Figure 4. The target volume fraction is 0.3. We run the online topology optimization for a total of 1000 epochs.
The convergence history plot is illustrated in Figure 4 (c). We observe that by epoch 50, the network with the strain energy conditioning field takes the lead and maintains lower compliance all the way to the end of the training epochs. We also show density field snapshots and the corresponding compliance during training in Figure 4 (b). Comparing geometries, the result with the conditioning field differs subtly from the one without: it bears greater similarity to the strain energy field, with shorter top and bottom edges. We can also observe that most of the geometry convergence happens between epochs 0 and 400. Between epochs 400 and 1000, the geometry remains relatively unchanged; the only change is a darker tone of red, showing that the density values are pushed closer to 1. In both examples, the final volume fraction is within \(1\%\) error of the given target volume fraction, so we do not include the volume fraction convergence plot.
### Parametric study
We set up a parametric study to analyze the effectiveness of the gamma and log filters for the conditioning field. The boundary condition setup is illustrated in Figure 5 (a). The bottom-right loading point is varied across the region highlighted in green, which accounts for 50 load conditions. We also vary the target volume fraction between 0.2 and 0.5 with an increment of 0.1. In total, this yields 200 combinations. In the beam example above, we observed that geometries do not change significantly after 400 epochs; therefore, we limit the parametric study to 400 epochs.
The parametric study result is summarized in Figure 6. In Figure 6 (a), we sort the examples by the compliance of topology optimization without the conditioning field and show the compliance of both methods. We observe that overall, the
Figure 5: Boundary conditions and some sample topology optimization results with 0.3 volume fraction within the parametric study examples
conditioning field converges to lower compliance. The improvement from the conditioning field is more significant when the compliance is higher, which occurs at low volume fractions. To visualize the increase in convergence speed, Figure 6 shows the percentage improvement with the conditioning field. The percentage improvement is calculated by identifying the epoch at which the run with the conditioning field reaches a compliance lower than the final compliance of the optimization without the conditioning field. The average performance increase is \(37.6\%\) with the gamma filter and \(44.7\%\) with the log filter. With both filters, the performance increase is more pronounced for lower volume fractions. The log filter shows a better overall performance increase across all solutions than the gamma filter.
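Our reading of this metric can be sketched as follows; `hist_cf` and `hist_base` are hypothetical per-epoch compliance histories of the runs with and without the conditioning field.

```
import numpy as np

def epoch_improvement(hist_cf, hist_base):
    # First epoch at which the CF run beats the baseline's *final* compliance,
    # expressed as the percentage of training epochs saved.
    target = hist_base[-1]
    hit = np.flatnonzero(np.asarray(hist_cf) <= target)
    if hit.size == 0:
        return 0.0  # CF never reaches the baseline's final compliance
    return 100.0 * (len(hist_base) - 1 - hit[0]) / (len(hist_base) - 1)
```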
We compare our result against the result of "88-lines" by Andreassen et al. [27] with a filtering radius of 1.5 to accommodate the problem size. We observe that when the compliance is low, FENN performs slightly better than SIMP, which is consistent with the result reported by Chandrasekhar and Suresh [4]. For problems with relatively higher compliance, we observe that FENN with the conditioning field can in some cases converge to a lower compliance than "88-lines". We note that in general, the Matlab code [27] takes around 0.2 to 1.5 s to run, whereas FENN, with or without a conditioning field, takes around 10 s. However, a definitive time comparison is difficult to establish, as "88-lines" runs on Matlab whereas FENN runs on Python. In "88-lines" the optimizer is the optimality criteria method, whereas FENN relies on Adam with a learning rate of 0.002.
We also observe that among the 200 examples with the gamma filter, there are four cases where the conditioning field does not improve convergence speed. Plotting example results in Figure 5, the examples with the load on the bottom-right edge show a smaller performance increase with the conditioning field, whereas the examples with the load close to the center show a greater performance increase and a bigger gap in compliance. Our hypothesis is that the conditioning field approach performs best when the topology is complex. Complexity in geometry can arise from the volume fraction constraint or the configuration of the boundary conditions: as the volume fraction decreases, thinner members are required, which increases the complexity of the structure, and the geometries in Figure 5 (b) show that for the same volume fraction, the length scale of the part also depends on the boundary conditions.
### Additional examples
In Figure 7, we demonstrate the improvements resulting from the conditioning field on 4 complex boundary conditions in 2D. Cases 2, 3 and 4 in Figure 7 have obstacle regions (passive elements). Furthermore, in Figure 8, we analyze the impact of increasing the problem resolution (i.e. the FE mesh size) for the boundary conditions of case 1 in Figure 7, and observe similar improvements. We also show the improvements seen for a 3D problem in Figure 9.
Figure 6: Comparing the final compliance and the speed of convergence for the parametric study examples with the gamma and log filters and without the conditioning field. We also run the same problem configurations with "88-lines" by Andreassen et al. [27], denoted by the legend "SIMP" in the figure.
Figure 7: Four additional test cases across varying boundary conditions and passive elements, all using a 0.2 target volume fraction. Each example is 60\(\times\)60 in resolution and takes around 30 seconds to run, with no significant runtime difference with or without the conditioning field. The log-filtered conditioning field demonstrates a good increase in convergence speed.
## 5 Limitations and Future Work
We exploit the ability of neural networks as universal function approximators to learn the additional mapping from the strain energy conditioning field to the density field output. Currently, the improvement with the conditioning field is not stable across all possible boundary condition configurations; more tuning and testing is required. Another limitation is that the conditioning field remains fixed during optimization, since the neural network cannot encode temporal features. The strain energy field changes throughout the optimization, and without the ability to capture this temporal evolution, the neural network has difficulty providing stable optimization results when conditioned on a changing field.
This work also demonstrates promising results using a conditioning field for online neural topology optimization. The strain energy field may not be the best possible conditioning field, and future work may explore different combinations of conditioning fields, similar to TopologyGAN [3]. The conditioning field approach may also show great synergy with existing data-driven approaches: using the output of data-driven topology optimization as the conditioning field, online optimization can exploit a conditioning field that is much closer to the final solution. This reduces the complexity of the mapping function that the neural network needs to learn. Since most data-driven approaches lack a guarantee of compliance minimization, online optimization can serve as a final post-processing step to connect disconnected edges and truly minimize the compliance.
Figure 8: We run the same boundary condition as Case 1 at two and three times the resolution. The runtime for 120\(\times\)120 is 3 min, and for 180\(\times\)180 it is 20 min.
Figure 9: Comparing the results for a 3D cantilever beam example. All examples are run for 200 epochs. b) Top3d [28] (a standard 3D topology optimization code using SIMP). c) Using a neural network for density parametrization. d) Using a neural network for density parametrization with an additional initial strain energy input with log filtering. We observe that FENN and FENN-logCF create a shell around both sides, which gives the illusion that the volume fraction is higher. However, the volume fraction is very close to the target of 0.3 (both converged to 0.3003).
In this work, we also compare our result against SIMP using "88-lines" [27]. However, it may not be possible to determine which one is definitively better or worse, as each program is tuned for a different platform and the space of possible problem configurations is endless; covering all of them to reach a conclusion may not be feasible. There are exciting possibilities for neural network-based topology optimization: for example, since the design density field is represented by a continuous function, one can upsample the result arbitrarily to obtain very crisp boundaries [4]. We can also use the same neural network architecture with physics-informed neural networks to conduct mesh-free topology optimization without an FE solver [29], to name a few.
## 6 Conclusions
We have proposed a novel approach for improving neural network based topology optimization using a conditioning field. Our method involves a topology neural network that is trained on a case-by-case basis to represent the geometry for a single topology optimization problem. By incorporating the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network, we have demonstrated that faster convergence can be achieved. Our results suggest that the efficacy of neural network based topology optimization can be further improved using a prior initial field on the unoptimized domain. We believe that our proposed conditioning field initialization approach could have broad applications in the field of topology optimization, particularly for problems that involve complex geometries.
|
2304.10749 | Multi-scale Evolutionary Neural Architecture Search for Deep Spiking
Neural Networks | Spiking Neural Networks (SNNs) have received considerable attention not only
for their superiority in energy efficiency with discrete signal processing but
also for their natural suitability to integrate multi-scale biological
plasticity. However, most SNNs directly adopt the structure of the
well-established Deep Neural Networks (DNNs), and Neural Architecture Search (NAS) is rarely used to automatically design SNNs. The neural motif topology, modular
regional structure and global cross-brain region connection of the human brain
are the product of natural evolution and can serve as a perfect reference for
designing brain-inspired SNN architecture. In this paper, we propose a
Multi-Scale Evolutionary Neural Architecture Search (MSE-NAS) for SNN,
simultaneously considering micro-, meso- and macro-scale brain topologies as
the evolutionary search space. MSE-NAS evolves individual neuron operation,
self-organized integration of multiple circuit motifs, and global connectivity
across motifs through a brain-inspired indirect evaluation function,
Representational Dissimilarity Matrices (RDMs). This training-free fitness
function could greatly reduce computational consumption and NAS's time, and its
task-independent property enables the searched SNNs to exhibit excellent
transferability on multiple datasets. Furthermore, MSE-NAS shows robustness
against the training method and noise. Extensive experiments demonstrate that
the proposed algorithm achieves state-of-the-art (SOTA) performance with
shorter simulation steps on static datasets (CIFAR10, CIFAR100) and
neuromorphic datasets (CIFAR10-DVS and DVS128-Gesture). The thorough analysis
also illustrates the significant performance improvement and consistent
bio-interpretability deriving from the topological evolution at different
scales and the RDMs fitness function. | Wenxuan Pan, Feifei Zhao, Guobin Shen, Yi Zeng | 2023-04-21T05:36:37Z | http://arxiv.org/abs/2304.10749v5 | # Emergence of Brain-inspired Small-world Spiking Neural Network through Neuroevolution
###### Abstract
The human brain is the product of hundreds of millions of years of evolution and can engage in multiple advanced cognitive functions with low energy consumption. Brain-inspired artificial intelligence serves as a computational continuation of this natural evolutionary process, and it is imperative to take inspiration from the evolutionary mechanisms of brain structure and function. Studies suggest that the human brain's high efficiency and low energy consumption may be closely related to its small-world topology and critical dynamics. However, existing efforts on the performance-oriented structural evolution of spiking neural networks (SNNs) are time-consuming and ignore the core structural properties of the brain. In this paper, we propose a multi-objective Evolutionary Liquid State Machine (ELSM) that takes the combination of the small-world coefficient and criticality as evolution goals, simultaneously integrating the topological properties of spiking neural networks from static and dynamic perspectives to guide the emergence of brain-inspired efficient structures. Extensive experiments show consistent and competitive performance of the proposed model compared to LSM-based and hierarchical SNN algorithms: it achieves 97.23% on NMNIST, and reaches state-of-the-art performance compared to all LSM models on MNIST and Fashion-MNIST (98.05% and 88.81%, respectively). A thorough analysis reveals the spontaneous emergence of hub nodes, short paths, long-tailed degree distributions, and numerous community structures in the evolved models. This work evolves recurrent spiking neural networks into brain-inspired efficient structures and dynamics, providing the potential to achieve adaptive general artificial intelligence.
keywords: Spiking Neural Networks, Neuroevolution, Small-world Topologies, Critical Dynamics, Liquid State Machines
## 1 Introduction
How can the human brain perform many complex advanced cognitive functions while running on less power than a light bulb? Its mysterious wiring rules and firing patterns have attracted much research interest. There is a degree of commonality in brain anatomy across the human species: different regions are often thought to be responsible for specific cognitive functions [1]. It is worth mentioning that the densely connected community structure and hub nodes existing in these specific regions help efficient information processing and integration in the brain [2; 3].
From a static topology perspective, researchers have demonstrated that the mammalian cortical (including the human brain) is a complex network whose topological properties are neither random nor regular, but somewhere in between [4; 5; 6; 7], with small-world properties of dense local clustering and short path length [4; 8; 9]. From a network dynamics perspective, when dealing with complex and changeable environments, the human brain can exhibit powerful and flexible adaptive processing capabilities and achieve a delicate balance between efficiency and robustness due to its well-evolved internal wiring rules. In this case, biological neural network dynamics achieve optimal computational and processing capabilities near a certain point called a critical state where networks oscillate between order and disorder, synchronous and asynchronous [10; 11; 12],
and can reach an optimal level of information transmission [13].
Efficient transmission topology and optimal dynamics enable the human brain to exhibit powerful low-energy, high-efficiency information processing capabilities. The formation of such structures is not artificially designed but evolved naturally. Existing human-crafted network structures may help improve performance, but they find it difficult to escape from their inherent design paradigm [14]. To enable models to find optimal network architectures adaptively, the field of Neural Architecture Search (NAS) has emerged [15; 16; 17; 18; 19; 20]. Most work on NAS follows the wave of deep learning and searches deep network structures [21; 22; 23; 24; 25]; however, to the best of our knowledge, no NAS algorithm takes into account the biologically economical small-world topology and criticality of the brain.
The spiking neural network (SNN), as the third-generation neural network, not only simulates the discrete communication of biological neurons but can also incorporate multiple biological plasticity learning rules, making it more consistent with the information processing mechanisms of the biological brain [26]. In this paper, we employ a large-scale, recurrently connected SNN called the Liquid State Machine (LSM) [27], a kind of reservoir network, owing to its complex liquid structure, low training cost, strength in processing spatiotemporal information, and suitability for studying brain-inspired connectivity architectures [27; 28; 29; 30; 31; 32]. A standard LSM consists of three parts: the input information is processed by a liquid layer containing randomly fixed connections, and then abstracted by the readout neurons into the final output. The only SNN-based NAS works search for neuron operations [33; 34] and cross-layer connections [34], while the evolutionary LSM works evolve reservoir parameters, including structural parameters such as liquid density, excitatory neuron ratio, and number of liquid neurons [35; 36]. [37] changes the structure of the LSM by dividing one large liquid into multiple smaller liquids. These studies lack in-depth inspiration from the unique topological characteristics of the brain, which limits their learning efficiency and performance.
Inspired by the small-world properties and the critical state of biological
nervous systems, this paper proposes an evolutionary Liquid State Machine (ELSM) from which brain-inspired small-world architecture and dynamic firing patterns emerge. **Structurally**, the evolved liquid layer presents small-world network characteristics, combining dense local clustering with short path lengths. **Dynamically**, ELSM enables the network to operate near the critical state, which is significantly more efficient and biologically plausible. The proposed multi-objective evolutionary algorithm not only brings about the small-world structure and critical dynamics, but also naturally achieves higher performance and efficiency.
The main highlights of this paper can be summarized in the following three points:
* We evolve the structure of a recurrent spiking neural network to exhibit biologically plausible small-world topological properties (densely locally connected hub nodes, a large number of communities, and a long-tailed degree distribution) as well as a dynamically critical steady state. The brain-inspired evolutionary goals simultaneously bring about an improvement in classification accuracy.
* The proposed multi-objective evolutionary algorithm considers small-world coefficients (including shortest path length and clustering coefficients) and criticality as fitness functions, guiding the emergence of brain-inspired efficient structures.
* Our model achieves classification accuracies of 98.05%, 97.23% and 88.81% on MNIST, NMNIST and Fashion-MNIST respectively, which is comparable to deep SNNs. Experimental results demonstrate that adaptively evolved LSMs improve performance with biologically plausible structures and firing patterns at lower complexity. The degree distribution of the evolved network nodes exhibits the characteristics of a long-tailed distribution, similar to that found in biological brains.
## 2 Results
### The Architecture of Reservoir-based SNN
The architecture of the reservoir-based SNN (LSM) is shown in Fig. 1. The standard LSM model is divided into three layers: an input layer, a liquid layer formed of thousands of sparsely connected neurons, and a readout layer. All neurons accumulate potentials according to the rule shown in Eq. 2 and transmit information through spikes. The weights between the readout layer and the liquid layer are optimized by the backpropagation algorithm [38], while the weights in the liquid layer are randomly fixed.
In this paper, we use the leaky integrate-and-fire (LIF) neuron as the basic unit of signal transmission, and the formula for its membrane potential update over time is:
\[\delta=\frac{I(t)-V_{\mathrm{m}}(t)}{\tau} \tag{1}\]
\[V_{\mathrm{m}}(t+1)=\left(V_{\mathrm{m}}(t)+\delta\right)\left(1-S(t)\right)+ V_{r}S(t) \tag{2}\]
Figure 1: The architecture of LSM. In the traditional definition of a reservoir, randomly connected spiking neurons receive time-varying signals from external inputs and other nodes simultaneously. The recursive connectivity enables input signals to be converted to liquid layer dynamics, which are then abstracted by the readout layer.
\[S(t)=\left\{\begin{array}{l}1,V_{\text{m}}(t)\geq V_{th}\\ 0,V_{\text{m}}(t)<V_{th}\end{array}\right. \tag{3}\]
\(V_{m}(t+1)\) and \(V_{m}(t)\) are the membrane potentials at times \(t+1\) and \(t\), respectively. As shown in Eq. 1, \(\delta\) is determined by the membrane potential \(V_{m}(t)\), the magnitude of the current \(I(t)\), and the membrane potential time constant \(\tau\). When the membrane potential reaches the threshold \(V_{th}\), it is reset to \(V_{r}\) at the same time as the spike is delivered (indicated by \(S(t)\) as in Eq. 3). Given the membrane potential \(V_{m}(t)\) and \(S(t)\) at time \(t\), the update rule for the membrane potential at the next time step is given by Eq. 2. The LIF neuron model and learning rules of the proposed evolutionary LSM are based on the BrainCog framework [39].
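A minimal NumPy sketch of one simulation step according to Eqs. 1-3 is given below (the proposed model itself is built on BrainCog [39]); the parameter values are placeholders.

```
import numpy as np

def lif_step(V, I, tau=2.0, V_th=1.0, V_r=0.0):
    S = (V >= V_th).astype(float)             # Eq. 3: spike where threshold is reached
    delta = (I - V) / tau                     # Eq. 1: leaky integration increment
    V_next = (V + delta) * (1 - S) + V_r * S  # Eq. 2: integrate, or reset where spiking
    return V_next, S
```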
```
Input:  Population P(0) = {C_1, C_2, ..., C_{N_c}}
Output: Evolved individual C_opt
for g = 0 to G_th do
    if g = G_th - 1 then
        Accuracy = Train(P(g), 100)
        C_opt = Max(P(g), Accuracy)
        return C_opt
    end if
    obj[g, 0] = SmallWorld(P(g))
    obj[g, 1] = Criticality(P(g), data)
    Q(g) = CrossoverAndMutate(Select(P(g), obj, N_off))
    P(g+1) = Merge(P(g), Q(g))
    P(g+1) = Select(P(g+1), obj, N_c)
end for
```
**Algorithm 1** The neuroevolution process of ELSM.
### Neuroevolution Algorithm
Liquid layer connectivity formed by random initialization in the reservoir is evolved so that more brain-inspired structures and dynamics emerge. The whole neuroevolution process is presented in Algorithm 1 and Fig. 2.
#### 2.2.1 Fitness Function
**Identifying small-world topologies.** According to [40], small-world networks exhibit two properties: high clustering and short path lengths. Local short paths between most nodes, together with hubs, induce highly connected sub-networks with a few long-distance connections, enabling efficient information transmission in the brain. We follow the quantification method of [41], known as the small-world coefficient, which is calculated as
Figure 2: **The neuroevolution process of ELSM.** **a.** Schematic diagram of initialization. After determining the number of neurons in the liquid layer \(N\) and the initial liquid density \(\rho_{init}\), phenotypes (LSM liquid layers) are converted to genotypes (connection patterns \(C\) of size \(N*N\)). **b.** The process of crossover, mutation and selection. Initialized individuals make up \(P(g)\) in generation \(g\), and \(Q(g)\) is obtained by performing k-point crossover and bit-flip mutation operations on the mating pool generated from \(P(g)\). \(Q(g)\) and \(P(g)\) are merged, and the elitism method and non-dominated sorting strategy of NSGA-II are applied to select the next-generation population \(P(g+1)\), until the iteration count reaches \(G_{th}\) and the evolution ends. **c.** Brain-inspired evolution objective functions. Statically, the small-world coefficient of \(C\) is evolved. Dynamically, a small amount of training data is input to obtain the firing patterns of the liquid layer within a time period \(T\), and the branching ratio \(\mu(t)\) is calculated as the quantification of criticality.
follows:
\[\lambda=\frac{H}{L} \tag{4}\]
The clustering coefficient and the short path length between nodes are measured by \(H\) and \(L\), respectively. The clustering computation for a single node is given in Eq. 5, where \(o_{i}\) is the degree of neuron \(i\). Calling two edges that both pass through neuron \(i\) a pair, \(e_{i}\) is the number of such pairs whose endpoints are themselves connected by an edge. The total clustering coefficient is the average over all neurons (assuming there are \(N\) neurons in the liquid layer).
\[h_{i} =\frac{2e_{i}}{o_{i}\left(o_{i}-1\right)} \tag{5}\] \[H =\frac{\sum_{i}^{N}h_{i}}{N} \tag{6}\]
The shortest path length of the network is calculated as in Eq. 7, where \(d_{st}\) represents the shortest path length between neuron \(s\) and neuron \(t\) (\(d_{st}=0\) if no path exists). \(V\) is the set of liquid neurons.
\[L=\sum_{s,t\in V}\frac{d_{st}}{N(N-1)} \tag{7}\]
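Both quantities can be computed directly from the binary connection matrix. The sketch below treats connections as undirected (an assumption, since the paper does not state how directionality is handled) and counts unreachable pairs as 0, per Eq. 7.

```
import numpy as np

def small_world_coefficient(C):
    A = ((C + C.T) > 0).astype(int)   # undirected view of the connectivity
    N = A.shape[0]
    deg = A.sum(1)
    # H: mean local clustering (Eqs. 5-6); triangles through i = diag(A^3)/2
    tri = np.diag(A @ A @ A) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.where(deg > 1, 2.0 * tri / (deg * (deg - 1)), 0.0)
    H = h.mean()
    # L: mean shortest path length (Eq. 7) via BFS from every node
    L_sum = 0.0
    for s in range(N):
        dist = -np.ones(N, dtype=int)
        dist[s] = 0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in np.nonzero(A[u])[0]:
                    if dist[v] < 0:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        L_sum += dist[dist > 0].sum()   # unreachable pairs contribute 0
    L = L_sum / (N * (N - 1))
    return H / L                        # Eq. 4
```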
**Identifying criticality.** A commonly used concept to measure the critical state of the nervous system is the branching ratio, derived from branching process theory [42], reflecting the spatiotemporal cascade activity of the cerebral cortex in homeostasis. The local branching ratio \(\mu_{i}(t)\) describes the tendency of a neuron \(i\) to become more or less active as spikes are transmitted within the liquid layer, and is defined at time \(t\) as follows [43].
\[\mu_{i}(t)=\frac{\sum_{j}^{N}\sum_{l=t+\phi+1}^{t+\phi+\Delta}m_{j}(l)c_{ij}}{ \sum_{j}^{N}\sum_{l=t-\phi-\Delta}^{t-\phi-1}m_{j}(l)c_{ji}} \tag{8}\]
\[\mu(t)=\frac{\sum_{i}^{N}m_{i}(t)\mu_{i}(t)}{\sum_{i}^{N}m_{i}(t)} \tag{9}\]
\[\mu=\frac{\sum_{t}^{T}\mu(t)}{T} \tag{10}\]
Here \(t=1,2,3,...,T\). \(c_{ij}\) indicates whether there is a synapse connecting presynaptic neuron \(i\) and postsynaptic neuron \(j\). \(m_{i}(t)\) represents the firing state of neuron \(i\) at time \(t\) (\(m_{i}(t)\) takes only the values 0 or 1, representing not firing or firing, respectively). Eq. 10 is therefore the ratio of the sum of postsynaptic neuron spikes to the sum of presynaptic neuron spikes over the simulated time \(T\). Studies have shown that the closer the value of \(\mu\) is to 1, the closer the dynamics of the network are to the critical state [43; 44; 45]. Therefore, the quantified criticality is calculated as:
\[\mu=|\mu-1| \tag{11}\]
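The computation can be sketched as follows; `phi` and `delta` are the lag and window length of Eq. 8 (their values here are placeholders), and the per-time-step weighting of Eq. 9 is simplified to a mean over active neurons with a nonzero denominator.

```
import numpy as np

def branching_ratio(spikes, C, phi=0, delta=1):
    # spikes: (T, N) binary array m_j(t); C: (N, N) binary connectivity c_ij
    T, N = spikes.shape
    mus = []
    for t in range(phi + delta, T - phi - delta):
        post = spikes[t + phi + 1 : t + phi + delta + 1].sum(0)  # future window, Eq. 8
        pre = spikes[t - phi - delta : t - phi].sum(0)           # past window, Eq. 8
        num = C @ post              # numerator of Eq. 8 for every neuron i
        den = C.T @ pre             # denominator of Eq. 8 for every neuron i
        ok = (spikes[t] > 0) & (den > 0)   # active neurons, Eq. 9 weighting
        if ok.any():
            mus.append((num[ok] / den[ok]).mean())
    mu = float(np.mean(mus)) if mus else 0.0
    return mu, abs(mu - 1.0)        # Eq. 10 average and Eq. 11 criticality
```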
#### 2.2.2 Initialization
In a population of \(N_{c}\) chromosomes to be initialized, each chromosome represents the liquid connection pattern \(C\) of a reservoir with \(N\) liquid neurons, as shown in Fig. 2a. We use a binary encoding: each chromosome locus \(c_{ij}\) has two alleles, 0 or 1, indicating whether there is a synapse connecting presynaptic neuron \(i\) and postsynaptic neuron \(j\) (\(0<i,j<N\)). At the beginning of neuroevolution, each chromosome's connectivity is constrained to be sparse, with only \(N*N*\rho_{init}\) synaptic connections (the initial liquid density is denoted \(\rho_{init}\)).
We first generate a random matrix \(R\), whose elements \(r_{ij}\) all take values between 0 and 1. The Boolean matrix \(C\) is obtained by thresholding \(R\) at \(\rho_{init}\), as in Eq. 12:
\[c_{ij}=\begin{cases}1,&\text{ if }r_{ij}\leq\rho_{init}\\ 0,&\text{ if }r_{ij}>\rho_{init}\end{cases} \tag{12}\]
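Eq. 12 translates directly into NumPy; note that thresholding gives the target density \(\rho_{init}\) in expectation rather than an exact count of \(N*N*\rho_{init}\) connections.

```
import numpy as np

def init_population(N_c, N, rho_init, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.random((N_c, N, N))               # r_ij uniform in [0, 1)
    return (R <= rho_init).astype(np.uint8)   # Eq. 12, density rho_init in expectation
```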
#### 2.2.3 Evaluation
Since there is more than one fitness function, we fully consider both the physical topology and the dynamics of the network when selecting excellent chromosomes as parents of the next generation. According to the fitness functions proposed in Section 2.2.1, the larger the small-world coefficient \(\lambda\), the better the model structure reflects small-world characteristics, and the smaller \(\mu\) is, the closer the model dynamics are to the critical state. Therefore, this multi-objective optimization problem (MOP), based on Eq. 4 and Eq. 10, can be described as \(\mathbf{F}:\mathbf{\Omega}\rightarrow\mathbb{R}^{2}\):
\[\underset{C\in\Omega}{\mathrm{argmin}}\mathbf{F}(C)=\{f_{1}(C),f_{2}(C)\} \tag{13}\]
\[s.t.\rho_{1}\leq\rho(C)\leq\rho_{2}\]
\(\rho(C)\) is the density of the liquid layer, defined as the ratio of the number of liquid layer connections to \(N*N\). In order to keep the liquid density \(\rho(C)\) stable during the evolution process, its range is limited to between \(\rho_{1}\) and \(\rho_{2}\). The evolution content of the two objectives is shown in Fig. 2c. The first optimization goal \(f_{1}(C)\) is to maximize the small-world coefficient:
\[f_{1}(C)=\min(-\lambda(C)) \tag{14}\]
where \(\lambda\) is calculated as in Eq. 4, measuring the static topological properties of the LSM.
The second optimization objective \(f_{2}(C)\) is to minimize the criticality coefficient, which is formulated by:
\[f_{2}(C)=\min\mu(C) \tag{15}\]
where \(\mu\) is calculated as in Eq. 11, measuring the criticality of the LSM dynamics.
#### 2.2.4 Selection
The elitism approach and non-dominated sorting strategy of the NSGA-II algorithm [46] are used here to generate mating pools of size \(N_{offs}\) and the next-generation individuals.
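A simplified stand-in for this selection step is sketched below: it ranks individuals by Pareto non-domination of the two (minimized) objectives and keeps the best \(k\), omitting NSGA-II's crowding-distance tie-breaking.

```
import numpy as np

def nondominated_select(objs, k):
    # objs: (n, 2) array of objective values, both to be minimized
    objs = np.asarray(objs, dtype=float)
    n = len(objs)
    ranks = np.zeros(n, dtype=int)
    remaining = set(range(n))
    r = 0
    while remaining:
        # an individual is in the current front if no other remaining one dominates it
        front = [i for i in remaining
                 if not any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = r
        remaining -= set(front)
        r += 1
    return np.argsort(ranks, kind="stable")[:k]   # indices of the k best-ranked
```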
#### 2.2.5 Crossover and Mutation
**Crossover.** Assuming that chromosome \(C_{1}\) is crossed with \(C_{2}\) to generate \(C_{3}\) and \(C_{4}\), we apply the k-point crossover operator [47] and select \(k\) genes as crossover points:
\[c_{a_{1}b_{1}},c_{a_{2}b_{2}},...,c_{a_{k}b_{k}} \tag{16}\]
where \(0<a_{k},b_{k}<N\). According to the selected crossover points, each chromosome can be divided into \(k+1\) segments:
\[\frac{(c_{0,0},c_{a_{1}b_{1}})}{S_{1}},\frac{(c_{a_{1}b_{1}+1},c_{a_{2}b_{2}}) }{S_{2}},...,\frac{(c_{a_{k}b_{k}+1},c_{NN})}{S_{k+1}} \tag{17}\]
We construct matrices \(E\) and \(D\), where the elements of \(E\) in segments \(S_{1}\), \(S_{3}\),... and the elements of \(D\) in segments \(S_{2}\), \(S_{4}\),... are set to 1; the remaining elements of \(E\) and \(D\) are set to 0. Therefore:
\[C_{3} =C_{1}*E+C_{2}*D \tag{18}\] \[C_{4} =C_{1}*D+C_{2}*E \tag{19}\]
**Mutation.** Assuming that \(C_{1}\) is mutated into \(C_{5}\), we perform bit-flip mutation [48] on \(n_{m}\) genes of \(C_{1}\). Let the mutation probability be \(m_{rate}\); a random number \(m_{rand}\) is generated for each offspring after crossover, and if \(m_{rand}<m_{rate}\), the mutation is accepted. Each mutation selects a chromosome locus \(c_{ij}\) in \(C_{1}\) and inverts it:
\[c_{ij}=\neg c_{ij} \tag{20}\]
The parents \(P(g)\) and the population \(Q(g)\) formed by crossover and mutation are merged, and the selection operator is applied to generate the next generation \(P(g+1)\). The whole crossover, mutation and selection process is shown in Fig. 2b.
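The two operators can be sketched as follows, operating on flattened connection matrices; drawing the segment boundaries uniformly at random is our assumption, as the paper does not specify how the crossover points are chosen.

```
import numpy as np

rng = np.random.default_rng(0)

def k_point_crossover(C1, C2, k):
    flat1, flat2 = C1.ravel(), C2.ravel()
    pts = np.sort(rng.choice(np.arange(1, flat1.size), size=k, replace=False))
    mask = np.zeros(flat1.size, dtype=bool)
    for s, seg in enumerate(np.split(np.arange(flat1.size), pts)):
        if s % 2 == 0:
            mask[seg] = True      # E covers segments S1, S3, ...; D covers the rest
    C3 = np.where(mask, flat1, flat2).reshape(C1.shape)  # Eq. 18
    C4 = np.where(mask, flat2, flat1).reshape(C1.shape)  # Eq. 19
    return C3, C4

def bitflip_mutation(C, n_m, m_rate):
    C = C.copy()
    if rng.random() < m_rate:     # accept the mutation with probability m_rate
        idx = rng.choice(C.size, size=n_m, replace=False)
        C.ravel()[idx] ^= 1       # Eq. 20: flip selected loci (integer dtype assumed)
    return C
```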
#### 2.2.6 Next Generation
The above neuroevolution process of evaluation, selection, crossover, and mutation is repeated for \(G_{th}\) generations as shown in Fig. 2b. Finally, all individuals in the last generation are trained for 100 epochs, and the individual with the highest classification accuracy \(C_{opt}\) is selected as the result of evolution.
### Datasets and Parameter Settings
Our model is validated on MNIST [49], NMNIST [50] and Fashion-MNIST [51] datasets to prove the effectiveness of the algorithm.
**MNIST.** The handwritten dataset MNIST is one of the classic machine learning datasets, which contains 70,000 grayscale images of handwritten digits 0-9, of which 60,000 examples are used for training, and the others are used for testing. The size of each image is \(28*28\) pixels.
**Fashion-MNIST.** The Fashion-MNIST dataset consists of 60,000 training samples and 10,000 testing samples, with a total of ten categories. Each sample is a \(28*28\) grayscale image.
**NMNIST.** NMNIST is a neuromorphic version of MNIST, obtained by recording MNIST images with an actuated pan-tilt camera platform. After 300 ms of signal acquisition, 60,000 training images and 10,000 test images are generated, each of size \(34*34*2\) (2 is the number of channels; 34 is the image size after the saccade offsets). A preprocessing ensemble method [52] is adopted to convert the event stream into a frame stream, which is then fed into the model for classification. The two channels are combined into one by summing in our experiments.
All parameter settings of the evolutionary algorithm in the experiments are shown in Table 1. All network weights, including the weights from the input to the liquid layer, the weights from the liquid layer to the output, and the weights inside the liquid layer (the weight values rather than the connection pattern, which is obtained by neuroevolution), are initialized randomly. The batch size for all datasets is set to 100. The Adam optimizer [53] is adopted to optimize the weights of the readout layer, with a learning-rate decay of 0.0001 every 50 epochs.
To demonstrate the effectiveness of neuroevolution, we established four comparison models for the ablation studies of the fitness functions on MNIST, NMNIST and Fashion-MNIST: 1) a baseline LSM model with randomly generated liquid layers (denoted RLSM), 2) an LSM model with small-world topology evolved for 1000 generations (ESLSM), 3) an LSM model with criticality evolved for 1000 generations (ECLSM), and 4) a multi-objective LSM model evolved for 1000 generations (the proposed model, ELSM). The evolved \(C_{opt}\) is used as the liquid layer connection pattern and trained for 5000 epochs; multiple evolutions with different random seeds are performed to obtain the results in Table 3 and Fig. 3. Table 2 shows the best result among the multiple evolutions.
During training, except for the liquid layer (the best individual), all other settings of the comparison models are the same, including the weights of the liquid layer, the connections between the input and the liquid layer, and the connections between the liquid layer and the readout layer, to ensure the fairness of the comparison experiments.
ELSM outperforms NAS-LSM [35] on NMNIST by 4.73%, as shown in Table 2. On MNIST, ELSM surpasses the other evolved-architecture LSMs, LSM-SHADE [36] and Multi-liquid LSM [37], by 3.55% and 2.55% respectively. Overall, ELSM exhibits better performance than all other LSM models (except on NMNIST) and results comparable to deep SNNs on all datasets.
| Dataset | Model | Structure | Layers | Accuracy (%) |
| :--- | :--- | :--- | :---: | :---: |
| MNIST | Unsupervised-SNN [54] | Hierarchical SNN | 2 | 95 |
| | LIF-BA [55] | Hierarchical SNN | 3 | 97.09 |
| | Temporal SNN [56] | Hierarchical SNN | 2 | 97.2 |
| | STIDi-BP [57] | Hierarchical SNN | 2 | 97.4 |
| | SN [58] | Hierarchical SNN | 3 | 97.93 |
| | BPSNN [59] | Hierarchical SNN | 3 | 98.88 |
| | CMA-ES-LSM [60] | LSM | 2 | 92.6 |
| | LSM-SHADE [36] | LSM | 2 | 94.5 |
| | Multi-liquid LSM [37] | LSM | 2 | 95.5 |
| | NALSM [45] | LSM | 2 | 97.61 |
| | **ELSM** | **LSM** | **2** | **98.05** |
| NMNIST | DECOLLE [61] | Hierarchical SNN | 2 | 96 |
| | AER-SNN [62] | Hierarchical SNN | 2 | 96.3 |
| | BPSNN [59] | Hierarchical SNN | 3 | 98.74 |
| | SLAYER [63] | Hierarchical SNN | 3 | 98.89 |
| | Ionic LSM [64] | LSM | 2 | 91.48 |
| | NAS-LSM [35] | LSM | 2 | 92.5 |
| | **ELSM** | **LSM** | **2** | **97.23** |
| | NALSM [45] | LSM | 2 | 97.51 |
| Fashion-MNIST | SL-SNN [65] | Hierarchical SNN | 3 | 85.3 |
| | Unsupervised-SNN [54] | Hierarchical SNN | 2 | 85.31 |
| | BS4NN [66] | Hierarchical SNN | 2 | 87.3 |
| | NALSM [45] | LSM | 2 | 85.84 |
| | **ELSM** | **LSM** | **2** | **88.81** |

Table 2: Comparative performance of different LSM and SNN models on MNIST, NMNIST, and Fashion-MNIST datasets.
The multi-objective neuroevolution process we designed does not directly use classification accuracy as the criterion for evaluating fitness, but instead guides the evolutionary algorithm in an efficient direction from the perspectives of the brain's physical topology and network dynamics. Remarkably, as evolution proceeds, individuals not only acquire the brain-inspired properties of small-worldness and criticality, but their classification accuracy also improves significantly, especially compared with other LSM models, achieving performance comparable to hierarchical models with lower energy consumption (the fewest layers), as shown in Table 2.
### Ablation Study
To explore the effect of different evolution goals on individual performance, we conduct ablation experiments on each dataset using four models: RLSM, ECLSM, ESLSM and ELSM. The specific accuracies of the four models on each dataset are shown in Table 3. Randomly generated LSMs have low accuracy and large variance. ECLSM reaches \(97.51\pm 0.06\%\), \(96.7\pm 0.24\%\) and \(88.54\pm 0.1\%\) on MNIST, NMNIST and Fashion-MNIST, significantly better than RLSM by 1.62%, 6.6% and 3.96%. ESLSM achieves \(97.88\pm 0.12\%\), \(96.65\pm 0.23\%\) and \(88.24\pm 0.13\%\) on MNIST, NMNIST and Fashion-MNIST, outperforming RLSM by 1.99%, 6.55% and 3.66% respectively. The evolutionary models perform better than the random LSM, among which ELSM, with small-worldness and criticality as joint evolution goals, performs best on every dataset. As shown in Table 3, ELSM has higher accuracy and smaller variance than ESLSM and ECLSM, reaching \(98.02\pm 0.03\%\), \(97\pm 0.23\%\) and \(88.78\pm 0.04\%\) on MNIST, NMNIST and Fashion-MNIST respectively.
The comparison of ESLSM, ECLSM and ELSM on different datasets is shown in Fig. 3. For criticality, ECLSM and ELSM were evolved multiple times (with different random seeds) for each dataset. The small-world property is independent of the data, so the result of a single evolution can be used for all datasets, although the selected individual \(C_{opt}\) may differ for each dataset, since it is chosen based on the training accuracy after 100 epochs.
Figure 3: Comparison of ESLSM, ECLSM and ELSM on different datasets. **a-c.** Results of evolving criticality on MNIST, NMNIST and Fashion-MNIST. The horizontal axis represents the distance between the individual's criticality and 1, as in Eq. 11. **d-f.** Results of evolving small-world properties on MNIST, NMNIST and Fashion-MNIST. The horizontal axis represents the individual's small-world coefficient, as in Eq. 4. Each green dot represents the result of training \(C_{opt}\) (the individual with the highest classification accuracy after 100 training epochs, selected every 100 generations) for 5000 epochs, fitted by a polynomial (blue line). The green marks with variance indicate the final single-objective evolution results ECLSM and ESLSM, and the red mark indicates the multi-objective evolution result ELSM.
For Fig. 3a-c, the smaller the value on the x-axis, the stronger the criticality. For Fig. 3d-f, the larger the value on the x-axis, the more pronounced the small-world characteristics. From the polynomial fits (shown in blue), it can be seen that as evolution proceeds, the fitness of each generation's \(C_{opt}\) increases continuously, and the classification accuracy increases as well. This indicates a degree of positive correlation between ELSM's indirect, time-saving evolution goals and classification accuracy. In Fig. 3a-f, the red marks are always at the top, visualizing that ELSM exhibits better performance than single-objective evolution while maximizing both evolutionary objectives.
## 3 Discussion
Current work on the performance-oriented evolution of SNN architectures is often time-consuming and fails to fully incorporate the topological properties observed in biological brains. Here, we propose an evolutionary recurrent SNN model, ELSM, which indirectly takes the more brain-inspired static small-world topological characteristics and dynamic criticality as evolution goals, replacing the time-consuming performance-oriented fitness function. ELSM achieves clas-
| Dataset | Model | Accuracy (%) |
| :--- | :--- | :---: |
| MNIST | RLSM | \(95.89\pm 0.97\) |
| | ESLSM | \(97.88\pm 0.12\) |
| | ECLSM | \(97.51\pm 0.06\) |
| | ELSM | \(98.02\pm 0.03\) |
| NMNIST | RLSM | \(90.1\pm 29.94\) |
| | ESLSM | \(96.65\pm 0.23\) |
| | ECLSM | \(96.7\pm 0.24\) |
| | ELSM | \(97\pm 0.23\) |
| Fashion-MNIST | RLSM | \(84.58\pm 1.48\) |
| | ESLSM | \(88.24\pm 0.13\) |
| | ECLSM | \(88.54\pm 0.1\) |
| | ELSM | \(88.78\pm 0.04\) |

Table 3: Final performance of models with different evolution goals on all datasets.
sification accuracies of 98.05%, 97.23% and 88.81% on MNIST, NMNIST and Fashion-MNIST respectively, outperforming the best LSM models reported so far by 0.44% and 2.97% on MNIST and Fashion-MNIST, surpassing many deep SNN models, and approaching the best performance. On NMNIST, ELSM also achieves results comparable to deep SNNs and the best LSM model. The ablation experiments confirm that the two evolutionary goals are positively correlated with classification accuracy to a certain degree, and that the performance of the evolved models far exceeds that of random LSMs. Moreover, the multi-objective evolution model (the proposed ELSM) performs better than the single-objective models ESLSM and ECLSM.
Compared with other deep SNNs, ELSM achieves performance surpassing many deep models with the fewest layers (2 layers). We further analyze additional brain-inspired structural features as follows:
### Hourglass Structure and Sparse Coding in Drosophila Mushroom Body
Some studies have found an hourglass-like mapping relationship in the mushroom body module of the _Drosophila_ brain: the nervous system converges from the ultra-high-dimensional signals provided by sensory cells to a small num
Figure 4: The evolvable LSM inspired by the _Drosophila_ mushroom body. Dashed box marks the hourglass structure found in the _Drosophila_ mushroom body, consistent with LSM.
ber of projection neurons (PNs), and conducts sparse encoding through a large number of Kenyon cells (KCs). Finally, lower-dimensional signals are extracted to characterize the real world. This is similar to the way the LSM processes information, as shown in Fig. 4. The dotted box marks the hourglass structure similar to that in the mushroom body of _Drosophila_, which is also what distinguishes the LSM from hierarchical neural networks.
### Emergence of Structural Properties Exist in the Brain
To study the effect of evolution on the brain-inspired topology of the model, we measured the changes in properties such as the clustering coefficient, communities and criticality, as shown in Table 4. The clustering coefficient measures the degree of node aggregation. Communities counts the number of communities of size 5 in the network that can communicate through 4 common nodes.
While the clustering coefficient increases, the shortest path length of the network also decreases (from 0.196 to 0.195), yet the total number of connections does not increase but decreases: the connection density of the random network is about 1%, while that of the evolved individual is 0.8%, proving that evolution does not achieve fast and efficient information transfer by adding connections, but instead reduces cost (fewer connections). After evolution, owing to the increase in hub nodes, the clustering coefficient of the network increases significantly, and more overlapping communities are found, implying that the connections between sub-networks are more intricate and highly interconnected. Under limited connectivity, larger clustering coefficients, closer community connections, shorter
| Model | Clustering coefficient (H) | Communities (k=4) | Criticality (\(|\mu-1|\)) | Density (\(\rho\)) |
| :--- | :---: | :---: | :---: | :---: |
| Random | 285.15 | 2 | 0.265 | 1% |
| **Evolved** | **319.65** | **1255** | **0.096** | **0.8%** |

Table 4: Changes in topology properties of ELSM before and after neuroevolution.
shortest paths, and more critical connectivity patterns demonstrate that our proposed multi-objective evolutionary algorithm can optimize the LSM from both static and dynamic perspectives; the evolved network architecture is more consistent with the core structural characteristics found in the human brain.
The change in the network degree distribution before and after evolution is shown in Fig. 5. Fig. 5a shows the degree distribution of the random LSM, which follows a normal distribution, with the degrees of all nodes concentrated between 100 and 200. After 1000 generations of evolution, the small-world properties of the network become apparent, and the degree distribution shows a clear long tail, as shown in Fig. 5b. A small number of hub nodes appear: among the 8000 neurons, most node degrees are concentrated between 0 and 100; the larger the degree, the fewer the nodes, with a maximum node degree of 1525.
Overall, evolved with the more biologically plausible small-world coefficient and criticality, ELSM surpasses the best LSM models reported to date on MNIST and Fashion-MNIST, and outperforms many deep SNN models on all datasets. The analysis shows that the evolved model also exhibits many topological structures similar to brain networks, such as hub nodes, communities, and short path lengths.
Figure 5: Degree distribution comparison. a. Degree distribution of the liquid layer with a random structure. b. Degree distribution of the evolved individual.
## 4 Limitations of the study
In the future, with the advancement of neuroscience, we will explore in more depth the topological properties found in brain networks, hoping to discover more effective and energy-saving brain-inspired modular structural features for SNNs that can be used to guide efficient evolution. In terms of applications, SNNs of various architectures (not limited to LSMs or other deep SNNs) can be used to constitute multiple brain areas in a globally self-organizing and co-evolving manner, realizing a variety of advanced cognitive functions and being applied to the exploration of transfer learning, continual learning and other problems, not limited to classification tasks.
## 5 Resource availability
### Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Yi Zeng ([email protected]).
### Materials availability
This study did not generate new unique reagents.
### Data and code availability
The Python scripts of ELSM can be downloaded from the GitHub repository:
[https://github.com/BrainCog-X/Brain-Cog/tree/dev/examples/Structural_Development/](https://github.com/BrainCog-X/Brain-Cog/tree/dev/examples/Structural_Development/)
## 6 Acknowledgments
This work is supported by the National Key Research and Development Program (Grant No. 2020AAA0107800), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100), the National Natural Science Foundation of China (Grant No. 62106261).
## 7 Author contributions
W.Pan, F.Zhao, B.Han and Y.Dong designed the study under the supervision of Y.Zeng. W.Pan, F.Zhao, B.Han Y.Zeng and Y.Dong performed the experiments and the analyses. W.Pan, F.Zhao and Y.Zeng wrote the paper.
## 8 Declaration of interests
The authors declare that they have no competing interests.
|
2308.11782 | Resource Allocation in Cloud Computing Using Genetic Algorithm and
Neural Network | Cloud computing is one of the most used distributed systems for data
processing and data storage. Due to the continuous increase in the size of the
data processed by cloud computing, scheduling multiple tasks to maintain
efficiency while reducing idle becomes more and more challenging. Efficient
cloud-based scheduling is also highly sought by modern transportation systems
to improve their security. In this paper, we propose a hybrid algorithm that
leverages genetic algorithms and neural networks to improve scheduling. Our
method classifies tasks with the Neural Network Task Classification (N2TC) and
sends the selected tasks to the Genetic Algorithm Task Assignment (GATA) to
allocate resources. It is fairness aware to prevent starvation and considers
the execution time, response time, cost, and system efficiency. Evaluations
show that our approach outperforms the state-of-the-art method by 3.2% at
execution time, 13.3% in costs, and 12.1% at response time. | Mahdi Manavi, Yunpeng Zhang, Guoning Chen | 2023-08-22T20:41:13Z | http://arxiv.org/abs/2308.11782v1 | # Resource Allocation in Cloud Computing Using Genetic Algorithm and Neural Network
###### Abstract
Cloud computing is one of the most used distributed systems for data processing and data storage. Due to the continuous increase in the size of the data processed by cloud computing, scheduling multiple tasks to maintain efficiency while reducing idle becomes more and more challenging. Efficient cloud-based scheduling is also highly sought by modern transportation systems to improve their security. In this paper, we propose a hybrid algorithm that leverages genetic algorithms and neural networks to improve scheduling. Our method classifies tasks with the Neural Network Task Classification (N2TC) and sends the selected tasks to the Genetic Algorithm Task Assignment (GATA) to allocate resources. It is fairness aware to prevent starvation and considers the execution time, response time, cost, and system efficiency. Evaluations show that our approach outperforms the state-of-the-art method by 3.2% at execution time, 13.3% in costs, and 12.1% at response time.
Cloud Computing, Scheduling, Resource Allocation, Neural networks, Genetic Algorithm
## I Introduction
Cloud computing is a model that enables demand-based network access for sharing a set of configured resources, including network, server, storage location, applications, and services while minimizing latency and reducing the need for management and interaction with the service provider [1]. Cloud computing enables distributed and parallel computing [2], making it a common choice for big data processing that single machines with limited RAM cannot handle [3].
When performing big data processing using cloud computing, consumers always wish to complete their work in a short amount of time at minimum cost. On the other hand, service providers aim to maximize their resource efficiency and profits. One of the main challenges here is to optimize resource allocation in cloud computing, which is becoming increasingly critical due to the growth of cloud computing consumers and the need to meet the computing demands of modern technology [4]. Recently, rising vehicle traffic and extensive cloud applications have posed a cybersecurity challenge for vehicular networks due to the limited space and computing capacity of vehicular devices [7][8]. This limitation increases the network's vulnerability to potential cyber threats, such as Distributed Denial of Service (DDoS) attacks and data breaches. Applications like these demand efficient cloud-based scheduling to enhance their security.
Methods based on greedy algorithms [5] and genetic algorithms [6] have been proposed to schedule tasks for neural network applications, providing optimal solutions. However, one notable limitation of these approaches is the high execution time required by the genetic algorithm. To reduce the computation cost of task scheduling, we propose combining neural networks and genetic algorithms into a comprehensive solution for effective resource allocation. This integrated approach proves particularly effective in addressing big data problems, optimizing both time and space utilization [9]. Our method optimizes system performance and improves resource utilization across diverse computing paradigms.
Resource allocation can indirectly affect other challenges, such as performance and load balancing. In this paper, we focus on resource allocation and scheduling. The purpose of scheduling is to assign tasks to limited resources in an appropriate way [10]. The parameters to be considered include:
* Fairness: All tasks should use the resources equally, or resources should be assigned to them according to their given weights.
* Optimal energy consumption: Turning off a number of servers and hosts to reduce energy waste in cloud computing.
* Makespan: The length of the scheduling interval; a shorter makespan means the tasks are accomplished sooner.
* Load balancing: This means that tasks are allocated to resources in a way that prevents some resources from being idle while others are overloaded.
* Cost: The total cost charged to cloud consumers for the services they need. This parameter can include several parts, such as the cost of processors, the cost of data storage, and the cost of transferring data over the network.
* System efficiency: The maximum use of resources with the minimum waste of resources and time.
Among the above parameters, fairness among tasks to prevent starvation was not considered by previous studies on resource allocation in cloud computing. Some studies have concentrated on a small number of parameters, while others have ignored parameters that affect the system's overall efficiency. For example, Godhrawala et
al. [11] focused on quality of service but not the cost of execution for each task. Additionally, in numerous papers that present a combined solution using a genetic algorithm and a neural network, the parameters of the neural network are determined using the genetic algorithm [12]. This can lead to a dependency between the two methods, and mistakes in the genetic algorithm can result in an incorrect configuration of the neural network; consequently, the results may fall short of expectations.
To address the above issues, our work makes the following contributions:
* We propose a novel scheduling method for cloud computing. Our method combines genetic algorithms with neural network techniques. Different from previous methods, we use a neural network to select the tasks to be sent to the genetic algorithm for scheduling. Our approach is customizable and can be adapted to different cloud computing environments and requirements, allowing dynamic changes in the resource allocation requirements via the weight of each parameter, and ensuring that resources are allocated optimally under changing conditions.
* Our model can be configured to consider different factors such as execution time, response time, utilization, and cost. It is also fairness aware and prevents starvation of any task in order to allocate resources optimally.
* By using a trained model for the classification and selection of an optimal set of tasks, we introduce a new model that improves scalability by efficiently allocating resources to meet the increasing demand for cloud computing resources. The approach can adapt to changes in the workload and allocate resources accordingly, ensuring that applications have access to the required resources.
* Our Neural Network Task Classification (N2TC) and Genetic Algorithm Task Assignment (GATA) can be used to gain important insights into how to allocate resources. By studying historical data and forecasting future resource requests using our methodology, cloud providers can allocate resources more intelligently, improving overall performance and reducing costs.
Compared to the state-of-the-art methods, our approach leads to a 3.2% reduction in execution time, a 13.3% reduction in cost, and a 12.1% improvement in response time.
## II Related Work
In this section, we briefly review the works that are closely related to the proposed method. We have classified these related works into two categories: metaheuristic-based resource scheduling and dynamic resource allocation.
### _Metaheuristic-based resource scheduling_
Alkayal et al. [13] proposed a Particle Swarm Optimization (PSO) algorithm to optimize cloud computing resource scheduling for increased efficiency. The system prioritizes tasks based on length and assigns them to virtual machines that are mapped to physical machines in the data center. Mezmaza et al. [14] used a parallel hybrid genetic algorithm to find the optimal set. They used the island model to migrate tasks. Their method is energy aware and reduces the makespan. Their cloud model is implemented in a data center composed of heterogeneous machines, and it has been implemented using ParadisEO. Mocanu et al. [15] proposed a genetic algorithm that uses roulette-wheel selection of chromosomes. The method uses elitism in choosing chromosomes and considers a threshold level of 20 for creating generations. The goal is to minimize the execution time. The fitness function focuses on utilization and is computed by dividing the total assigned input sizes by the makespan. Geetha et al. [16] proposed an integrated neural network and genetic algorithm for scheduling. They reduced context switching in the processor to save energy. Their approach handles unlimited requests in a parallel and distributed system, and they also focused on a federated cloud. Zhou et al. [17] presented a Growable Genetic Algorithm (GGA) using a Heuristic-based Local Search Algorithm (HLSA) and a random multi-weight-based algorithm. Their method introduces a growth stage to the genetic algorithm, resulting in GGA, which allows individuals to evolve through different growth routes. Ajak et al. [18] introduced a Directed Acyclic Graph (DAG) scheduling model aimed at optimizing the quality-of-service parameters on the cloud computing platform. Their primary objective is makespan optimization through the appropriate allocation of tasks to nodes and the arrangement of the execution sequence of jobs/tasks. To achieve near-optimal solutions, the proposed model leverages resource provisioning and heuristic techniques.
### _Dynamic resource allocation_
Praveenchandar et al. [19] proposed a dynamic resource allocation method that is energy-aware and considers the size of a task and the inter-arrival time. The method improves response time, resource utilization, task completion ratio, and makespan, and it improves the efficiency of the dynamic resource allocation process. The authors used CloudSim to simulate the method and compared the model with first-come-first-served and round-robin. In the model, they used a dynamic resource-table updating method. Semmoud et al. [20] introduced a technique to achieve load balancing on the network and minimize idle time and makespan. The authors limited task migrations when the load of a VM is greater than the starvation threshold and used task priority levels for quality of service in cloud computing. They used CloudSim for the simulation, considering sixteen data centers located in different regions, each with five physical machines. Shin et al. [21] proposed a multiple adaptive resource allocation scheme with a real-time supervisor. They used hybrid cloud services for the industrial internet of things to implement their model. To improve response time and reduce cost, they provided the optimal number of virtual machines. In addition, they used Karush-Kuhn-Tucker optimization applied to a continuous-time Markov chain, and all resources in the public and private clouds are fully considered.
In [13]-[19] and [21], fairness is not considered; thus, there is a possibility of starvation for tasks. Moreover, due to the use of penalty functions in [18], [20], and [21], the computational overhead can be high. More importantly, some approaches pay little attention to balancing critical parameters in resource allocation, leading to sub-optimal scheduling. Our method aims to address these issues.
## III Proposed Approach
In this section, we provide a detailed description of our approach, including the architecture and the involved algorithms of our approach. We used a hybrid cloud to implement our model which combines both public and private cloud services.
### _Architecture_
Task scheduling is essential in cloud computing. Since the cloud provider has to deal with many user applications, task scheduling can no longer be handled by traditional schedulers [22]. Figure 1 illustrates the architecture of our proposed system, which consists of 5 components:
* Scheduler: It consists of 2 components, Neural Network Task Classification (N2TC) and Genetic Algorithm Task Assignment (GATA), which is the main module for scheduling and all computation processes are performed on it.
* Dispatcher: It sends tasks to the resources based on the status of the resources and binds the tasks to the resources.
* Resources: All server-side tools and applications that are used to respond to client requests, such as fetching a file or serving a computation request.
* Resource table: It shows the current status of the resources and the number of tasks each resource is currently running. The resource table is necessary to keep track of the resources' status and to use any resource that is idle during the current scheduling period.
* Clients: Entities that want to use cloud computing services.
In our architecture, tasks are first sent by clients to the RAM. We use online mode to send a task. Depending on which resources the task requires, the RAM sends a request to the resource table module to determine the status of the intended resources. According to the number of available resources, the appropriate tasks are selected by N2TC and GATA and then transmitted to the dispatcher along with the free-resource specification. The dispatcher then transfers the received tasks to the resources. The role of the dispatcher is to send tasks to the desired resources and to ensure those tasks are received by the resources (e.g., via resource acknowledgments). Next, the status of the resources in the resource table is updated, and the system waits for the next task to be assigned by the scheduler. The resource table is intended to monitor the resource status of the network; when the scheduler is aware of the current state of resources on the network, it can perform scheduling more efficiently. Also, the resources in this architecture host a number of virtual machines to increase the speed of task execution. The proposed architecture increases the accuracy of selecting the appropriate tasks, as well as the convergence towards the optimal tasks, by classifying the tasks and then selecting the optimal set within the scheduler.
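To make this data flow concrete, the following is a minimal, self-contained Python sketch of one scheduling cycle. All class and function names here (ResourceTable, select_tasks, scheduling_cycle) are hypothetical illustrations rather than identifiers from the paper, and the N2TC/GATA selection is reduced to a trivial placeholder.

```python
# Hypothetical sketch of one scheduling cycle (cf. Figure 1); names are
# illustrative only, and N2TC/GATA are replaced by a trivial placeholder.

class ResourceTable:
    """Tracks which resources are idle or busy."""
    def __init__(self, n_resources):
        self.busy = {r: None for r in range(n_resources)}

    def idle_resources(self):
        return [r for r, task in self.busy.items() if task is None]

    def mark_busy(self, resource, task):
        self.busy[resource] = task

def select_tasks(tasks, n_free):
    # Placeholder for N2TC classification + GATA selection:
    # here we simply take the first n_free tasks.
    return tasks[:n_free]

def scheduling_cycle(tasks, table):
    free = table.idle_resources()              # query the resource table
    selected = select_tasks(tasks, len(free))  # scheduler picks tasks
    for task, resource in zip(selected, free):
        table.mark_busy(resource, task)        # dispatcher binds task to resource
    return selected

table = ResourceTable(n_resources=3)
print(scheduling_cycle(["t1", "t2", "t3", "t4", "t5"], table))  # ['t1', 't2', 't3']
```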
### _Mathematical Model_
In this section, we present the definitions, methods, and equations used in N2TC and GATA that are the essential steps of RAM module shown in Figure 1. In particular, we provide the necessary formulas for weighing the tasks and their classification based on the desired parameters and a fitness measurement. Table I lists the abbreviations used in this paper.
Equation (1) computes the weight for a task \(i\) based on the parameters of the execution time, cost, and system efficiency. We assign the initial weight to each parameter based on our system condition dynamically. We estimate the parameter values for a new task that is added to the network, using limited historical data [23].
\[TW_{[i]}=[WP(ET)\times ET_{[i]}]+[WP(C)\times C_{[i]}]+[WP(SE)\times SE_{[i]}] \tag{1}\]
Fig. 1: High-level architecture consisting of a scheduler, clients, dispatcher, resource table, and resources.
where WP(ET) is the weight determined for execution time. WP(C) is the weight of cost, and WP(SE) is the weight of system efficiency. \(ET_{[i]}\) is the execution time, \(C_{[i]}\) is the cost, and \(SE_{[i]}\) is the system efficiency of task \(i\).
In Equation (2), the weight of a task is compared with the average weights of different classes of tasks. The tasks with similar weights are grouped into a class. \(TW_{[i]}\) is the total weight of task \(i\) (Equation (1)), \(CW_{[r]}\) is the average weight of class r, and \(TC_{[r]}\) represents the tasks in the class r.
\[\forall i\in[1..n]:\ |TW_{[i]}-CW_{[r]}|<\epsilon\ \Rightarrow\ TC_{[r]}=\{i\}\cup TC_{[r]} \tag{2}\]
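As a toy illustration of Equations (1)-(2), the sketch below weighs a single task and assigns it to the class whose average weight is closest. The parameter weights, class averages, and epsilon value are made-up numbers, not values from the paper.

```python
# Toy sketch of Equations (1)-(2); all numbers are illustrative assumptions.

WP = {"ET": 0.5, "C": 0.3, "SE": 0.2}   # parameter weights, set per system condition

def task_weight(et, c, se):
    # Equation (1): TW[i] = WP(ET)*ET[i] + WP(C)*C[i] + WP(SE)*SE[i]
    return WP["ET"] * et + WP["C"] * c + WP["SE"] * se

def classify(tw, class_avg_weights, eps=0.5):
    # Equation (2): put task i into class r if |TW[i] - CW[r]| < eps
    for r, cw in enumerate(class_avg_weights, start=1):
        if abs(tw - cw) < eps:
            return r
    return len(class_avg_weights)       # fall back to the lowest-priority class

tw = task_weight(et=2.0, c=1.5, se=0.8)                      # TW = 1.61
print(tw, classify(tw, class_avg_weights=[1.8, 3.0, 4.5]))   # -> class 1
```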
Equation (3) determines the value of the fitness for each task based on response time and cost. The goal is to minimize the value of the fitness function. \(SC_{[r]}\) is the size of class r, q is the class number, and p is the task number. \(TC_{[q,p]}\) denotes task p in class q. F is the parameter of Fairness.
\[\min\left[\sum_{q=1}^{r}\sum_{p=1}^{SC_{[q]}}\left(RT(TC_{[q,p]})+C(TC_{[q,p]})\right)\times F\right] \tag{3}\]
If a task \(i\) is waiting in the queue \(Q\) from the previous scheduling round, the value of its fitness function is improved using Equation (4) (i.e., multiplied by a factor of 0.9 to reduce the fitness value). This increases the chance that tasks waiting in the queue are executed in the next iteration.
\[\begin{cases}F_{i}=0.9&\text{for }i\in Q\\ F_{i}=1&\text{otherwise}\end{cases} \tag{4}\]
### _Proposed Scheduling Algorithm_
Algorithm 1 describes the proposed scheduling process for all input tasks. First, each task is added to the appropriate class based on its weight attributes; we implement 3 classes. In the next step, if the task has been waiting in the queue from the previous scheduling round, we improve the rank of the class to which the task is assigned. For example, if the class of the task is 2 and it has been waiting in the queue, the task is transferred to class 1. When all tasks are placed in the appropriate classes, the number of idle resources in the network is compared with the number of tasks in class 1. If the number of idle resources is less than or equal to the number of tasks in class 1, then only the tasks of class 1 are sent to the genetic algorithm; otherwise, the tasks of class 2 are also sent. If idle resources are still available after sending all tasks in classes 1 and 2, the tasks in class 3 are sent as well. The number of idle resources matters because resources are limited; depending on how many tasks are active on the resources at any given time, the number of idle resources varies throughout execution.
```
Require: List of tasks
Ensure: Optimum set of tasks
1: for i = 1 to n do
2:   for j = 1 to 3 do
3:     if (Task i is similar to a set of class j) then
4:       if (Task i is in waiting queue and j > 1) then
5:         Add task i to class (j - 1)
6:       else
7:         Add task i to class j
8:       end if
9:     end if
10:   end for
11: end for
12: while (NumberOfIdleResources > size of task set) do
13:   Add tasks of class 2 or class 3 to the task set for scheduling
14: end while
15: Initial population(task set)
16: if (gene corresponds to a task in waiting queue) then
17:   Improve fitness of gene by 10%
18: end if
19: Sort chromosomes by fitness (best first)
20: while ((!Feasible solution) AND (Iteration != Max)) do
21:   Select parents by elitism
22:   Apply two-point crossover
23:   Mutate a gene of a chromosome
24:   Local search around the mutated gene to find a better replacement
25: end while
26: return SetOfTasks
```
**Algorithm 1** Proposed Algorithm
Next, the selected tasks are fed to the genetic algorithm. The initial population is constructed randomly, and the value of the fitness function for each chromosome is calculated. If a task has been waiting in the queue from the previous scheduling round, the fitness value of the corresponding gene is improved by 10% (i.e., multiplied by 0.9, cf. Equation (7)). The chromosomes are then sorted according to their fitness values, and the fittest chromosomes are used for offspring generation. We use the two-point method to apply the crossover operator to them. To apply the mutation operator, the mutation is first performed on the desired gene. Then, a local search is performed around the mutated gene, so that if there is a gene with a better fitness value, it is selected. This process is repeated until the optimal set is found or the number of iterations exceeds the predetermined maximum. The time complexity analysis of our algorithm is as follows. The first nested loop contributes \(O(n)\) to the overall time complexity, where \(n\) is the number of tasks. The while loop depends on the number of idle resources \(m\), and its time complexity is \(O(m)\). The initial population setup has a time complexity of \(O(n)\). The sorting process takes \(O(n\log n)\) time. The last while loop has a time complexity of \(O(kn)\), where \(k\) represents the number of iterations until a feasible solution is found. The overall time complexity is therefore \(O(n\log n+kn+m)\); if \(k\) and \(m\) are much smaller than \(n\), it simplifies to \(O(n\log n)\).
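The GA stage can be sketched compactly as below. This is a hypothetical re-implementation of the loop in Algorithm 1, not the paper's code: the fitness function is a toy stand-in for Equation (7), and the population size, gene count, and resource ids are arbitrary.

```python
import random

# Hypothetical sketch of the GA stage: elitist selection, two-point crossover,
# and mutation followed by a local search around the mutated gene.
# Lower fitness is better; fitness() is a toy stand-in for Equation (7).

N_RESOURCES, GENES, POP, MAX_ITER = 20, 10, 50, 100
random.seed(0)

def fitness(chrom):
    return sum(chrom)                          # toy objective, lower is better

def two_point_crossover(p1, p2):
    i, j = sorted(random.sample(range(1, GENES), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def mutate_with_local_search(chrom):
    g = random.randrange(GENES)                # pick the gene to mutate
    # local search: keep the resource id that gives the best fitness at gene g
    best = min(range(N_RESOURCES),
               key=lambda r: fitness(chrom[:g] + [r] + chrom[g + 1:]))
    return chrom[:g] + [best] + chrom[g + 1:]

pop = [[random.randrange(N_RESOURCES) for _ in range(GENES)] for _ in range(POP)]
for _ in range(MAX_ITER):
    pop.sort(key=fitness)                      # elitism: keep the fittest parents
    elite = pop[:POP // 2]
    children = [mutate_with_local_search(two_point_crossover(*random.sample(elite, 2)))
                for _ in range(POP - len(elite))]
    pop = elite + children
pop.sort(key=fitness)
print(pop[0], fitness(pop[0]))                 # best task-to-resource assignment found
```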
### _N2tc_
N2TC is used to classify the input tasks entering the cloud computing environment and operates on the basis of a neural network. We use a feed-forward backpropagation neural network for our model.
#### Iii-D1 Data Preparation
In the first step, the data are partitioned: 70% of the data is used for training, 15% for network validation, and 15% for network testing. Although there is no fixed ratio, 70:30 is typically regarded as the norm [25]. The
data for training, validation, and testing are selected randomly so that the performance of training, validation, and testing is enhanced.
#### Iii-D2 Transform
Since the sigmoid logarithmic transfer function is differentiable, it is commonly used in multi-layer networks trained by the backpropagation algorithm. This function is given by Equation (5):
\[A=\frac{1}{1+e^{-net}},\quad\text{where }net=\sum_{i=1}^{n}W_{[i]}X_{[i]} \tag{5}\]
\(X_{[i]}\) is the input of the function and \(W_{[i]}\) is the weight of the \(X_{[i]}\).
#### Iii-D3 Training
The scaled conjugate gradient method is used to update the weight and bias of the data, and training is terminated if:
* The maximum number of epochs has been reached
* The training time exceeds the maximum limit
* The network's performance (error) falls below a threshold
* The gradient of the performance curve is below the minimum
* The validation performance has decreased since the last check.
#### Iii-D4 Performance Function
Equation (6) defines the performance function, which computes the mean of the squared errors between the output and the target. Decreasing network performance is one of the cases that stop training early: if the difference between the output and the target increases, network training is stopped.
\[Performance=\frac{1}{n}\sum_{i=1}^{n}(Y^{*}(i)-Y(i))^{2} \tag{6}\]
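The two functions above can be illustrated numerically in a few lines; the weights, inputs, and target/output vectors below are arbitrary examples, not values from the trained N2TC network.

```python
import numpy as np

# Numerical sketch of the transfer function (Equation (5)) and the
# performance measure (Equation (6)); all values are arbitrary examples.

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

w = np.array([0.4, -0.2, 0.7])       # weights W[i]
x = np.array([1.0, 2.0, 0.5])        # inputs  X[i]
a = sigmoid(np.dot(w, x))            # A = 1 / (1 + e^{-net}), net = sum W[i]*X[i]

y_target = np.array([1.0, 0.0, 1.0])
y_output = np.array([0.9, 0.2, 0.7])
performance = np.mean((y_target - y_output) ** 2)   # Equation (6), MSE
print(a, performance)
```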
Another feature considered for this network is memory reduction, which speeds up network execution. A higher number of hidden layers enhances the network accuracy but increases the run-time. The number of hidden layers of the neural network is set to 20, which produced the best results in our evaluation.
The tasks in N2TC are divided into three categories, and the high-priority tasks are sent to GATA. The task classification criteria in N2TC include execution time, cost, and system efficiency. Tasks left in the waiting queue from previous periods are promoted by one level to give them an opportunity to run and to prevent starvation. Also, if the number of first-priority tasks is lower than the number of resources available in the cloud, tasks with lower priority are also sent to GATA so that the maximum number of resources on the network is used.
### _Gata_
GATA is designed to select the tasks that use the cloud computing resources, based on the genetic algorithm. In our proposed approach, decimal encoding is used to represent the chromosomes, because binary encoding increases the amount of data storage. Every resource has a decimal identifier, and each gene in the chromosome stores this number.
#### Iii-E1 Initial Population
The first stage of the genetic algorithm is the generation of the initial population. To prevent premature convergence, the initial population is selected randomly to cover a wide range of data. The fitness of a chromosome is based on the fitness of its genes; the initial population size is 500.
#### Iii-E2 Selection Function
In this operator, a number of chromosomes are selected from the population for reproduction. The elitism method is used to select the parent chromosomes: the chromosomes are first sorted by fitness value, and those with the best fitness values are used for the child-generation stage. This method increases the convergence rate towards the optimal response.
#### Iii-E3 Crossover
As part of the crossover process, parts of the chromosomes are exchanged randomly. This gives the children a combination of their parents' characteristics without exactly resembling either parent. In our model, we use the two-point crossover approach, in which different parts of the parent chromosomes are selected for the production of children.
#### Iii-E4 Mutation
After completing the crossover, the mutation operator is applied. This operator randomly selects a gene from the chromosome and changes the content of that gene. The mutation operator is used to avoid getting stuck in a local maximum or minimum. The mutation probability in our model is 5%.
#### Iii-E5 Fitness
To solve the problem using the genetic algorithm, an appropriate fitness function must be developed. If a suitable function is selected, faster convergence is obtained, the algorithm runs more quickly, and the optimal answer is selected. As shown in Equation (7), we consider the response time and cost in the fitness function.
\[fitness=\begin{cases}\sum_{i=1}^{n}\left(RT(i)+MR(i)\right)&\text{if }Task_{[i]}\notin Q\\ 0.9\times\sum_{i=1}^{n}\left(RT(i)+MR(i)\right)&\text{otherwise}\end{cases} \tag{7}\]
In the fitness function, the minimum fitness value represents the optimality of the chromosome. MR(i) is the number of resources required for task(i); since a task that needs fewer resources is cheaper to run, a smaller MR(i) is more desirable in terms of cost. RT(i) indicates the response time of task(i), which should be minimized. After determining the fitness
Fig. 2: The sigmoid transfer function of Equation (5).
function for each gene, the fairness parameter is applied by checking whether task(i) has remained in the queue from the previous scheduling round. If it is in the queue, its fitness value is improved by 10% (multiplied by 0.9) to give it a better chance of obtaining resources and thus prevent starvation.
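One possible reading of Equation (7) together with the fairness rule is sketched below; the response times, resource counts, and queue contents are invented examples.

```python
# Hypothetical sketch of the chromosome fitness of Equation (7): the sum of
# response time and resource demand, discounted by 10% (factor 0.9) when the
# task waited in the queue Q from the previous scheduling round.

def chromosome_fitness(tasks, rt, mr, waiting_queue):
    total = 0.0
    for t in tasks:
        term = rt[t] + mr[t]
        if t in waiting_queue:       # fairness: improve (reduce) the term by 10%
            term *= 0.9
        total += term
    return total                     # lower fitness = more desirable chromosome

rt = {"t1": 3.0, "t2": 5.0}          # response times RT(i)
mr = {"t1": 2.0, "t2": 1.0}          # resources required MR(i)
print(chromosome_fitness(["t1", "t2"], rt, mr, waiting_queue={"t2"}))  # 5.0 + 0.9*6.0
```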
## IV Evaluation
We use the Google cluster-traces v3 dataset [27] for evaluation. The Google dataset concentrates on resource requests and usage, without any information about end users, their data, or storage systems. It consists of 405,894 rows.
Table 2 shows the hardware system used to run the proposed approach. Three assumptions are made for the proposed approach:
* The tasks are independent of each other, i.e., executing task i does not require task j to be executed before it.
* Tasks do not have a deadline.
* Tasks in our network are non-preemptible; resources are not released until the task is completed.
We use MATLAB to implement and evaluate our proposed approach. All tasks have been entered into the network and are awaiting scheduling. During the scheduling, we choose 10 tasks for execution based on the specified parameters. In our model, the evaluation is conducted using a set of 10 tasks, which we have determined to be sufficient for assessing the performance and effectiveness of our approach. In our genetic algorithm, each gene in the chromosome corresponds to a specific task. It is common for genetic algorithms to utilize a relatively small number of genes in each chromosome, as observed in studies such as [28] and [29], where the size of the chromosomes typically ranges from 8 to 12 genes. However, it is important to note that as the size of the chromosome increases, the computational complexity of the algorithm also grows [30]. This poses a significant constraint, as the computational demands escalate with larger chromosomes. The response time, execution time, cost, and performance of particular tasks vary. In this section, we demonstrate that the set of tasks chosen by our model is superior to the set chosen by other approaches. As we mentioned earlier, N2TC receives all tasks and classifies them based on metrics for execution time, cost, and system efficiency.
Figure 3 presents the network performance graph. Training finished after 27 epochs, with the best performance at epoch 21. The greater the curvature of the test curve, the greater the probability of over-fitting in the network. The descending trend of the graph indicates good network performance. Next, the results of GATA are addressed; the most important part of the genetic algorithm is the fitness function.
In the next step, the tasks classified in class 1 (and, in some situations, the tasks classified in class 2) are sent to GATA to find an optimal set of tasks for execution. The length of the optimal set is 10, which means 10 tasks are selected for execution.
Figure 4 presents the value of the fitness function during the search for the optimal set of tasks.
Since a set of tasks is optimal when the fitness value of its chromosome is minimized, the descending trend of the graph indicates the suitability of the GATA configuration.
Next, we compare the results of the proposed approach with two widely used algorithms in this field, namely First In First Out (FIFO) and Shortest Job First (SJF), as well as with two related works (Mezmaz et al. [14] and Mocanu et al. [15]) that are considered among the best methods in this domain. These related works provide valuable insights and serve as benchmarks for comparison, as they have achieved significant advances in resource allocation techniques.
In Figure 5, the execution times of the tasks with the five methods are shown, respectively. Among the 10 selected tasks to run in each of the five methods, the proposed solution has
Fig. 4: The fitness value of selection tasks in each generation.
Fig. 3: Performance of the network during the training, validation, and testing processes. We use cross-Entropy to check the condition of our model in each epoch. In our model, we have 27 epochs.
a lower execution time in general and SJF has the longest execution time.
The system utilization rate is shown in Figure 6. An ideal utilization rate is achieved if the available resources on the network are used maximally, and the idle resources are minimized. In other words, the higher the utilization rate, the better the scheduling is. At this point, the solution provided by Mezmaz et al. [14] has the worst utilization rate, while our method has the second-best utilization rate.
Figure 7 compares the costs of executing the tasks with the five methods, respectively. We see that the lowest cost of completing all 10 tasks (i.e., adding all costs for the 10 individual tasks) is with the proposed approach.
Figure 8 shows the response-time graph of the five scheduling methods. This measure is the time interval between sending the task to the cloud and receiving the first response from the network at the user. The comparison shows that our method has the shortest response time for all tasks except task 10.
Generally, according to the presented graphs, it can be concluded that the proposed approach has the best performance among the five solutions, and it can be used in a wide range of applications.
Table 3 presents a summary of the performance of the stated strategies, indicating the overall average over the ten selected tasks for each solution. We normalized the value of each task to the range [0, 1] to simplify the results. Compared to the average performance of the existing state-of-the-art methods, the proposed method improves execution time by about 3.2%, cost by 13.3%, and response time by 12.1%.
In particular, the proposed method has the best response time for nine out of ten tasks among all methods and the best costs for executing all 10 tasks. Our method also has the second-best performance in utilization rate and the best overall performance to improve the execution time. The results from the graphs indicate that the proposed model not only prevents
Fig. 5: The execution times of the 10 tasks scheduled with five different methods.
Fig. 8: Response times of executing the 10 tasks with the five scheduling methods.
Fig. 6: System utilization rate with the five scheduling methods.
Fig. 7: Costs of executing the 10 tasks with five different methods.
task starvation but also has a positive impact on the best task selection and the improvement of the aforementioned parameters. For this reason, the proposed approach outperforms the methods already mentioned.
## V Conclusion and Future Work
Resource allocation is considered one of the major challenges in cloud computing, and many efforts have been made in this field. Since heuristic methods produce better results in large environments, they are more popular. In the proposed approach, we used a combination of genetic algorithms and neural networks to solve the problem of scheduling and resource selection and to obtain optimal resource assignments in cloud computing. In the future, load balancing can be included in the defined parameters. Also, for tasks that have a deadline, priorities will be considered so that they can be run at the right time. In addition, this method can be extended to tasks that are dependent on each other. Furthermore, the proposed efficient cloud-based scheduling can be applied by transportation systems to improve cybersecurity, enabling immediate responses to potential threats and consolidating security monitoring. By harnessing the flexibility and effectiveness of cloud computing, transportation systems can strengthen their cybersecurity measures and enhance their ability to withstand cyber threats. We will explore this in the future.
## Acknowledgment
This work is funded by the US Department of Transportation (USDOT) Tier-1 University Transportation Center (UTC) Transportation Cybersecurity Center for Advanced Research and Education (CYBER-CARE) (Grant No. 69A3552348332), and Theorizing Connected Vehicle (CV) Based Advanced Traffic Management System (ATMS) Vulnerability Analysis and Strategizing for Cyber Security (Grant No. I0509667).
|
2305.09125 | A deep learning method for multi-material diffusion problems based on
physics-informed neural networks | Given the facts of the extensiveness of multi-material diffusion problems and
the inability of the standard PINN(Physics-Informed Neural Networks) method for
such problems, in this paper we present a novel PINN method that can accurately
solve the multi-material diffusion equation. The new method applies continuity
conditions at the material interface derived from the property of the diffusion
equation, and combines the distinctive spatial separation strategy and the loss
term normalization strategy to solve the problem that the residual points
cannot be arranged at the material interface, the problem that it is difficult
to express non-smooth functions with a single neural network, and the problem
that the neural network is difficult to optimize the loss function with
different magnitudes of loss terms, which finally provides the available
prediction function for a class of multi-material diffusion problems. Numerical
experiments verify the robustness and effectiveness of the new method. | Yanzhong Yao, Jiawei Guo, Tongxiang Gu | 2023-05-16T03:11:13Z | http://arxiv.org/abs/2305.09125v1 | A deep learning method for multi-material diffusion problems based on physics-informed neural networks
###### Abstract
Given the facts of the extensiveness of multi-material diffusion problems and the inability of the standard PINN(Physics-Informed Neural Networks) method for such problems, in this paper we present a novel PINN method that can accurately solve the multi-material diffusion equation. The new method applies continuity conditions at the material interface derived from the property of the diffusion equation, and combines the distinctive spatial separation strategy and the loss term normalization strategy to solve the problem that the residual points cannot be arranged at the material interface, the problem that it is difficult to express non-smooth functions with a single neural network, and the problem that the neural network is difficult to optimize the loss function with different magnitudes of loss terms, which finally provides the available prediction function for a class of multi-material diffusion problems. Numerical experiments verify the robustness and effectiveness of the new method.
keywords: multi-material diffusion equation, deep learning method, physics-informed neural networks, flux continuity condition, domain separation strategy.
## 1 Introduction
Diffusion equations are an important class of partial differential equations that need to be studied in many applications such as groundwater seepage [1], oil reservoir simulation [2], nuclear reactions [3], etc. They can formulate both the diffusion process of material concentration in space, and the energy transfer process in radiative heat conduction problems [4; 5]. Their numerical solution methods have been a hot research topic in the field of scientific and engineering computation.
In recent years, the technology of deep learning for solving partial differential equations has developed rapidly. Researchers have developed a new method
for solving partial differential equations by introducing Physical Information into Neural Networks, which is known as the PINN method [6; 7; 8; 9]. The (continuous) PINN method described in Ref. [7] will be referred to as _the standard PINN method_ in the remainder of this paper. The PINN method takes the definite solution condition as the supervised learning part, and combines the automatic differential technology to take the approximation degree of the governing equation as the residual part. Two parts together form the loss function as the training objective of the network prediction. This technique makes the network output obtained by the optimization algorithm not only satisfy the definite solution condition, but also satisfy the governing equation.
Compared with traditional numerical methods, such as the finite element method (FEM) and the finite volume method (FVM), the PINN method has advantages in some aspects: for example, it requires no grid generation and adapts well to high-dimensional problems. However, the PINN method still faces many challenges in practical applications, one of which is how to efficiently solve heterogeneous diffusion equations on a computational domain involving multiple materials.
For the multi-material diffusion problem, there will be material interfaces, and because the materials on both sides of the interface have different physical properties, their diffusion coefficients or heat conduction coefficients will have large differences, and they will have jumps at the interface, and such jumps will cause the derivatives of the solution function to necessarily have jumps at the interface, i.e., the solution function is not continuously differentiable at the interface, and the second-order partial derivatives of the solution function will not exist at all. This problem poses two major difficulties for the standard PINN method: _(1) the PINN method generally produces a smooth prediction function, so it is difficult for the standard PINN method to obtain a prediction function that is not continuously differentiable [10]; (2) since the solution function does not have second-order partial derivatives at the interface, the standard PINN method based on the automatic differentiation technique cannot incorporate the sampled points at the interface as residual points in the training of a single neural network [11]. This inevitably leads to an inaccurate solution near the interface._ However, it is well known that the structure of the solution near the interface is generally very complex, and the computational accuracy for this position can seriously affect the overall computational accuracy of the computational model. Obtaining high accuracy numerical solutions near the interface is a very challenging but necessary task for any numerical method.
To solve the equation with a non-smooth solution at interfaces using the PINN method, the following two strategies are most commonly used. One is to accept the fact that the neural network will make an incorrect prediction near the interface. To get useful predictions in the part away from the interface, the points at the interface are not sampled, or their residuals are weighted so that the loss near the interface tends to 0. In Ref. [12], Xie et al. designed a weighted function to handle the possible jump of the diffusion coefficient across the material interface. The other idea is to use different neural networks for each subdomain, so that the outputs of the multiple networks can express the
functions with non-smoothness at the interface. In Ref. [11], He et al. pointed out that it is inefficient to use only one neural network structure to capture the interface, and they proposed a piece-wise neural network structure and an adaptive resampling strategy to solve the elliptic interface problems. As a result, the PINN method in combination with domain decomposition techniques has received increasing attention. For solving nonlinear PDEs in complex geometry domains, extended PINNs (XPINNs) based on domain decomposition techniques have been proposed in Ref. [13]. In Ref. [14], a Distributor-PINN method (DPINN) was proposed, which decomposes the computational domain into several disjoint subdomains and trains several sub-networks simultaneously by minimizing the loss function, and each sub-network is used to approximate the solution on a subdomain. Deep DDM in [15] is another PINN method based on domain decomposition techniques to solve two-dimensional elliptic interface problems. In Ref.[16], the author showed that the treatments in the above domain decomposition-based PINN methods could also suffer from convergence issues or have the drawback of low accuracy by the experiments, and they developed a dynamic weighting strategy that adaptively assigns appropriate weights to different terms in the loss function based on the multi-gradient descent algorithm [17].
In view of the facts of the extensiveness of multi-material diffusion problems and the inability of the standard PINN for such problems, this paper attempts to propose effective strategies to overcome the above two difficulties, including that a single neural network is difficult to express the function with different derivatives on two sides of the interface, and the standard PINN method cannot arrange effective sampling points on the interface, so as to construct a novel PINN method named as _DS-PINN_, which can accurately solve the multi-material diffusion equation using a single neural network. In addition, we develop a normalization strategy for the loss terms and present the _nDS-PINN_ method, which further improves the prediction accuracy.
The rest of this paper is organized as follows: in section 2, we do some preliminary work by giving the governing equation of the multi-material diffusion problem and its standard PINN form; in section 3, we discuss in detail how to improve the standard PINN to obtain an available prediction function for the multi-material diffusion problem; in section 4, we give several numerical examples to verify the effectiveness of our method; the conclusion about the new PINN method is drawn in the last section.
## 2 Preliminaries
In this section, we first describe a class of multi-material diffusion problems and then give the standard PINN method for solving them.
### Physical model and governing equation
A class of linear diffusion problems on the domain containing different materials can be formulated as follows:
\[-\nabla\cdot\kappa(X)\nabla u=Q\left(X\right),X\in\Omega, \tag{2.1}\]
with the Dirichlet boundary condition
\[u(X)=g(X),X\in\partial\Omega, \tag{2.2}\]
where \(u=u(X)\) is the function to be solved, \(\Omega\) is an open domain in \(\mathbb{R}^{d}\) with the boundary \(\partial\Omega\). For multi-material problems, \(\Omega\) is composed of several subdomains containing different materials. The source term \(Q(X)\) and the boundary condition \(g(X)\) are bounded in their domains of definition. The boundary condition can also be of Robin or mixed type. Note that, the diffusion coefficient
\[\kappa(X)=\kappa_{i}(X),\text{ for }X\in\Omega_{i},\text{ }i=1,2,\cdots,N, \tag{2.3}\]
where \(\Omega_{i}\) denotes a subdomain containing a certain material, \(\kappa_{i}(X)\) is the diffusion coefficient on that subdomain, and \(N\) denotes the number of material type. \(\kappa_{i}(X),i=1,\ldots,N\) are smooth functions on their respective subdomains, but they may not be equal at the material interface.
To simplify the description, this paper mainly discusses two-dimensional problems, and the same idea can also be applied to three-dimensional problems.
Figure 2.1 shows a 2D domain containing two materials, and \(\Gamma\) is the material interface. If \(\kappa_{1}(X)\neq\kappa_{2}(X),X\in\Gamma\), then according to Eq. (2.1), one can get
\[\nabla u(X)|_{\Gamma^{-}}\neq\nabla u(X)|_{\Gamma^{+}}. \tag{2.4}\]
Thus, the solution \(u\) is not continuously differentiable at the material interface \(\Gamma\), and its second-order partial derivatives are absent.
### Standard PINN method for diffusion equations
The PINN method produces the prediction function \(u_{\theta}\) as an approximation of the solution of Eqs. (2.1)-(2.3), where \(\theta\) denotes the neural network parameters, including the weights and biases of the neural networks. Later in this paper, we refer to \(u_{\theta}\) as _the prediction_.
Figure 2.1: A 2D domain with two materials.
For the standard PINN method, \(\theta\) are obtained by optimising the loss function, and the loss function consists of two parts as follows:
\[\mathcal{L}(\theta;\Sigma)=w_{b}\mathcal{L}_{b}(\theta;\tau_{b})+w_{r}\mathcal{L }_{r}(\theta;\tau_{r}), \tag{2.5}\]
where \(w_{b}\) and \(w_{r}\) are the weights for the two parts of the loss function, respectively, and the _supervised loss term_ and the _residual loss term_ are
\[\mathcal{L}_{b}(\theta;\tau_{b}) =\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|u_{\theta}(X_{i})-g(X_{i} )\right|^{2}, \tag{2.6}\] \[\mathcal{L}_{r}(\theta;\tau_{r}) =\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|-\nabla\cdot\kappa(X_{i} )\nabla u(X_{i})-Q\left(X_{i}\right)\right|^{2}. \tag{2.7}\]
\(\Sigma=\{\tau_{b},\tau_{r}\}\) denotes the training data set, where \(\tau_{b}=\{(X_{i},g(X_{i}))\left|X_{i}\in\partial\Omega\right\}_{i=1}^{N_{b}}\) is the labeled data set, and \(\tau_{r}=\{X_{i}\in\Omega\}_{i=1}^{N_{r}}\) is the residual data set. \(N_{b}\) and \(N_{r}\) denote the number of boundary sampling points on \(\partial\Omega\) and the number of inner sampling points in \(\Omega\), respectively.
Suppose
\[\bar{\theta}=\arg\min_{\theta}\mathcal{L}(\theta;\Sigma), \tag{2.8}\]
which can be obtained by some optimization methods, and then \(u_{\bar{\theta}}\) is the approximation of the unknown function \(u\).
The partial differential operators, such as \(u_{x}\) and \(u_{xx}\), can be implemented using automatic differentiation (AD). This can be easily realized in the deep learning framework like the PyTorch [18] or Tensorflow [19].
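For instance, a minimal PyTorch sketch of assembling the residual term (2.7) with automatic differentiation might look as follows, assuming for simplicity a constant diffusion coefficient so that \(-\nabla\cdot\kappa\nabla u=-\kappa(u_{xx}+u_{yy})\); the network size, sample count, and source term are arbitrary illustrations.

```python
import torch

# Minimal autograd sketch of the residual loss (2.7) for a toy network
# u_theta(x, y), assuming a constant diffusion coefficient kappa.

net = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1))

xy = torch.rand(128, 2, requires_grad=True)        # interior residual points
u = net(xy)
g = torch.autograd.grad(u, xy, torch.ones_like(u), create_graph=True)[0]
u_x, u_y = g[:, 0:1], g[:, 1:2]
u_xx = torch.autograd.grad(u_x, xy, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
u_yy = torch.autograd.grad(u_y, xy, torch.ones_like(u_y), create_graph=True)[0][:, 1:2]

kappa = 4.0
Q = torch.zeros_like(u)                            # toy source term
residual = -kappa * (u_xx + u_yy) - Q              # -div(kappa grad u) - Q
loss_r = torch.mean(residual ** 2)                 # Eq. (2.7)
print(loss_r.item())
```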
**Remark 2.1**.: _For unsteady diffusion problems_
\[\begin{cases}u_{t}-\nabla\cdot\kappa(t,X)\nabla u=Q\left(t,X\right),t\in(0,T ],&X\in\Omega,\\ u(t,X)=g(t,X),t\in(0,T],&X\in\partial\Omega,\\ u(0,X)=\phi(X),&X\in\Omega,\end{cases} \tag{2.9}\]
_the method to be discussed in this paper is also applicable. At this case, the loss function requires adding a loss term to reflect the degree of approximation of the initial condition._
## 3 An improved PINN method for solving multi-material diffusion problems
In this section, we investigate a deep learning method for solving the multi-material diffusion equations (2.1)-(2.3), and present an improved PINN using interface connection conditions and the domain separation strategy, which is called DS-PINN. At the end of this section, we further improve the performance of the DS-PINN method by introducing the normalization strategy; the resulting method is denoted as nDS-PINN.
It is well known that the standard PINN method, under the assumption that the solution of the equation is sufficiently smooth, uses automatic differentiation techniques to compute the residual loss term. Since the diffusion coefficients of Eq. (2.1) are discontinuous at the material interface, \(u\) is not continuously differentiable and its second-order derivatives do not exist at the interface, which means that one cannot sample residual points at the interface, and thus the equation information at the interface is lost. If this issue is not properly addressed, the prediction obtained from the standard PINN will have a large uncertainty at the material interface, resulting in an unreliable result. The following subsections provide several strategies for dealing with this problem.
### Introducing material interface continuity conditions into the standard PINN
Since the second derivative of the function \(u\) does not exist at the material interface, the residual error of the sampling point at the interface cannot be calculated according to Eq. (2.7). To compensate for this deficiency, we can add new loss terms to the loss function according to the properties that Eq. (2.1) satisfies at the interface, so that the prediction function of the neural network can reasonably reflect the behavior of the solution at the interface.
According to the property of the diffusion equation, the following two conditions should be satisfied at the interface \(\Gamma\):
\[u(X)|_{\Gamma^{-}} =\left.u(X)\right|_{\Gamma^{+}}, \tag{3.1}\] \[-\kappa_{1}(X)\nabla u(X)|_{\Gamma^{-}}\cdot\mathbf{n}_{1} =-\left(\left.-\kappa_{2}(X)\nabla u(X)\right|_{\Gamma^{+}}\cdot \mathbf{n}_{2}\right), \tag{3.2}\]
where \(\left.\cdot\right|_{\Gamma^{-}}\) and \(\left.\cdot\right|_{\Gamma^{+}}\) represent the corresponding function values of approaching any point \(X\) on the interface \(\Gamma\) from \(\Omega_{1}\) and \(\Omega_{2}\), respectively. \(\mathbf{n}_{1}\) and \(\mathbf{n}_{2}\) are the outer normal directions of the corresponding subdomain, as shown in the Figure 2.1.
Eqs. (3.1)-(3.2) are called the continuity conditions, which include the solution continuity and the flux continuity.
Define \(\llbracket\mathcal{F}(X)\rrbracket_{\Gamma}:=\mathcal{F}(X)|_{\Gamma^{+}}- \mathcal{F}(X)|_{\Gamma^{-}}\) to denote the jump of \(\mathcal{F}(X)\) across the material interface. Then the continuity conditions (3.1)-(3.2) can be rewritten as follows:
\[\llbracket u(X)\rrbracket_{\Gamma} =0, \tag{3.3}\] \[\llbracket-\kappa(X)\nabla u(X)\cdot\mathbf{n}\rrbracket_{\Gamma} =0. \tag{3.4}\]
In fact, the continuity conditions of the material interface are the hypothetical conditions for the derivation of Eq. (2.1). However, these two conditions can also be obtained from Eq. (2.1). The solution continuity condition (3.1) is the assumed condition, which is a necessary condition for the governing equation (2.1) to hold. Next, we give the derivation of the flux continuity condition (3.2).
Suppose that \(V\) is an arbitrary control volume containing part of the interface \(\Gamma\) which divides it into two parts \(V_{1}\) and \(V_{2}\), as shown in Figure 3.1.
Integrating Eq. (2.1) over the volume \(V_{1}\), we obtain
\[\int_{V_{1}}\left(-\nabla\cdot\kappa_{1}(X)\nabla u\right)\mathrm{d}V=\int_{V _{1}}Q(X)\mathrm{d}V. \tag{3.5}\]
According to the divergence theorem, Eq. (3.5) can be rewritten as follows:
\[\int_{V_{1}}\left(-\nabla\cdot\kappa_{1}(X)\nabla u\right)\mathrm{d }V=\oint_{\partial V_{1}}\left(-\kappa_{1}(X)\nabla u\right)\cdot\mathbf{n}_{1} \mathrm{d}S\\ =\int_{\overrightarrow{ABC}+\overrightarrow{CA}}\left(-\kappa_{1 }(X)\nabla u\right)\cdot\mathbf{n}_{1}\mathrm{d}S=\int_{V_{1}}Q(X)\mathrm{d}V. \tag{3.6}\]
Analogously, we obtain the similar result for \(V_{2}\)
\[\int_{\overrightarrow{CDA}+\overrightarrow{AC}}\left(-\kappa_{2}(X)\nabla u \right)\cdot\mathbf{n}_{2}\mathrm{d}S=\int_{V_{2}}Q(X)\mathrm{d}V. \tag{3.7}\]
Integrating Eq. (2.1) over the entire control volume \(V\) and using the divergence theorem, we have
\[\int_{\overrightarrow{ABC}+\overrightarrow{CDA}}\left(-\kappa(X)\nabla u \right)\cdot\mathbf{n}\mathrm{d}S=\int_{V}Q(X)\mathrm{d}V. \tag{3.8}\]
By combining Eqs. (3.6), (3.7) and (3.8), we get the flux continuity formula
\[\int_{\overrightarrow{CA}}\left(-\kappa_{1}(X)\nabla u\right)\cdot\mathbf{n}_{1} \mathrm{d}S+\int_{\overrightarrow{AC}}\left(-\kappa_{2}(X)\nabla u\right) \cdot\mathbf{n}_{2}\mathrm{d}S=0. \tag{3.9}\]
Given the arbitrariness of \(V\), we can obtain the flux continuity condition (3.2).
Eq.3.9 is used in many papers constructing discrete schemes for heterogeneous diffusion equations, such as [20, 21, 22].
**Remark 3.1**.: _The equations should satisfy the continuity condition at the interface, so we want to reflect this property of the predicted solution near the interface by adding new loss terms to the loss function. However, these two conditions cannot be directly applied to the training of the PINN. The reason is that the prediction function obtained from a single PINN training is continuously differentiable, i.e., there is only one unique derivative at each position, so the solution continuity condition is naturally satisfied, while the flow continuity condition cannot be achieved._
Figure 3.1: A control volume \(V(V_{1}\cup V_{2})\) containing part of the interface \(\Gamma\).
### Applying domain separation strategy to compute derivatives on both sides of the material interface
To characterize the derivative discontinuity property at the interface, a natural idea is to train this model using two sets of neural networks linked by the interface connection conditions. However, this strategy faces some difficulties: _(1) it requires multiple sets of networks for the multi-material model; (2) it is more difficult to design and implement optimization algorithms; and (3) it generally requires iteration between neural networks and the convergence is difficult to guarantee._
Under the premise of using only one neural network to obtain different derivative values on both sides of the interface, an intuitive idea is to separate the two domains divided by the interface by a certain distance, and the material interface becomes the boundaries of two sub-domains with a certain interval, so that each point on the interface becomes two points belonging to different locations in space, so it is logical that they can have different derivative values.
Figure 3.2 shows the respective domain separation strategies for the two types of material interfaces. It is easy to generalize this strategy to the case of multiple materials.
With regard to the domain separation strategy, the following points should be noted:
* The subdomains cannot overlap because only one neural network is used.
* There are no strict limitations on the distance and direction of the separation. If the difference of diffusion coefficients between the two sides of the interface is large, then we should choose a larger distance. In Sect. 4, the performance is tested with different separation distances \(d\).
* After implementing the separation strategy, the material interface changes from \(\Gamma\) to two spatially separated boundaries \(\Gamma_{1}\) and \(\Gamma_{2}\), on which the interface continuity conditions (3.1),(3.2) are imposed.
* The training points are sampled based on all subdomains after separation, and the sampling points on the material interface should be consistent between the matching subdomains to impose the interface continuity condition; a sampling sketch is given below.
Figure 3.2: Two ways of separating domains.
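The sampling announced in the last bullet can be sketched as follows for a unit square split by a vertical interface; the interface location \(x_{\Gamma}=2/3\), the separation distance \(d=0.1\), and the point counts are illustrative choices.

```python
import numpy as np

# Sketch of the separation strategy for a unit square with a vertical
# interface at x = x_if: the right subdomain is shifted by d, and every
# interface sample gets matched copies on Gamma_1 and Gamma_2.

x_if, d = 2 / 3, 0.1
rng = np.random.default_rng(0)

X1 = rng.uniform([0, 0],        [x_if, 1],  (5000, 2))   # Omega_1, unmoved
X2 = rng.uniform([x_if + d, 0], [1 + d, 1], (5000, 2))   # Omega_2, shifted right by d

y_if = rng.uniform(0, 1, 2000)                               # samples along Gamma
G1 = np.column_stack([np.full_like(y_if, x_if),     y_if])   # Gamma_1 (left side)
G2 = np.column_stack([np.full_like(y_if, x_if + d), y_if])   # Gamma_2 (right side)

# kappa, Q and g at a shifted point X' are evaluated at the original point
# X = X' - (d, 0), cf. Eq. (3.10); Eq. (3.21) undoes the shift in the final
# prediction.
```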
Next, we give a brief analysis of the effectiveness of the separation strategy.
Taking a 2D model as an example, assume that \(u_{1}\) and \(u_{2}\) are functions on the subdomains \(\Omega_{1}\) and \(\Omega_{2}\), respectively, satisfying Eqs. (2.1) and (2.2), and that the diffusion coefficient \(\kappa(x,y)\), the source term \(Q(x,y)\) and the boundary condition \(g(x,y)\) are equal to the values of the corresponding position after moving the computational domain from \(\Omega_{1}\) to \(\Omega_{2}\), as shown in Figure 3.3, i.e.,
\[\kappa(x,y) =\kappa(x+\Delta x,y+\Delta y),\quad(x,y)\in\Omega_{1},(x+\Delta x,y+\Delta y)\in\Omega_{2}, \tag{3.10}\] \[Q(x,y) =Q(x+\Delta x,y+\Delta y),\quad(x,y)\in\Omega_{1},(x+\Delta x,y+ \Delta y)\in\Omega_{2},\] \[g(x,y) =g(x+\Delta x,y+\Delta y),\quad(x,y)\in\partial\Omega_{1},(x+ \Delta x,y+\Delta y)\in\partial\Omega_{2}.\]
Then we have
\[u_{1}(x,y)=u_{2}(x+\Delta x,y+\Delta y),\quad(x,y)\in\Omega_{1}. \tag{3.11}\]
This is an obvious result, and we can also give a simple proof.
Let
\[w(x,y)=u_{1}(x,y)-u_{2}(x+\Delta x,y+\Delta y), \tag{3.12}\]
and then \(w(x,y)\) satisfies
\[\begin{cases}-\nabla\cdot\kappa(x,y)\nabla w(x,y)=0,\quad(x,y)\in\Omega_{1}, \\ w(x,y)|_{(x,y)\in\partial\Omega_{1}}=0.\end{cases} \tag{3.13}\]
According to the extremum principle, we get \(w(x,y)\equiv 0\).
The above analysis shows that after translating the computational domain \(\Omega_{1}\) to a new location \(\Omega_{2}\), if the condition (3.10) is satisfied, the solutions of both domains have the same structure for Eqs. (2.1)- (2.2).
**Remark 3.2**.: _Domain separation is a constructive strategy that makes it possible to express a class of non-smooth functions with a single neural network, fully exploiting the mesh-free advantage of the PINN method. Compared to the
conventional method of using multiple neural networks to solve multi-material diffusion problems, the DS-PINN method using this strategy is not only easy to implement, but also does not require iteration between networks, resulting in relatively high computational efficiency._
### Adding the special term representing the interface connection condition to the loss function.
For multi-material diffusion problems, the solution at the interface is critical, and obtaining a highly accurate numerical solution near the interface is a very challenging task for any kind of numerical method. Since \(u\) has no second-order partial derivatives at the interface, the standard PINN method based on the automatic differentiation technique cannot incorporate the sampled points at the interface as residuals in the training of the neural network, which inevitably leads to inaccurate solutions near the interface. Sect. 3.1 gives continuity conditions that the solution at the interface should satisfy. Introducing this connection condition into PINN can fill the gap of missing information at the interface. Furthermore, by introducing a domain separation strategy, Sect. 3.2 overcomes the problem of two derivative values at one location caused by \(u\) not being continuously differentiable at the interface, which is another hurdle for the standard PINN method to solve the heterogeneous diffusion problem.
Based on the work in the previous two sections, we can easily improve the standard PINN by adding the interface continuity condition to the loss function, so that the interface information is introduced into the neural network. In return, the prediction function we obtain can give very accurate predictions near the interface. For simplicity, we will only discuss the case of a single interface for two materials. However, the case of multiple interfaces for multiple materials can be treated in a similar manner.
By adding the _interface loss term_ in the loss function, the loss function of the DS-PINN is given by
\[\mathcal{L}(\theta;\Sigma)=w_{b}\mathcal{L}_{b}(\theta;\tau_{b})+w_{r} \mathcal{L}_{r}(\theta;\tau_{r})+w_{\Gamma}\mathcal{L}_{\Gamma}(\theta;\tau_{ \Gamma}), \tag{3.14}\]
where
\[\mathcal{L}_{b}(\theta;\tau_{b})= \frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|u_{\theta}(X_{i}^{\prime} )-g(X_{i})\right|^{2}, \tag{3.15}\]
\[\mathcal{L}_{r}(\theta;\tau_{r})= \frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|-\nabla\cdot\kappa(X_{i}) \nabla u(X_{i}^{\prime})-Q\left(X_{i}\right)\right|^{2}, \tag{3.16}\]
\[\mathcal{L}_{\Gamma}(\theta;\tau_{r})= \frac{1}{N_{\Gamma}}\sum_{i=1}^{N_{\Gamma}}\left(\left|u_{\theta} (X_{i}^{(1)})-u_{\theta}(X_{i}^{(2)})\right|^{2}+\right.\] \[\left.\left|-\kappa_{1}(X_{i}^{(1)})\nabla u(X_{i}^{(1)})\cdot \boldsymbol{n}_{1}-\kappa_{2}(X_{i}^{(2)})\nabla u(X_{i}^{(2)})\cdot \boldsymbol{n}_{2}\right|^{2}\right). \tag{3.17}\]
The training data set \(\Sigma\left(\tau_{b},\tau_{r},\tau_{\Gamma}\right)\) is as follows:
\[\tau_{b}=\{(X_{i},g(X_{i}))\,|X_{i}\in\partial\Omega\}_{i=1}^{N_{b}}, \tag{3.18}\] \[\tau_{r}=\{X_{i}\in\Omega\}_{i=1}^{N_{r}},\] (3.19) \[\tau_{\Gamma}=\{X_{i}|X_{i}\in\Gamma\}_{i=1}^{N_{\Gamma}}. \tag{3.20}\]
In Eqs. (3.15) and (3.16), \(X_{i}\) represents a sampling point on the boundary of the original domain \(\partial\Omega\) or in the original domain \(\Omega\), and \(X_{i}^{\prime}\) is the matching point of \(X_{i}\). If the subdomain containing \(X_{i}\) has not moved, then \(X_{i}^{\prime}=X_{i}\); if the subdomain containing \(X_{i}\) has moved by a distance of \(\Delta L\), then \(X_{i}^{\prime}=X_{i}+\Delta L\).
In Eqs. (3.17) and (3.20), \(X_{i}\) represents a sampling point on the material interface \(\Gamma\) in the original domain \(\Omega\), and \(\Gamma\) are referred to as \(\Gamma_{1}\) and \(\Gamma_{2}\) in two neighboring subdomains \(\Omega_{1}\) and \(\Omega_{2}\), respectively. \(X_{i}^{(1)}\) and \(X_{i}^{(2)}\) are two matching points of \(X_{i}\) belonging to \(\Gamma_{1}\) and \(\Gamma_{2}\). If the subdomain containing \(X_{i}^{(k)}\) has not moved, then \(X_{i}^{(k)}=X_{i}\), k=1 or 2; if the subdomain containing \(X_{i}^{(k)}\) has moved by a distance of \(\Delta L\), then \(X_{i}^{(k)}=X_{i}+\Delta L\), k=1 or 2.
Note that, according to Eq. (3.10), we use \(g(X_{i})\), \(\kappa(X_{i})\) and \(Q(X_{i})\) in place of \(g(X_{i}^{\prime})\), \(\kappa(X_{i}^{\prime})\) and \(Q(X_{i}^{\prime})\) in Eqs. (3.15) and (3.16). In addition, if higher accuracy is needed at the material interface, one can increase \(w_{\Gamma}\).
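A hedged PyTorch sketch of the interface loss term (3.17) for a vertical interface (normals \(\boldsymbol{n}_{1}=(1,0)\), \(\boldsymbol{n}_{2}=(-1,0)\)) is given below; the constant coefficients, separation distance, and network size are illustrative assumptions, not the paper's actual setup.

```python
import torch

# Sketch of the interface loss (3.17) for a vertical interface separated by
# d = 0.1: solution continuity plus flux continuity, evaluated with a single
# network on the matched points of Gamma_1 and Gamma_2. With n1 = +x and
# n2 = -x, flux continuity reduces to kappa1 * u_x|_{Gamma_1} = kappa2 * u_x|_{Gamma_2}.

def grad(u, x):
    return torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

def interface_loss(net, X1, X2, kappa1, kappa2):
    X1.requires_grad_(True); X2.requires_grad_(True)
    u1, u2 = net(X1), net(X2)
    u1_x = grad(u1, X1)[:, 0:1]          # x-derivative on Gamma_1
    u2_x = grad(u2, X2)[:, 0:1]          # x-derivative on Gamma_2
    cont = torch.mean((u1 - u2) ** 2)                        # Eq. (3.1)
    flux = torch.mean((kappa1 * u1_x - kappa2 * u2_x) ** 2)  # Eq. (3.2)
    return cont + flux

net = torch.nn.Sequential(torch.nn.Linear(2, 50), torch.nn.Tanh(),
                          torch.nn.Linear(50, 1))
y = torch.rand(100, 1)
G1 = torch.cat([torch.full_like(y, 2 / 3),       y], dim=1)  # Gamma_1 at x = 2/3
G2 = torch.cat([torch.full_like(y, 2 / 3 + 0.1), y], dim=1)  # Gamma_2, shifted
print(interface_loss(net, G1, G2, kappa1=4.0, kappa2=1.0))
```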
So far, we have obtained the loss function for the multi-material diffusion equation, and by optimizing this loss function, we can get the prediction function. The last task is to map the prediction function of the subdomain shifted by a certain distance back to its original position using the following formula
\[u_{\theta}(X)=u_{\theta}(X^{\prime}),\quad X\in\bar{\Omega}. \tag{3.21}\]
Similarly, \(X^{\prime}\) is the matching point of \(X\). Depending on whether the subdomain containing \(X\) is moved, \(X^{\prime}=X\) or \(X^{\prime}=X+\Delta L\).
### Normalizing loss terms to improve the training performance
The loss function of the standard PINN method contains two main terms, the supervised loss term and the residual loss term, whose variables usually have different physical meanings and are of different magnitudes, so that combining them for optimization will generally result in the numerically smaller term not being reasonably optimized, leading to the final prediction deviating from the reference solution. The question of how to balance the different terms in the loss function plays a key role in the PINN method, and some researchers have made important progress [11; 12; 16; 23].
It should be noted that normalizing each loss term in the loss function according to the characteristics of the equation not only facilitates the implementation of the optimization algorithm, but also helps to balance the importance of each loss term, eliminates poor training results due to different orders of magnitude, and improves the computational accuracy of the prediction function. Based on the governing equation (2.1) and its boundary condition (2.2), a strategy for normalizing the supervised term (3.15) and the residual term (3.16) is given below.
Considering that \(g(X)\) and \(Q(X)\) may reflect the magnitude of \(u\) and \(u_{xx}+u_{yy}\), respectively, let the normalization factors \(\zeta_{b}\) and \(\zeta_{r}\) are as follows
\[\zeta_{b} =\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|g(X_{i})\right|^{2}, \tag{3.22}\] \[\zeta_{r} =\frac{1}{N_{r}}\sum_{i=1}^{N_{r}}\left|Q\left(X_{i}\right)\right| ^{2}. \tag{3.23}\]
Next, we define the new loss terms
\[\tilde{\mathcal{L}}_{b}(\theta;\tau_{b}) =\begin{cases}\frac{1}{\zeta_{b}}\mathcal{L}_{b}(\theta;\tau_{b}),&\text{ if }\zeta_{b}\neq 0,\\ \mathcal{L}_{b}(\theta;\tau_{b}),&\text{ if }\zeta_{b}=0.\end{cases} \tag{3.24}\] \[\tilde{\mathcal{L}}_{r}(\theta;\tau_{r}) =\begin{cases}\frac{1}{\zeta_{r}}\mathcal{L}_{r}(\theta;\tau_{r} ),&\text{ if }\zeta_{r}\neq 0,\\ \mathcal{L}_{r}(\theta;\tau_{r}),&\text{ if }\zeta_{r}=0.\end{cases} \tag{3.25}\]
Note that, although the normalization factors cannot normalize the value of each loss term to \([0,1]\), it is able to largely eliminate the effect of the magnitude. Now, the loss function can be rewritten as follow
\[\mathcal{L}(\theta;\Sigma)=w_{b}\tilde{\mathcal{L}}_{b}(\theta;\tau_{b})+w_{r }\tilde{\mathcal{L}}_{r}(\theta;\tau_{r})+w_{\Gamma}\mathcal{L}_{\Gamma}( \theta;\tau_{\Gamma}), \tag{3.26}\]
and we call the PINN method using this loss function the _normalized DS-PINN_, denoted by _nDS-PINN_.
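A small sketch of the normalization factors (3.22)-(3.25) follows, with dummy data standing in for the boundary values \(g(X_{i})\) and source values \(Q(X_{i})\).

```python
import numpy as np

# Sketch of Eqs. (3.22)-(3.25): the mean-square magnitudes of g and Q rescale
# the supervised and residual loss terms; the data below are dummies.

rng = np.random.default_rng(0)
g_vals = rng.random(500)                        # g(X_i) at boundary points
Q_vals = 20 * np.pi ** 2 * rng.random(5000)     # Q(X_i) at residual points

zeta_b = np.mean(np.abs(g_vals) ** 2)           # Eq. (3.22)
zeta_r = np.mean(np.abs(Q_vals) ** 2)           # Eq. (3.23)

def normalize(loss, zeta):
    return loss / zeta if zeta != 0 else loss   # Eqs. (3.24)-(3.25)

print(normalize(1.7, zeta_b), normalize(3.5e2, zeta_r))
```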
**Remark 3.3**.: _The interface loss term \(\mathcal{L}_{\Gamma}(\theta;\tau_{\Gamma})\) isn't normalized because it reflects the continuity conditions (3.3)-(3.4). This is similar to the case where \(\zeta_{b}=0\) or \(\zeta_{r}=0\) in Eq. (3.24) or Eq. (3.25). The interface connection condition can be generalized to the general form_
\[\llbracket u(X)\rrbracket_{\Gamma} =\Phi(X), \tag{3.27}\] \[\llbracket-\kappa(X)\nabla u(X)\cdot\boldsymbol{n}\rrbracket_{\Gamma} =\Psi(X). \tag{3.28}\]
_The continuity conditions (3.3) and (3.4) are considered as a special case where \(\Phi(X)=\Psi(X)=0\). For a class of interface problems with jump conditions, i.e., for the case where \(\Phi(X)\neq 0\) and \(\Psi(X)\neq 0\), we can use \(\Phi(X)\) and \(\Psi(X)\) to normalize \(\mathcal{L}_{\Gamma}(\theta;\tau_{\Gamma})\) in the same way as Eqs. (3.22)-(3.25)._
## 4 Numerical experiments
In this section, we give several numerical experiments to demonstrate the performance of the proposed method. In Sect. 4.1, we test the performance of the new methods by solving a typical two-material diffusion equation and show the results for different separation distances. In Sect. 4.2, we solve a multi-material diffusion equation with 4 subdomains. In Sect. 4.3, we present a diffusion model with a special computational domain, which contains a different material inside a circular subdomain at its center. In Sect. 4.4, we test the ability of the new method for diffusion problems with jump conditions at the interface.
We use the deep learning framework TensorFlow (version 1.5) to develop the code. The datatype of all variables is _float32_. For all the numerical experiments, we use the Adam optimizer to run 2000 iterations and then switch to the L-BFGS optimizer until convergence. All parameters and termination criteria of the L-BFGS optimizer are set as proposed in Ref. [24]. Before training, the parameters of the neural network are randomly initialized using the Xavier scheme [25]. All numerical experiments use a deep neural network with 5 hidden layers of 50 neurons each.
The accuracy of the trained model is evaluated using the relative \(\mathbb{L}_{2}\) error, which is defined as follows:
\[\left\|e\right\|_{\mathbb{L}_{2}}=\frac{\sqrt{\sum_{i=1}^{N}\left|u_{\theta}( X_{i})-u^{*}(X_{i})\right|^{2}}}{\sqrt{\sum_{i=1}^{N}\left|u^{*}(X_{i})\right|^{2 }}}, \tag{4.1}\]
where we suppose \(\sum_{i=1}^{N}\left|u^{*}(X_{i})\right|^{2}\neq 0\), \(u^{*}(X_{i})\) is the exact solution or the reference solution, and \(u_{\theta}(X_{i})\) is the neural network prediction for \(N=10000\) test points which are uniformly distributed over the computational domain. For the number of training points in this section, we set \(N_{f}=5000\) for each subdomain, \(N_{b}=500\) for each boundary, and \(N_{\Gamma}=2000\) for each interface.
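For reference, the error measure of Eq. (4.1) amounts to the following short NumPy function; this is a sketch and the function name is ours.

```python
import numpy as np

def relative_l2_error(u_pred, u_exact):
    # Relative L2 error of Eq. (4.1); assumes u_exact is not identically
    # zero on the test points.
    return np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)
```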
**Remark 4.1**.: _Different hyperparameters, such as the model architecture, the size of the training data, the weights, the optimizer, etc., can cause the PINN method to produce different computational results. To test the robustness and effectiveness of our method, we use the same set of parameters for all numerical examples in this paper. We believe that finer tuning of the parameters could yield even better results._
### Typical two-material diffusion problems
Consider the following diffusion problem with two materials in the computational domain:
\[\begin{cases}-\nabla\cdot(\kappa(x,y)\nabla u)=Q(x,y),&(x,y)\in\Omega=(0,1) \times(0,1),\\ u(x,y)=0,&(x,y)\in\partial\Omega,\end{cases} \tag{4.2}\]
where
\[\kappa(x,y)=\begin{cases}4,\ (x,y)\in\left(0,\frac{2}{3}\right]\times(0,1),\\ 1,\ (x,y)\in\left(\frac{2}{3},1\right)\times(0,1),\end{cases} \tag{4.3}\]
and
\[Q(x,y)=\begin{cases}20\pi^{2}\sin\pi x\sin 2\pi y,\ (x,y)\in\left(0,\frac{2}{3} \right]\times(0,1),\\ 20\pi^{2}\sin 4\pi x\sin 2\pi y,\ (x,y)\in\left(\frac{2}{3},1\right)\times(0,1). \end{cases} \tag{4.4}\]
The exact solution of this equation is
\[u(x,y)=\begin{cases}\sin\pi x\sin 2\pi y,\ (x,y)\in\left(0,\frac{2}{3}\right] \times(0,1),\\ \sin 4\pi x\sin 2\pi y,\ (x,y)\in\left(\frac{2}{3},1\right]\times(0,1).\end{cases} \tag{4.5}\]
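One can check symbolically that the source term (4.4) is indeed manufactured from the exact solution (4.5) and the coefficients (4.3); the following short sympy script, our own illustration, performs this verification.

```python
import sympy as sp

x, y = sp.symbols('x y')

# (kappa, u) on the two subdomains, from Eqs. (4.3) and (4.5).
pieces = [(4, sp.sin(sp.pi * x) * sp.sin(2 * sp.pi * y)),      # (0, 2/3] x (0, 1)
          (1, sp.sin(4 * sp.pi * x) * sp.sin(2 * sp.pi * y))]  # (2/3, 1) x (0, 1)

for kappa, u in pieces:
    # Q = -div(kappa * grad u); kappa is constant on each subdomain.
    q = -sp.diff(kappa * sp.diff(u, x), x) - sp.diff(kappa * sp.diff(u, y), y)
    print(sp.simplify(q))
# Prints 20*pi**2*sin(pi*x)*sin(2*pi*y) and 20*pi**2*sin(4*pi*x)*sin(2*pi*y),
# matching Eq. (4.4).
```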
Figure 4.1 shows the exact solution of this example, expressed as z-coordinate values and colors, respectively. The solution \(u(x,y)\) is continuous at the interface \(x=\frac{2}{3}\), but its partial derivative \(u_{x}\) is discontinuous there.
Figure 4.1: The exact solution of Sect. 4.1.

For this model, we set the domain separation distance \(d=0.1\). Table 4.1 shows the relative \(\mathbb{L}_{2}\) error of the different methods, and Figure 4.2 shows the prediction and the point-wise error of each PINN method. It can be seen that the standard PINN method gives a smooth prediction that, due to the lack of interface information, is of low accuracy. The DS-PINN method, in contrast, provides a highly accurate prediction, and by normalizing the residual terms, the nDS-PINN method further improves the prediction accuracy by more than an order of magnitude.

Figure 4.2: Predictions and point-wise errors of three PINN methods for Example 4.1.
How does the separation distance between subdomains affect the solution accuracy? To investigate this question, we test this computational model with different separation distances; Figure 4.3 shows the effect of different \(d\) on the prediction accuracy.
As can be seen from Figure 4.3, if \(d\) is too small (\(d<10^{-2}\)), the error is very large. The reason is that, for a small \(d\), when the diffusion coefficients on the two sides of the material interface differ greatly, the problem is equivalent to one with a large variation of the derivative over a narrow interval, which is computationally very difficult and therefore yields inaccurate results without special tricks. Conversely, if \(d\) is too large (\(d>10\)), the range the neural network must express increases significantly; moreover, no sampling points are assigned to the gaps between the subdomains, so the gaps do not participate in the training of the neural network and constitute an invalid computational domain. Obviously, if the proportion of invalid regions is too large, the accuracy of the prediction function will inevitably be low.
**Remark 4.2**.: _Regarding the standard PINN method, although one can obtain higher accuracy by changing some hyper-parameters, its prediction accuracy near the material interface does not change significantly. This remark also applies to the discussions of the standard PINN method in the later examples._
Figure 4.3: \(\left\|e\right\|_{\mathbb{L}_{2}}\) errors of the two new methods with different separation distances (obtained from 5 independently repeated experiments).

**Remark 4.3**.: _Choosing an appropriate separation distance is beneficial for improving the computational accuracy of the model. We believe that \(d\) should be chosen to be proportional to \(|\kappa_{1}-\kappa_{2}|\), while keeping the percentage of added invalid computational area as small as possible. Here \(\kappa_{1}\) and \(\kappa_{2}\) represent the diffusion coefficients on the two sides of the interface._
### Multi-material diffusion problems
In this subsection, we examine a multi-material diffusion example with separation distance \(d=0.1\). The governing equation is the same as Eq. (4.2). Suppose that the computational domain \(\Omega\) consists of 4 subdomains with different diffusion coefficients,
\[\kappa(x,y)=\begin{cases}4,&(x,y)\in(-1,0]\times(-1,0],\\ 1,&(x,y)\in(0,1)\times(-1,0],\\ 2,&(x,y)\in(0,1)\times(0,1),\\ 1,&(x,y)\in(-1,0]\times(0,1).\end{cases} \tag{4.6}\]
We also assume that this problem has the exact solution as follows:
\[u(x,y)=\begin{cases}\sin\pi x\sin\pi y,&(x,y)\in[-1,0]\times[-1,0],\\ 4\sin\pi x\sin\pi y,&(x,y)\in(0,1]\times[-1,0],\\ 2\sin\pi x\sin\pi y,&(x,y)\in(0,1]\times(0,1],\\ 4\sin\pi x\sin\pi y,&(x,y)\in[-1,0]\times(0,1].\end{cases} \tag{4.7}\]
To test the performance of the new methods, we solve this model using DS-PINN and nDS-PINN. The source term \(Q(x,y)\) and the boundary conditions are derived from the exact solution (4.7).
Figure 4.4: Schematic diagram of domain separation and training point layout of Sect. 4.2.

Figure 4.4 shows a schematic diagram of the sampling of training points in the different subdomains. The training points consist of three types: residual points located inside the subdomains, supervised points located at the boundaries, and interface points located at the material interfaces.
This example is somewhat complicated. If the material interfaces were handled conventionally with one network per subdomain, this problem would require four neural networks, which would not only be difficult to implement but also computationally inefficient. With our methods, this problem can be easily solved with only a single neural network.
Table 4.2 shows the relative \(\mathbb{L}_{2}\) error of the different methods, and Figure 4.5 shows the prediction and the point-wise error of each PINN method. Similar to the results of the previous example, the standard PINN gives a poor prediction for this model, while the DS-PINN gives a satisfactory prediction and the nDS-PINN gives an accurate one.
### The diffusion problem with the heterogeneous material located inside the computational domain
In practical applications such as heat transfer and oil reservoir simulation, it is common for a material to be completely enveloped by another material. The purpose of this section is to test the ability of our method to handle this case.
Table 4.2: \(\left\Vert e\right\Vert_{\mathbb{L}_{2}}\) errors of three PINN methods for Sect. 4.2.

| Method | \(\left\Vert e\right\Vert_{\mathbb{L}_{2}}\) |
| --- | --- |
| Standard PINN | \(8.96\pm 0.03\times 10^{-1}\) |
| DS-PINN (this work) | \(1.5\pm 0.2\times 10^{-3}\) |
| nDS-PINN (this work) | \(5.2\pm 1.6\times 10^{-4}\) |
Figure 4.5: Predictions and point-wise errors of three PINN methods for Sect. 4.2
Consider the problem as follows:
\[\begin{cases}-\nabla\cdot(\kappa(x,y)\nabla u)=Q(x,y),&(x,y)\in\Omega=(-2,2)\times( -2,2),\\ u(x,\pm 2)=\sin(\frac{\pi}{4}(x^{2}+3)),&x\in[-2,2],\\ u(\pm 2,y)=\sin(\frac{\pi}{4}(y^{2}+3)),&y\in[-2,2],\end{cases} \tag{4.8}\]
where
\[\kappa(x,y)=\begin{cases}1,&(x,y)\in\Omega_{1}=\{(x,y)|x^{2}+y^{2}<1\},\\ 4,&(x,y)\in\Omega\backslash\Omega_{1},\end{cases}\]
and the source term
\[Q(x,y)=\begin{cases}-4\pi\cos\left(\pi(x^{2}+y^{2}-1)\right)+4\pi^{2}(x^{2}+y^{2})\sin\left(\pi(x^{2}+y^{2}-1)\right),&(x,y)\in\Omega_{1},\\ -4\pi\cos\left(\frac{\pi}{4}(x^{2}+y^{2}-1)\right)+\pi^{2}(x^{2}+y^{2})\sin\left(\frac{\pi}{4}(x^{2}+y^{2}-1)\right),&(x,y)\in\Omega\backslash\Omega_{1}.\end{cases} \tag{4.9}\]

The exact solution of this problem is

\[u(x,y)=\begin{cases}\sin\left(\pi(x^{2}+y^{2}-1)\right),&(x,y)\in\Omega_{1},\\ \sin\left(\frac{\pi}{4}(x^{2}+y^{2}-1)\right),&(x,y)\in\Omega\backslash\Omega_{1},\end{cases}\]

which is continuous and has a continuous normal flux across the circular interface \(x^{2}+y^{2}=1\).

Figure 4.6: The exact solution of Sect. 4.3 and the two predictions from the standard PINN and nDS-PINN.

Figure 4.7: Point-wise errors of three PINN methods for Sect. 4.3.

For this model, we set the separation distance \(d=3.5\). Figure 4.6 shows the exact solution together with the predictions of the standard PINN and the nDS-PINN, and Figure 4.7 shows the point-wise errors of the three PINN methods. The standard PINN again fails near the circular interface, while the nDS-PINN method achieves the highest accuracy, both in terms of the relative \(\mathbb{L}_{2}\) error and the point-wise error, which is one order of magnitude smaller than that of the DS-PINN method, showing excellent performance. Unlike the previous two examples, the boundary condition of this example is not zero, so not only the residual term is normalized, but also the supervised term is normalized.
In Figure 4.8, the left image shows the schematic diagram of domain separation together with the distribution of training points; the right image shows the DS-PINN prediction for the whole extended domain after implementing the domain separation strategy. This is a very interesting picture: the predictions on \(\Omega_{1}\) and \(\Omega\backslash\Omega_{1}\) match well with the exact solution at the corresponding locations.

Figure 4.8: Left: Schematic diagram of domain separation (\(d=3.5\)) and training point layout of Sect. 4.3; Right: The prediction on the extended computational domain.
### The diffusion problems with jump conditions at the interface
In this paper we are mainly concerned with a class of multi-material diffusion problems formulated by Eqs. (2.1)-(2.3), for which the continuity conditions (3.3) and (3.4) should be satisfied at the material interface.
However, there is a special class of heterogeneous diffusion problems, such as the heat conduction problem with a thin insulating layer or with a phase change at the material interface, and the percolation problem with a filter membrane, which also receive much attention. The solutions and fluxes of these problems are discontinuous at the material interface and they satisfy some jump conditions. Such problems, which are also studied in Refs. [16; 26], can be formulated by the following equations:
\[\begin{cases}-\nabla\cdot(\kappa(x,y)\nabla u(x,y))=Q(x,y),&(x,y)\in\ \Omega=(-1,1)\times(-1,1),\\ \llbracket u(x,y)\rrbracket_{\Gamma}=\Phi(x,y),&(x,y)\in\ \Gamma,\\ \llbracket\kappa(x,y)\nabla u(x,y)\cdot\boldsymbol{n}\rrbracket_{\Gamma}= \Psi(x,y),&(x,y)\in\ \Gamma,\\ u(x,\pm 1)=\ln(1+x^{2}),&x\in[-1,1],\\ u(\pm 1,y)=\ln(1+y^{2}),&y\in[-1,1],\end{cases} \tag{4.10}\]
where the coefficient \(\kappa(x,y)\) and the source term \(Q(x,y)\) are as follows:
\[\kappa(x,y)=\begin{cases}\cos(x+y)+2,&(x,y)\in\Omega_{1}=\{(x,y)|x^{2 }+y^{2}<0.5^{2}\},\\ \sin(x+y)+2,&(x,y)\in\Omega\backslash\Omega_{1},\end{cases} \tag{4.11}\] \[Q(x,y)=\begin{cases}4(\cos(x+y)+1)\sin(x+y),&(x,y)\in\Omega_{1}, \\ -2\cos(x+y)\frac{x+y}{x^{2}+y^{2}},&(x,y)\in\Omega\backslash\Omega_{1}.\end{cases} \tag{4.12}\]
In Eq. (4.10), the interface \(\Gamma\) is a circle with a radius of \(0.5\) and centered at \((0,0)\). Note that \(\llbracket\mu\rrbracket_{\Gamma}:=\mu|_{\Gamma^{+}}-\mu|_{\Gamma^{-}}\) denotes the jump of \(\mu\) across the interface. \(\Phi(x,y)\) and \(\Psi(x,y)\) can be derived from the exact solution below.
The exact solution of this case is
\[u(x,y)=\begin{cases}\sin(x+y),&(x,y)\in\Omega_{1},\\ \ln(x^{2}+y^{2}),&(x,y)\in\Omega\backslash\Omega_{1}.\end{cases} \tag{4.13}\]
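Since \(\Phi(x,y)\) and \(\Psi(x,y)\) are derived from the exact solution, they can be computed symbolically. The following sympy sketch, our own illustration, does so, assuming \(\Gamma^{+}\) denotes the side exterior to \(\Omega_{1}\) and \(\boldsymbol{n}\) the outward unit normal of \(\Omega_{1}\).

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

u_in, k_in = sp.sin(x + y), sp.cos(x + y) + 2          # inside the circle
u_out, k_out = sp.log(x**2 + y**2), sp.sin(x + y) + 2  # outside the circle

def normal_flux(u, k):
    # kappa * grad(u) . n with n = (x, y)/r, the outward unit normal of Omega_1.
    return k * (sp.diff(u, x) * x + sp.diff(u, y) * y) / sp.sqrt(x**2 + y**2)

# Parameterize the interface x^2 + y^2 = 0.5^2.
on_gamma = {x: sp.cos(t) / 2, y: sp.sin(t) / 2}
phi = sp.simplify((u_out - u_in).subs(on_gamma))              # [[u]] on Gamma
psi = sp.simplify((normal_flux(u_out, k_out)
                   - normal_flux(u_in, k_in)).subs(on_gamma)) # [[k grad u . n]]
print(phi)
print(psi)
```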
For this model, our methods are fully applicable, with only a slight modification of the loss term \(\mathcal{L}_{\Gamma}(\theta;\tau_{\Gamma})\): the continuity conditions are replaced by the jump conditions. The computational results of the different PINN methods are shown in Figure 4.9 and Table 4.4. It can be seen that the standard PINN method is powerless for such a model, while our methods DS-PINN and nDS-PINN (with separation distance \(d=2\)) solve it accurately; in particular, the nDS-PINN method gives extremely accurate computational results.
It should be emphasized that for the nDS-PINN method, since \(\Phi(x,y)\) and \(\Psi(x,y)\) in the jump condition are known functions, we used them to normalize the interface loss term \(\mathcal{L}_{\Gamma}(\theta;\tau_{\Gamma})\). The result of our nDS-PINN method is consistent with that of the INN method, which is taken from Ref. [16].
Figure 4.9: Predictions and point-wise errors of three PINN methods for Sect. 4.4.
## 5 Conclusions
For a class of multi-material diffusion problems, this paper first analyzed the reasons why the standard PINN cannot be applied directly. It then derived two continuity conditions that must be satisfied at the material interface; using them effectively fills in the missing information at the interface. Further, we designed a domain separation strategy to overcome the problem that the solution function cannot be expressed by a single neural network because its derivatives are discontinuous at the interface. By combining these two ideas, we improved the standard PINN by adding special terms to the loss function so that the interface conditions are accurately represented in a single neural network; the resulting prediction function fully reflects the characteristics of the solution at the interface and gives very accurate predictions near it. In addition, we designed a problem-adapted normalization method for the loss terms, which further and significantly improves the accuracy of the prediction. Various numerical experiments verify the effectiveness of our method. The new method resolves the incompatibility of the standard PINN with the multi-material diffusion model. We believe that this work provides a novel idea for PINN solutions of partial differential equations with non-smooth solutions, and it is a useful development of the standard PINN.
Note that the methods in this paper are only for linear multi-material diffusion equations. It is our future work to study PINN methods for solving nonlinear multi-material diffusion problems.
## Code availability
The code of this work is publicly available online via [https://doi.org/10.5281/zenodo.7927544](https://doi.org/10.5281/zenodo.7927544).
## Acknowledgements
The work is supported by the National Science Foundation of China under Grant No.12271055, the Foundation of CAEP (CX20210044), the Natural Science Foundation of Shandong Province No.ZR2021MA092, and the Foundation of Computational Physics Laboratory. |
2304.13532 | SCV-GNN: Sparse Compressed Vector-based Graph Neural Network Aggregation | Graph neural networks (GNNs) have emerged as a powerful tool to process
graph-based data in fields like communication networks, molecular interactions,
chemistry, social networks, and neuroscience. GNNs are characterized by the
ultra-sparse nature of their adjacency matrix that necessitates the development
of dedicated hardware beyond general-purpose sparse matrix multipliers. While
there has been extensive research on designing dedicated hardware accelerators
for GNNs, few have extensively explored the impact of the sparse storage format
on the efficiency of the GNN accelerators. This paper proposes SCV-GNN with the
novel sparse compressed vectors (SCV) format optimized for the aggregation
operation. We use Z-Morton ordering to derive a data-locality-based computation
ordering and partitioning scheme. The paper also presents how the proposed
SCV-GNN is scalable on a vector processing system. Experimental results over
various datasets show that the proposed method achieves a geometric mean
speedup of $7.96\times$ and $7.04\times$ over CSC and CSR aggregation
operations, respectively. The proposed method also reduces the memory traffic
by a factor of $3.29\times$ and $4.37\times$ over compressed sparse column
(CSC) and compressed sparse row (CSR), respectively. Thus, the proposed novel
aggregation format reduces the latency and memory access for GNN inference. | Nanda K. Unnikrishnan, Joe Gould, Keshab K. Parhi | 2023-04-26T13:07:42Z | http://arxiv.org/abs/2304.13532v1 | # SCV-GNN: Sparse Compressed Vector-based Graph Neural Network Aggregation
###### Abstract
Graph neural networks (GNNs) have emerged as a powerful tool to process graph-based data in fields like communication networks, molecular interactions, chemistry, social networks, and neuroscience. GNNs are characterized by the ultra-sparse nature of their adjacency matrix that necessitates the development of dedicated hardware beyond general-purpose sparse matrix multipliers. While there has been extensive research on designing dedicated hardware accelerators for GNNs, few have extensively explored the impact of the sparse storage format on the efficiency of the GNN accelerators. This paper proposes SCV-GNN with the novel sparse compressed vectors (SCV) format optimized for the aggregation operation. We use Z-Morton ordering to derive a data-locality-based computation ordering and partitioning scheme. The paper also presents how the proposed SCV-GNN is scalable on a vector processing system. Experimental results over various datasets show that the proposed method achieves a geometric mean speedup of \(7.96\times\) and \(7.04\times\) over CSC and CSR aggregation operations, respectively. The proposed method also reduces the memory traffic by a factor of \(3.29\times\) and \(4.37\times\) over compressed sparse column (CSC) and compressed sparse row (CSR), respectively. Thus, the proposed novel aggregation format reduces the latency and memory access for GNN inference.
Neural Network Inference, Accelerator Architectures, Graph neural networks, Aggregation.
## I Introduction
Deep neural networks (DNNs) are brain-inspired models that have permeated everyday facets of our lives [1]. These models have shown significant promise in the domains of image recognition [2, 3, 4, 5], speech [6], language [7, 8], and medical diagnosis [9]. Recently, there has been keen interest in applications where the data is highly structured in the form of graphs [10, 11, 12, 13, 14], using graph neural networks (GNNs). These have wide-ranging applications, from the performance of communication networks [15] and molecular interactions in chemistry [16] to human relations in social media networks [17] and brain function and disease analysis in neuroscience [18]. GNNs have shown significant promise as they are able to exploit the dependencies encoded in the graph structure across multiple layers, allowing for an effective relational inductive bias in the neural network design.
There are several challenges introduced by graph computing. First, as graph sizes continue to grow exponentially, storing the data within the local memory becomes prohibitive. This necessitates further research into efficient methods to perform inference to enable further adoption. Secondly, GNN computations are highly memory- and communication-intensive, requiring irregular memory access patterns and leading to high memory latency. Lastly, GNN adjacency matrices have a high degree of nonuniform sparsity (\(\geq 99.9\%\)), where most nodes contain very few edges and a few nodes contain the majority of edges. This results in poor data locality and heavy imbalances in the processing element workloads. This makes GNN workloads unsuited for general-purpose processors [19, 20, 21], layer pipelining [22, 23, 24], DNN accelerators [25, 26, 27, 28], and sparse matrix multiplication accelerators [29, 30, 31, 32]. These unique challenges of GNNs have led to a plethora of solutions in the software [33, 34], accelerator [35, 36, 37, 38, 39, 40], and HW/SW co-design [41] spaces. Most hardware solutions have largely focused on helping sparse accelerators mitigate the above challenges [42] rather than tailoring the solution to the characteristics of GNNs. This is inadequate to handle the irregular patterns of the aggregation operation. To mitigate the load balancing issue, GNN accelerators often employ some form of preprocessing: clever tiling strategies [43, 44, 45], reordering of the nodes [46, 39, 47], or feature reuse [48, 39]. While these preprocessing techniques can significantly help with workload balancing, they are impractical in real-time applications where each input graph is unique and the preprocessing step is a recurring cost. The above hardware solutions also mitigate the load balancing issue by including complex queues, network-on-chips, and accumulators to distribute the workloads across PEs and collect the results. However, these changes bring significant overhead to the accelerator design, limiting its applications and generalizability.
Given these challenges, we propose a novel software-hardware co-design solution, _sparse compressed vectors_ (SCV), that optimizes the sparse format for maximizing hardware efficiency, and we develop an optimized hardware platform to exploit the proposed new format. The proposed format uses fixed-sized column vectors stored in a row-major format. Storing the values within a block in the order opposite to that of the blocks themselves improves memory efficiency by balancing the input and output matrix priorities. From an architectural perspective, SCV has two improvements over standard sparse formats. First, the column-based vectors maximize the reuse of the input matrix for all the non-zeros in the array while allowing hazard-free parallelism. Furthermore, as the format implicitly stores the locations of non-zero columns, it allows for efficient prefetching of the input matrix. Second, the row-major block processing order improves output matrix performance by accessing partial sums multiple times before evicting. The proposed block format allows exploiting existing cache blocking and data-locality strategies like Morton-Z-ordering [49] (SCV-Z) to enhance memory efficiency further. The proposed data-locality-based ordering enables improved scalability with multiple processors through efficient partitioning. Given the new processing format, we design a queue-based general-purpose vector processor. The proposed architecture demonstrates SCV processing while also functioning as a general-purpose vector processor.
The main contributions of the paper are as follows.
* We propose SCV, a novel format, that significantly reduces the random access patterns during aggregation.
* We develop a novel processing order that prioritizes computation parallelism without the need for complex preprocessing or introducing workload imbalances.
* We introduce a blocking strategy that allows exploiting data locality strategies in the memory hierarchy, like Z-order, and improves the scalability of the design.
* We map the proposed format to a generalized queue-based vector processor that shows the simplicity in supporting the proposed format.
The remainder of the paper is organized as follows. Section II focuses on the key GNN equations and computations and the benefits and limitations of existing sparse formats. Section III describes the proposed SCV format and how it can be used for processing GNNs. Section IV describes a general vector processor to support the proposed format. Section V evaluates the proposed methodology. Section VI describes the related work in the field. Finally, in Section VII, we summarize the paper's main conclusions.
## II Graph Neural Networks and Sparse Formats
### _Graph Neural Networks_
Traditional neural networks like convolutional neural networks (CNNs) are designed to work with spatial or temporal data like images, speech, and videos [50, 1]. GNNs are a generalization of these neural networks to work on graph data structures that aim to collect features of each node from its K-hop neighbors. The success GNNs have had has led to the rise of various GNN architectures [35, 36, 37, 38, 46, 47, 51]. To effectively describe GNNs, we employ the general message passing paradigm [33, 16] as defined below.
Consider \(h_{v}\in\mathcal{R}^{d_{t}}\), the feature vector of node \(v\), and \(w_{e}\in\mathcal{R}^{d_{e}}\), the feature vector of an edge \(e:u\to v\). The set of all feature vectors at layer \(t\) is denoted \(\mathbf{H}^{(t)}\). The message passing paradigm can then be written as follows [33]:
\[Edges:m_{e}^{(t+1)} =\phi(h_{v}^{(t)},h_{u}^{(t)},w_{e}^{(t)}),(u,v,e)\in\mathcal{E} \tag{1}\] \[Nodes:h_{v}^{(t+1)} =\psi(h_{v}^{(t)},\rho(\{m_{e}^{(t+1)}:(u,v,e)\in\mathcal{E}\}))\]
where \(\mathcal{E}\) is the set of neighbor edges of node \(v\), \(\phi\) is the message function that combines incident edge and node features (_combination function_), \(\psi\) is an update function, and \(\rho\) is the aggregating reduction function (_aggregation function_).
While node and edge-based definitions are beneficial, we can better understand the mapping of the above operations to hardware by defining the operations in terms of standard matrix/vector operations as shown below:
\[\mathbf{Z}^{(t)} =\mathbf{H}^{(t)}\mathbf{W}^{(t)} \tag{2}\] \[\mathbf{H}^{(t+1)} =\sigma(\mathbf{\hat{A}}\mathbf{Z}^{(t)}) \tag{3}\]
where \(\mathbf{Z}^{(t)}\) is the combined feature matrix, \(\mathbf{\hat{A}}\) is the weighted adjacency matrix for layer \(t\), and \(\sigma\) is a nonlinear operation that represents an activation function or pooling. Eq. (2) is a matrix version of the edge operation where the output \(\mathbf{Z}^{(t)}\) maps to \(m_{e}^{(t+1)}\). Eq. (3) is the matrix version of \(h_{v}^{(t+1)}\), where the messages from adjacent nodes are aggregated. \(\mathbf{\hat{A}}\) for special graph networks such as GCN [10], GraphSAGE [13], GIN [12], and GAT [11] is described in these references, respectively.
Eq. (2) represents the combination operation between the previous layer's output and the weight matrix, and Eq. (3) represents the weighted aggregation step. Thus, defining GNNs in this form allows for the targeted development of accelerators for the combination and aggregation step.
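As an illustration of Eqs. (2)-(3), a single message-passing layer can be written in a few lines using SciPy's sparse matrices. This is a sketch only; the ReLU standing in for the generic nonlinearity \(\sigma\) and the toy graph are our own choices.

```python
import numpy as np
import scipy.sparse as sparse

def gnn_layer(A_hat, H, W):
    # One message-passing layer in matrix form, Eqs. (2)-(3):
    # combination Z = H W, then aggregation H' = sigma(A_hat Z).
    Z = H @ W                          # combination, Eq. (2)
    return np.maximum(A_hat @ Z, 0.0)  # aggregation with ReLU as sigma, Eq. (3)

# Toy usage: a 4-node path graph, 3 input features, 2 output features.
A_hat = sparse.csr_matrix(np.array([[0, 1, 0, 0],
                                    [1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0]], dtype=float))
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gnn_layer(A_hat, H, W).shape)  # (4, 2)
```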
### _Sparse Formats_
The adjacency matrices used for aggregation are often ultra-sparse, with sparsities \(\geq 99\%\). Additionally, the number of nodes in the graph is large, on the order of thousands to millions. The simplest way to store and process the sparse matrices is in the coordinate (COO) format, where we store each non-zero as a 3-element tuple of row index (row_id), column index (col_id), and its value. While the COO format is simple, it is not very efficient for storing or processing very large or ultra-sparse matrices. This warrants an exploration of sparse formats suited for aggregation. The aggregation step processes the output of the combination step, Eq. (2). We refer to the output of the combination step as the combined feature matrix, \(\mathbf{Z}\).
From Eq. (3), we can analyze the memory access patterns of the aggregation step. Processing the elements in a row of the adjacency matrix maps to loading the corresponding row of the output matrix. Similarly, processing the elements in a column of the adjacency matrix maps to loading a single row of the \(\mathbf{Z}\) matrix. This can be used to analyze the effectiveness of the proposed sparse formats. We describe four baseline formats: compressed sparse column (CSC), compressed sparse row (CSR), block compressed sparse row (BCSR), and multipass (MP).
#### II-B1 Compressed Sparse Column (CSC)
Fig. 1(a) shows the CSC representation of a sample sparse \(4\times 4\) matrix. The non-zeros are stored in an array in column-major format, the _values_ array. The corresponding row values for each non-zero are stored in the _row id_ array. The _col ptr_ array stores the starting location for each column in the values array with the last entry pointing to the end of the values array. Each color represents one column of the matrix. Fig. 2(a) shows the computation order for a CSC matrix multiplication operation. The operation iterates through each column of the adjacency matrix, loading all the non-zero elements. Each column also loads a single row of the \(\mathbf{Z}\) matrix, and each non-zero loads the
corresponding row of the output matrix's partial sums (\(PS\)). From an architectural perspective, CSC increases the reuse of the combined feature matrix, \(\mathbf{Z}\), ensuring it is utilized before moving to the next row. However, this comes with irregular access to the \(PS\) matrix. This is highlighted by the span of the \(\mathbf{Z}\) and \(PS\) shown in the top left of Fig. 2(a).
#### II-B2 Compressed Sparse Row (CSR)
Fig. 1(b) shows the CSR representation of a sample sparse \(4\times 4\) matrix. The non-zeros are stored in a linear array in row-major format, as shown by the _values_ array. The corresponding column values for each non-zero are stored in the _col id_ array. The _row ptr_ array stores the starting location for each row in the _values_ array, with the last value pointing to the end of the values array. Each color represents one row of the matrix. Fig. 2(b) shows the computation order for a CSR matrix multiplication operation. The operation iterates through each adjacency matrix row, loading all the non-zero elements. Each row also loads a single partial sum matrix row, and each non-zero loads the corresponding row of the \(\mathbf{Z}\) matrix. From an architectural perspective, CSR increases the output matrix reuse, \(PS\), ensuring it is computed before moving on to the next row. However, this comes at the expense of irregular accesses to the combined feature matrix \(\mathbf{Z}\). This is highlighted by the span of \(\mathbf{Z}\) and \(PS\) shown in the top left of Fig. 2(b).
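The access pattern of Fig. 2(b) corresponds to the following reference loop, a Python sketch of CSR-based aggregation (the CSC loop is analogous with the roles of \(PS\) and \(\mathbf{Z}\) swapped):

```python
import numpy as np

def csr_aggregate(row_ptr, col_id, values, Z):
    # PS = A_hat @ Z with A_hat in CSR form. One row of PS stays resident
    # while its non-zeros are processed (good PS reuse), but the rows of Z
    # indexed by col_id arrive in an irregular order.
    n_rows = len(row_ptr) - 1
    PS = np.zeros((n_rows, Z.shape[1]))
    for r in range(n_rows):                       # regular access to PS
        for k in range(row_ptr[r], row_ptr[r + 1]):
            PS[r] += values[k] * Z[col_id[k]]     # irregular access to Z
    return PS
```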
#### II-B3 Block Compressed Sparse Row (BCSR)
Fig. 1(c) shows the BCSR representation of a sample sparse \(4\times 4\) matrix. This is a blocked version of the CSR format, with each value replaced with a dense 2D block. The non-zero blocks are each flattened in a row-major format. Each block is then stored contiguously in a linear array in a row-major format, as shown by the _values_ array. The corresponding column values for each non-zero block are stored in the _col id_ array. This is analogous to the CSR array, where each entry tracks a block instead of a single value. For this example, there are two possible col ids, 0 and 1, as there are two columns of blocks. The _row ptr_ array stores the starting location for each non-zero block in the _values_ array. This is analogous to the CSR array, where each entry points to the starting location of a row of blocks. For this example, the pointers are 0, 1, and 3, as one and two blocks are in the two rows, respectively. As each block is stored as a dense matrix, the two pointers only refer to the block locations rather than the individual values. Here each color represents one block of the matrix. The format trades off the benefits of reusing the \(PS\) and \(\mathbf{Z}\) matrices, allowing for regular access to both. The matrix multiplication operation iterates through each non-zero block of the adjacency matrix, as shown in Fig. 2(c). Each step loads the corresponding rows of the \(\mathbf{Z}\) and partial sum matrices. One further advantage of this approach is that since each block is stored independently, we can exploit tiling and tile order to optimize memory efficiency. However, this comes at the cost of additional storage and memory access requirements, as non-zero blocks are always stored densely. For example, the block highlighted in red has a single value but is stored as a dense \(3\times 3\) block, requiring loading all of the corresponding rows of \(Z\) and \(PS\).
#### II-B4 Multipass (MP)
As seen with CSR, CSC, and BCSR, the order of operation can significantly impact the number of memory accesses. One approach to eliminate the influence of access order is to perform a memory-centric approach. The aggregation step can be seen as a scatter-gather operation between different nodes, thus, we can exploit existing enhancements for such workloads. A multiple-pass approach, or Multipass [52, 53], iterates through the matrix multiple times, only performing the aggregation if all the dependent variables are already loaded into memory or cache. This significantly reduces the number of misses in the cache and allows for a more regular access pattern to the DRAM. The data loaded into the cache is kept until it is completely processed or is evicted based on some predetermined thresholds. This approach trades off memory access regularity for increased computation workload. Specifically, the matrix multiplication must complete multiple rounds or passes over the input data until all the nodes have been processed. Furthermore, as data can be used for multiple rounds, intermediate results must be held locally for longer.
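A minimal Python sketch of this idea follows; it is our own simplification of the approaches in [52, 53], with a sliding window of \(\mathbf{Z}\) rows standing in for the cache residency policy.

```python
import numpy as np

def multipass_aggregate(rows, cols, vals, Z, window):
    # Each pass holds a contiguous window of Z rows "in cache" and consumes
    # only the edges whose source row is resident; the rest wait for a later
    # pass. rows/cols are integer numpy arrays of the COO non-zeros.
    PS = np.zeros((int(rows.max()) + 1, Z.shape[1]))
    pending = list(zip(rows, cols, vals))
    lo = 0
    while pending:
        hi = lo + window                    # Z rows [lo, hi) are resident
        deferred = []
        for r, c, v in pending:
            if lo <= c < hi:
                PS[r] += v * Z[c]           # dependency already loaded
            else:
                deferred.append((r, c, v))  # retry in a later pass
        pending = deferred
        lo = hi % Z.shape[0]                # slide the window, wrapping around
    return PS
```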
## III Sparse Compressed Vectors
### _Improvements to Existing Sparse Formats_
Fig. 1: Various sparse representations with the pointer, index, and values arrays for a) CSC, b) CSR, c) BCSR, and d) SCV.

The baseline formats were designed with general sparse matrix multiplication in mind. However, improvements can still be tailored toward the ultra-sparse models of graph neural networks. The goal of the proposed method is to take advantage of the BCSR format while reducing its liabilities. The proposed method targets three specific improvements.
* Reducing the overhead of the format by storing the internal blocks as sparse.
* Exploring format options to maximize GNN computational efficiency.
* Exploiting sparsity for efficient memory accesses.
First, the major shortcoming of BCSR is the dense manner in which it stores its internal blocks. One possible solution is the compressed sparse blocks (CSB) format [54], which modifies the internal blocks to be stored in a sparse format. The CSB format stores the information in fixed-size square blocks, usually a power of 2. The non-zero blocks are each flattened in a row-major format. Each block is stored contiguously in a linear _values_ array. The corresponding row and column values for each non-zero are stored in the _row id_ and _col id_ arrays. The main difference between this and a COO format is that the row id and col id store the relative addresses with respect to the block and not the entire matrix. Thus, this requires \(\log_{2}B\) bits, where \(B\) is the block size, which is significantly lower than the \(\log_{2}N\) bits required in the COO format, as \(B\ll N\). The block pointer array, _blk ptr_, stores the starting location for each non-zero block in the values array. The user can determine the block order based on the application. The format incorporates the benefits of BCSR while reducing the memory requirement. The operation iterates through each row of non-zero blocks in the adjacency matrix. Each step loads the corresponding rows of the \(\mathbf{Z}\) and partial sum matrices. It also exploits tiling and tile order to optimize memory efficiency. This significantly improves the adjacency matrix performance but has little impact on the \(\mathbf{Z}\) and partial sum matrices.
Second, to improve computational GNN efficiency, we modify the operation order within the blocks themselves. CSB stores the non-zero values contiguously but does not give preference to columns or rows in their internal COO format. We propose using a column-major storage format that leads to a two-fold improvement. Column-based processing allows for parallelizing the computations within the block without creating imbalances. If the column is sufficiently large enough, multiple rows in the partial sum array can be processed in parallel without conflict. Also, column-based processing allows for regular accesses to the \(\mathbf{Z}\) matrix, as it adopts a sequential row-by-row access pattern.
Third, we explore how to prioritize reads from memory to improve memory access efficiency. For example, if an entire row within a sparse adjacency block is zero, we need not load the corresponding partial sum row. Similarly, a zero column in the adjacency block implies that the corresponding row of the \(\mathbf{Z}\) is not required. A row-centric ordering would negate the earlier parallelizing benefits and is suboptimal. Therefore to maximize efficiency, rather than working directly with square tiles like CSB, we propose further dividing the block into column vectors, and test this in Section V. Thus, we introduce the proposed _Sparse Compressed Vector_ (SCV) format.
### _Sparse Compressed Vectors (SCV)_
Fig. 2: Comparisons of the matrix multiplication processing order on the adjacency matrix \(\hat{\mathbf{A}}\) of the sparse formats: a) CSC, b) CSR, c) BCSR, d) SCV, and e) SCV with Z ordering (SCV-Z). The blocks highlighted in red and blue represent two blocks within the adjacency matrix and the corresponding \(\mathbf{Z}\) and \(PS\) matrices rows loaded corresponding to the matrix multiplication computation.

Fig. 1(d) shows the SCV representation of a sample sparse \(4\times 4\) matrix. The matrix is divided into a series of fixed-size block column vectors (2 in this example). The vector's contents are stored contiguously, and each non-zero column vector is stored in a row-major format, as shown by the _values_ array. The corresponding location values within a vector for each non-zero are stored in the _blk id_ array. In this example, the blk id associated with a value can be either 0 or 1 depending on its location within the column vector. The _blk ptr_ array stores the starting location for each vector in the values array. Each column vector in the matrix has a corresponding blk ptr; 8 column vectors of size two are present in this example. Here each color represents one column vector of the matrix. The SCV format can be interpreted as the CSB format with a block width of 1 column. Fig. 2(d) shows the computation order for an SCV matrix multiplication operation. The matrix multiplication operation iterates through each vector of the adjacency matrix, loading all the non-zero elements. Each vector also loads a single row of the \(\mathbf{Z}\) matrix, and \(PS\) matrix rows are loaded corresponding to the rows of the block vector. From an architectural perspective, there are two improvements from using SCV. First, the column-based vectors maximize the \(\mathbf{Z}\) matrix reuse for all non-zeros in the array. Furthermore, as the format implicitly stores the locations of the non-zero columns, the \(\mathbf{Z}\) matrix can be prefetched efficiently. Second, the row-major processing order improves \(PS\) matrix performance, as the fetched \(PS\) rows are reused multiple times before being evicted. This is highlighted by the span of the \(\mathbf{Z}\) and \(PS\) shown in the top left of Fig. 2(d).
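The SCV arrays can be built from a COO description with a single sort. The following NumPy sketch, whose function name and layout conventions are our own (following Fig. 1(d)), illustrates the construction:

```python
import numpy as np

def coo_to_scv(rows, cols, vals, shape, B):
    # Build the SCV arrays of Fig. 1(d) from COO data (integer arrays rows,
    # cols and a values array). Vectors are height-B column slices, ordered
    # row-major over (block-row, column); within a vector, non-zeros are
    # ordered by their row offset, which is what blk_id stores.
    n_rows, n_cols = shape
    n_block_rows = -(-n_rows // B)                # ceil(n_rows / B)
    vec = (rows // B) * n_cols + cols             # vector index of each non-zero
    order = np.lexsort((rows % B, vec))           # sort by vector, then offset
    blk_id = (rows % B)[order]
    values = np.asarray(vals)[order]
    # blk_ptr[v] .. blk_ptr[v+1] delimits vector v in the values array.
    blk_ptr = np.zeros(n_block_rows * n_cols + 1, dtype=np.int64)
    np.add.at(blk_ptr, vec + 1, 1)
    blk_ptr = np.cumsum(blk_ptr)
    return blk_ptr, blk_id, values
```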
It is important to distinguish SCV from tiled CSC and CSR operations. Tiling CSC or CSR merely breaks existing rows or columns into tiles without changing the processing order of the computation. While these changes allow for improving parallelization, they do not improve memory access efficiency. Specifically, tiled CSC and CSR are still inefficient with respect to the \(PS\) and \(\mathbf{Z}\) matrices, respectively. SCV changes the processing order of the computation such that it processes vectors orthogonal to its storage order. These changes retain the benefits of a tiled CSC operation while reducing the inefficiency with the \(PS\) matrix.
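The corresponding SCV traversal of Fig. 2(d) can be sketched as follows, consuming the arrays produced by the previous sketch. Note how one row of \(\mathbf{Z}\) is fetched once per vector and only a \(B\)-row window of \(PS\) is live at a time; this is an illustrative software loop, not the hardware dataflow itself.

```python
import numpy as np

def scv_aggregate(blk_ptr, blk_id, values, Z, shape, B):
    # PS = A_hat @ Z with A_hat in SCV form. Vector v covers column
    # c = v % n_cols and rows [br*B, br*B + B) with br = v // n_cols.
    n_rows, n_cols = shape
    PS = np.zeros((n_rows, Z.shape[1]))
    for v in range(len(blk_ptr) - 1):
        br, c = divmod(v, n_cols)
        z_row = Z[c]                              # one Z row per vector
        for k in range(blk_ptr[v], blk_ptr[v + 1]):
            PS[br * B + blk_id[k]] += values[k] * z_row
    return PS
```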
### _Z-Ordering for Improved Cache Efficiency (SCV-Z)_
Thus far, we have not addressed the optimal ordering of SCV blocks within the system. Though we initially describe SCV with a row-major block order, in principle, SCV can support any user-specific order based on the application. We explore this possibility with a modified Z-Morton ordering [49]. Z-Morton is a storage format that can map multidimensional data to a single dimension while prioritizing the locality of the data. It is a recursive storage format that first stores all elements in the top-left quadrant, then the top-right, bottom-left, and bottom-right quadrants. The same layout is used recursively within each quadrant. There are multiple choices for the operation order beyond Z-Morton, such as U-Morton and Hilbert layouts. We chose the Z-Morton layout due to its simplicity of encoding and limited preprocessing. However, as SCV uses vectors instead of square tiles, we use a modified version of Z-Morton ordering that considers a set of column vectors as a single block. For simplicity, we choose the set size as the number of rows of the column vector. Fig. 2(e) shows a sample matrix multiplication operation with SCV and a fixed block-sized Z-order (SCV-Z). The processing is broken up recursively into graph tiles. These tiles are then processed using the SCV format in the Z-Morton order, preserving the locality of the different tiles in the memory hierarchy. Though we only show a 2-dimensional Z-order tiling of the adjacency matrix, this can easily be extended to a 3-dimensional Z-order by including the tiling of the combined feature matrix.
The proposed approach allows for easy processor scalability by virtue of its processing order. Using the processing line marked in purple in Fig. 2(e), we can arrive at a new storing order for the _blk ptr_, _blk id_, and _values_ arrays that are shown in Fig. 1(d). Thus, blocks between different processors can be mapped evenly, efficiently distributing the non-zeros of the matrix when stored in the new order. This has two advantages over baseline architectures. First, any subsequence from the processing order also preserves data locality, and second, the smaller block sizes allow for fine-grain partitioning compared to row or column-based partitioning. The proposed format can be easily statically generated from the COO format and is nearly equivalent to creating a CSR or CSC matrix.
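For completeness, the Morton key used to order the vector sets can be computed by standard bit interleaving. The sketch below is our own illustration, with a fixed bit width as an assumption; it sorts the \(B\times B\) tiles of Fig. 2(e) into the Z-order processing sequence.

```python
def z_order_key(block_row, block_col, bits=16):
    # Morton key of a BxB tile (a set of B adjacent column vectors):
    # interleave the bits of the tile's row and column indices so that
    # tiles adjacent in key order are also adjacent in the matrix.
    key = 0
    for i in range(bits):
        key |= ((block_row >> i) & 1) << (2 * i + 1)
        key |= ((block_col >> i) & 1) << (2 * i)
    return key

# Processing order for a 4x4 grid of tiles, as in Fig. 2(e).
tiles = sorted(((r, c) for r in range(4) for c in range(4)),
               key=lambda rc: z_order_key(*rc))
# [(0,0), (0,1), (1,0), (1,1), (0,2), (0,3), (1,2), (1,3), (2,0), ...]
```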
## IV Architecture
### _Graph Processor_
We design and develop a multi-purpose configurable _processing element_ (PE) that can support various floating-point operations. Fig. 3 (a) shows the overall architecture of the proposed PE. The PE consists of three input ports (\(a\), \(b\), and \(c\)), an output port (\(r\)), a local floating-point register (\(M\)), five multiplexers, a floating-point adder, and a floating-point multiplier. The PE has two modes of operation: a _vector mode_ that only uses the input and output ports, and a _memory mode_ that uses the internal memory register. The list of supported floating-point operations for the PE is shown in the table in Fig. 3 (b). The PE is purposely designed for general-purpose processing as a fast parallel _vector processor engine_ (VPE) for single instruction multiple data (SIMD) operations. The proposed VPE is shown in Fig. 3 (c). The VPE accepts a single command that is broadcast to all of its PEs. Each PE accepts three floating-point values from the respective queues for its three input ports. The number of PEs, \(N_{PE}\) in a VPE, determines the size of the vector for SIMD processing. This allows for the VPE to support a variety of instructions beyond the traditional operations required for aggregation and combination. Subsequent sections of the architecture will highlight how these operations support the required functionality. Finally, the overall architecture of the _graph processor_ is shown in Fig. 3 (d). Each processor consists of \(N_{VPE}\) VPEs, each with its own queue for instructions and data.
The processor uses multiple controllers for the different aspects of the data flow. First, the _cmd/addr generation_ block takes the input data stream and performs command and address translation: the input commands are mapped to the PE commands shown in Fig. 3 (b), and the input data is mapped to local addresses within the _banked local shared memory_. Second, the arbiter and distributor distribute the workload among the different VPEs based on availability and hazards. Last, the memory controller interfaces with the local memory to monitor bank or memory conflicts and dynamically stalls the processor.
While we do propose a new accelerator, care was taken to ensure that the multi-level processor would be as simple as possible, with the main improvements and novelty coming from the proposed SCV-Z format. This was done to ensure that the proposed format could work efficiently in a general vector processor to maximize its utility.
### _Queues: Distribution and Hazard Handling_
The design uses data queues to create an asynchronous interface to the data stream. This helps reduce stalls by creating a buffer against data conflicts in the design. Fig. 3 (c) shows a close-up of one sample queue architecture. Each queue consists of four internal queues: a command queue holding the SIMD command and three data queues corresponding to the three inputs of the VPE. The command queue has a depth of \(D\) and a width of 4 bits to support all instruction types. Each data queue has a depth \(D\) and a width of \(Wd_{addr}\) bits, where \(Wd_{addr}\) is the width of the local address containing the required floating-point data. Each data queue can operate in two modes, loading either a scalar value that is broadcast to all \(N_{PE}\) PEs of the VPE or a sequential set of \(N_{PE}\) values corresponding to a vector. The data loaded from memory is mapped to the corresponding PE as shown in Fig. 3 (c). The queues are designed as asynchronous FIFOs to allow for easier data flow control without the need to introduce stalls.
Fig. 3: Architecture of the proposed queue-based vector processor. a) Structure of a single floating-point processing element. b) Supported set of operations for the PE. c) A vector processing element consisting of as many PEs as the width of the vector and its corresponding input queue. d) Complete vector processor architecture consisting of multiple VPEs, local memory, and control logic.

Fig. 4: Potential RAW hazards for the input to PE queue 1 and their mitigation strategy in the proposed architecture. The red boxes highlight cross-queue RAW hazards. The blue and purple boxes highlight potential hazards within the same queue and how they can be mitigated. The green boxes highlight the location for which there are no hazards and beyond.

As each queue handles data from different addresses, it is important to properly account for hazards. The memory model assumes that there is a two-cycle latency between when a result is written to a memory location and when it can be read back by any VPE. As such, locations accessed at least three cycles apart do not cause hazards; in Fig. 4, these are highlighted as green outlined boxes. To analyze hazards, we look at the nature of the aggregation and combination operations. At their core, all operations can be written in the form \(\mathcal{C}=\mathcal{A}\times\mathcal{B}+\mathcal{C}\), where \(\mathcal{A}\) and \(\mathcal{B}\) are read-only matrices. Therefore, the architecture must only account for _read-after-write_ (RAW) hazards, which arise if the output \(\mathcal{C}\) is required for subsequent computations. We handle these hazards with the following modifications. First, RAW hazards occur if the output address, \(\mathcal{C}\), conflicts with the output address of any of the other parallel queues. Fig. 4 shows this as highlighted in red outlined boxes. To avoid this type of hazard, we perform a RAW hazard check and ensure that data mapped to the same output address are mapped to the same VPE or stalled. Second, within the same queue, there are three scenarios: the conflict is one cycle away (blue outlined boxes on PE Queue 1), the conflict is two cycles away (purple outlined boxes on PE Queue 1), or the conflict is three cycles away (green outlined boxes on PE Queue 1). For the one-cycle scenario, we handle the hazard by replacing the initial _Vector-Multiply-Add_ VPE command with a _Vector-Multiply-Accumulate_ command, which allows the result to be stored for the immediately following cycle. In the two-cycle scenario, we implement data forwarding within the PE as an output buffer: when this scenario is detected, the data is forwarded to the input of the VPE, bypassing the memory block. Last, data three or more cycles away from a conflict causes no hazard.
The arbiter and distributor block assigns the incoming data stream into the respective queues. The block first resolves cross-queue RAW hazards by assigning conflicting data to the same queue. The block then resolves within queue hazards with alterations to the command. The arbiter is designed to operate at a higher throughput than the queues and PE so that it can always keep the queues full.
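The within-queue decision logic can be summarized by the following sketch. The function name, the string return values, and the explicit three-cycle horizon are our own abstractions of the scheme in Fig. 4; cross-queue conflicts are assumed to have already been resolved by the arbiter routing same-address work to the same queue.

```python
def classify_within_queue_hazard(out_addr, issued_out_addrs):
    # Classify a potential RAW hazard for a new entry writing out_addr,
    # given the output addresses already issued to the same queue (most
    # recent last). Distances follow the two-cycle write-to-read latency:
    # 1 cycle  -> switch Vector-Multiply-Add to Vector-Multiply-Accumulate,
    # 2 cycles -> forward the PE result past the memory block,
    # 3+ cycles -> no action needed.
    if issued_out_addrs and issued_out_addrs[-1] == out_addr:
        return 'multiply-accumulate'
    if len(issued_out_addrs) >= 2 and issued_out_addrs[-2] == out_addr:
        return 'forward'
    return 'no-hazard'
```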
The above arbitration scheme was designed to not place any undue burden on the vector processor. Possible ways to improve the RAW hazard handling include dynamic workload distribution between VPEs and support for cross-VPE accumulation and reduction. Also, it is worth noting that with the proposed system, stalls are reduced significantly, and additional improvements may not be significant.
### _Banked Local Shared Memory_
The proposed dataflow requires access to high bandwidth local memory as each cycle, we require \(3\times N_{VPE}\) data reads and \(N_{VPE}\) writes, where \(N_{VPE}\) is the number of VPEs within a processor. Within the dataflow also, there are differing requirements for each data type in the computation \(\mathcal{C}=\mathcal{A}\times\mathcal{B}+\mathcal{C}\). The inputs \(\mathcal{A}\) and \(\mathcal{B}\) require only the ability to perform \(N_{VPE}\) parallel reads each, and \(\mathcal{C}\) requires \(N_{VPE}\) parallel read and writes. We use four enhancements to the local memory system to support these requirements. First, we segregate the data within the local system into dedicated memories for \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{C}\), allowing for tailored access based on the requirement. Second, the memories are designed to support a limited broadcast capability if there are parallel reads with the same address. This is useful when the weighted adjacency matrix (\(\mathbf{\hat{A}}\)) or the combined feature matrix (\(\mathbf{Z}\)) is reused over multiple computations. Third, we make use of four-port SRAM modules [55, 56] to increase simultaneous read and write accesses. These allow for four parallel reads or writes access to the memory. The \(\mathcal{A}\) and \(\mathcal{B}\) matrices would configure the memory to allow four parallel reads each. Similarly, the \(\mathcal{C}\) matrix would be configured for two reads and two writes. Fourth, when the \(N_{VPE}\) is greater than the data memory bandwidth, we employ memory banking [57]. Banked memories divide memory locations among multiple SRAMs (banks), allowing each additional bank to provide an additional two reads and two write ports for the \(\mathcal{C}\) matrix or four read ports for the \(\mathcal{A}\) and \(\mathcal{B}\) matrices. However, if the number of requests exceeds a bank's bandwidth, this could cause a bank conflict. The memory processor resolves bank conflicts by stalling the requesting VPE.
### _Aggregation Operation_
We treat the aggregation operation as a sparse matrix multiplication between the ultra-sparse adjacency matrix \(\mathbf{\hat{A}}\) and the dense combined feature matrix \(\mathbf{Z}\), as shown in Eq. (3). In the direct aggregation case, as in GCN, the rows of the feature matrix are added together based on the presence of ones in a row of \(\mathbf{\hat{A}}\). Other GNN models can be considered special cases of weighted aggregation, where the ones of the adjacency matrix are replaced with appropriate weights: the degrees of the nodes for GCNs, or the attention values for GATs. Fig. 5 (a) shows the processing order for the weighted adjacency matrix while performing aggregation. In this example, the number of VPEs is 2 and the size of the column vector is 6. The arbiter fills non-zero data into the queues while prioritizing RAW hazards. For each non-zero, the address of the row of the combined feature matrix, \(\mathbf{Z}\) (corresponding to the column of the adjacency matrix), and the address of the output partial sums matrix, \(PS\) (corresponding to the row of the adjacency matrix), are loaded into the queues. The resulting partial sums are written back into the same shared memory location. The \(PS\) matrix is loaded into the shared memory once at the beginning and only ejected when moving to a new set of rows. The rows of \(\mathbf{Z}\) can easily be prefetched by observing the set of non-zero blocks in the SCV format. Similarly, as the data is stored in the queue in increasing order of the \(\mathbf{Z}\) rows, a row may be evicted as soon as it is no longer addressed in the queues. When processing from the queues, the VPE reads the addresses for \(PS\), \(\mathbf{\hat{A}}\), and \(\mathbf{Z}\) and loads the values from the shared memory. The results are written back to the same \(PS\) address. This process is repeated until all the elements of the weighted adjacency matrix have been processed.
### _Combination Operation_
Though the proposed format does not directly improve the combination operation, the proposed architecture is general enough to support combination. We treat them as a sparse matrix multiplication between the sparse feature matrix, \(\mathbf{H}\), and the dense weight matrix, \(\mathbf{W}\) to create the combined feature matrix \(\mathbf{Z}\) as shown in Eq. (2). Fig. 5 (b) shows the processing order for the \(\mathbf{H}\) matrix while performing combination. The processor is first pre-loaded with the partial sums, \(PS\) of \(\mathbf{Z}\) of size \(N_{VPE}\times N_{PE}\). The processor performs an output stationary matrix multiplication operation where the row of \(\mathbf{W}\) is broadcast to each VPE, and each scalar in the column
Fig. 5: Mapping the aggregation and combination operations to multiple VPEs. a) The aggregation operation iterates through the list of non-zeros and assigns them greedily to the queues after appropriate hazard checks. b) The combination operation iterates through the non-zero column vectors and assigns them with the corresponding weight row the the queue.
vector of \(\mathbf{H}\) is sent to one VPE and internally broadcast to all PEs. The addresses of the row and column vectors are loaded into the queues for processing. The \(PS\) matrix is loaded into the shared memory once at the beginning and only ejected when moving to a new output block. The module can skip a computation if all the elements of the \(\mathbf{H}\) vector are zero. The rows of \(\mathbf{W}\) can be prefetched sequentially as we process the columns of the \(\mathbf{H}\) matrix. The \(\mathbf{W}\) matrix can be evicted once it is completely consumed in the queues. When processing from the queues, the VPE reads the addresses for \(\mathbf{H}\) and \(\mathbf{W}\) and loads the values from the shared memory. This process is repeated until all the output blocks have been processed. The proposed architecture is not limited to the above dataflow and can easily be modified to support a weight- or input-stationary dataflow.
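A minimal software analogue of this output-stationary combination is sketched below; the dense NumPy buffer stands in for the shared-memory \(PS\) block, and the zero-column skip mirrors the optimization described above. This is our own illustration, not the VPE implementation:

```python
import numpy as np

def combine(H, W):
    """Output-stationary Z = H @ W, processing one sparse column of H at a time."""
    PS = np.zeros((H.shape[0], W.shape[1]))   # partial sums pre-loaded once
    for k in range(H.shape[1]):
        col = H[:, k]
        if not col.any():                     # skip an all-zero column vector of H
            continue
        PS += np.outer(col, W[k])             # broadcast row W[k] against column k of H
    return PS

H = np.array([[0.0, 1.0], [2.0, 0.0], [0.0, 0.0]])
W = np.random.rand(2, 3)
assert np.allclose(combine(H, W), H @ W)
```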
## V Evaluation
### _Methodology_
We evaluate the advantages of the proposed method for GCNs applied to various small and large datasets. The datasets vary in node density as shown in Fig. 6. Each experiment on these datasets varies the number of neurons in each layer of the GCN and the number of processing elements. We sweep the configuration and hyperparameters taken directly from a number of GNN models, such as GCN [10], GraphSAGE [13], GIN [12], and GAT [11], to generate aggregated results. We developed a simulation tool to model the computational aspects of aggregation and combination to evaluate the proposed method. The tool calculates statistics such as the number of cycles and on-chip SRAM memory accesses for inference. The tool supports both traditional dataflows as well as the proposed dataflow. We evaluate the proposed method and compare it with three baselines: a compressed sparse column (CSC) approach, a compressed sparse row (CSR) approach, and a multiple pass (MP) approach. The tool also models the shared memory system, generating a memory access trace. We use this memory access trace to evaluate the cache and DRAM performance with Ramulator [58]. Ramulator is configured with default HBM settings and 1 Gb/s bandwidth.
For evaluation of the performance of SCV-GNN, we used the benchmark graph datasets listed in Fig. 6 a). The datasets are distributed by density, categorizing them into ultra-sparse and highly-sparse datasets. Fig. 6 b) characterizes the sparsity with respect to the size of the graph, further categorizing them as large and small graphs. The datasets are summarized in Table I. Note that the ogbn prefix is omitted for space in subsequent references.
The tool is designed to model the proposed dataflow as follows. The data for aggregation and combination is streamed into the processor as shown in Section IV. This is processed cycle-wise to determine the number of multiply-and-accumulate operations (MACs), internal register reads/writes, inter-PE communication, and on-chip SRAM reads/writes from the array. All comparisons to baselines are iso-MAC and iso-memory to ensure a fair comparison. The reported memory accesses and memory access savings pertain to both SRAM and DRAM, as the proposed method is designed to target data locality within both their access patterns. The reuse of variables from the proposed sparse format ensures maximum utilization at all levels of the hierarchy.
We model the memory hierarchy in two steps. First, the local shared memory bank is modeled within the simulator, keeping track of all the variables loaded into the processor. Second, if the data is not present in the shared local memory or needs to be written out, the simulator generates a memory trace file of all accesses going to and from the lower memory levels. We use this trace file in Ramulator to model a complete memory hierarchy and test the lower levels of memory. The local shared memory is modeled as partitioned between 64kB for the adjacency matrix, 64kB for the combined feature matrix, and 256kB for the output matrix, for a processor memory of 384kB. Ramulator models the shared memory, a 2MB cache,
\begin{table}
\begin{tabular}{l|r|r|r|r} \hline Dataset & Nodes & Edges & Feature & Adjacency \\ & & & size & density\(\%\) \\ \hline ogbn-mag & 1939743 & 21111007 & 128 & 5.61E-06 \\ ogbn-products & 2449029 & 61859140 & 100 & 1.03E-05 \\ ogbn-arxiv & 169343 & 1166243 & 128 & 4.07E-05 \\ Pubmed & 19717 & 88651 & 500 & 2.28E-04 \\ Cora & 19793 & 126842 & 8710 & 3.24E-04 \\ Citeseer & 3327 & 9228 & 3703 & 8.34E-04 \\ Reddit & 232965 & 114615892 & 602 & 2.11E-03 \\ ogbn-proteins & 132534 & 39561252 & 8 & 2.25E-03 \\ Amazon CoBuy Computer & 13752 & 491722 & 767 & 2.60E-03 \\ Amazon CoBuy Photo & 7650 & 238163 & 745 & 4.07E-03 \\ \hline \end{tabular}
\end{table} TABLE I: Characteristics of the datasets used for evaluation.
Fig. 6: Characteristics of the datasets used for evaluation. a) The datasets are sorted by their sparsity. The datasets are divided into ultra-sparse and highly-sparse based on their characteristics during evaluation. b) Size of the graph versus the sparsity of the graph. The datasets are further divided into large and small graphs.
and the DRAM with 4GB. We simulate our architecture with 8 VPEs of 64 PEs each, for 512 floating-point MACs in total. We give the baselines we simulate against an identical MAC configuration. Our architecture's VPE queues use a depth of 16. We choose an SCV vector size of 512 for our evaluation unless otherwise noted. Datasets missing from the results are due to the memory limitations of the simulation hardware and software available. However, the proposed architecture is still expected to run aggregation in these cases.
We only present results with respect to aggregation, as only the adjacency matrix can be assumed to have significant sparsity for general GNN processing. However, for combination, the proposed architecture attains latency that is at least as good as systolic arrays [59], which are often used for the dense combination step [44]. This is because our VPE architecture generally supports a higher communication bandwidth with shared memory, eliminating systolic array warm-up and cool-down times that cause underutilization.
### _Computation Cycles_
We explore the effect the various sparse formats have on the number of computation cycles. The results do not include latency from memory, which is discussed later. Fig. 7 shows the comparative analysis of SCV-GNN versus traditional sparse formats in terms of computational cycles. All the results show the relative speedup of SCV over CSC, CSR, and MP. Column-based processing, such as CSC and SCV, leads to significant improvements in the number of computation cycles, especially over row-based approaches such as CSR. As shown in the figure, for the ultra-sparse datasets SCV-GNN has a significant advantage over CSR, with a geometric mean speedup of \(5.03\times\). The speedup on the highly-sparse datasets is lower, with a geometric mean of \(26\%\). Compared to CSC, the proposed method has a geometric mean speedup of \(36\%\).
There are several reasons why there is an improvement in computational performance. First, column-based approaches maximize parallelism allowing different VPEs to work in parallel on different output rows and for large intervals before re-addressing the same output row. Second, the CSC and CSR approaches map a fixed set of output rows to a PE, which limits performance and leads to significant workload imbalances resulting in idle cycles. The significant speedups over CSR can be attributed to the reduction in idle cycles, as shown in Fig. 8. The proposed method achieves a geometric mean of \(327\times\) reduction in the ultra-sparse datasets and a \(1.65\times\) reduction in highly-sparse datasets. This also explains the difference in the performance of the ultra-sparse and highly-sparse datasets.
### _Memory Access_
Fig. 9 shows the comparative analysis of SCV-GNN versus traditional sparse formats in terms of memory accesses from the processor. All the results show the relative reduction in the number of memory accesses of SCV/SCV-Z over CSC, CSR, and MP. The results are normalized as the total memory traffic of the baseline divided by the total memory traffic of SCV/SCV-Z, giving the improvement factor.
As shown in the figure, for the highly-sparse datasets, SCV-Z has a significant reduction over CSR and CSC, with geometric mean improvements of \(4.37\times\) and \(3.29\times\), respectively. On the ultra-sparse datasets, the method improves over CSR and CSC with geometric mean reductions in memory accesses of \(13\%\) and \(34\%\), respectively. This can be attributed to the better utilization of the data within the processor due to the column-wise processing as well as the limited partial sum range. The Z-order for SCV allows for efficient memory management, significantly reducing the number of data accesses.
### _DRAM Mean Access Time_
Fig. 10 shows the comparative analysis of SCV-GNN versus traditional sparse formats in terms of mean DRAM latency
Fig. 8: Reduction in the number of idle cycles normalized to CSR. Ultra-sparse datasets show higher reduction when compared to highly-sparse datasets.
Fig. 7: Speedup in computation cycles normalized to other formats without memory-induced stalls. Ultra-sparse datasets show higher speedups with the proposed method compared to highly-sparse datasets.
Fig. 9: Reduction in the overall memory traffic to the cache. Each column shows the improvement factor of the proposed SCV/SCV-Z over baseline sparse processing formats. SCV outperforms the baselines for all test cases.
and the _mean access time_ (MAT). All the results show the relative reduction in the memory traffic to the DRAM of SCV-Z compared to CSC and CSR. MAT is measured as DRAM active cycles divided by the number of requests from Ramulator. Each column is normalized to show the improvement over CSR (MAT of CSR/MAT of selected format). The figure shows that for the highly-sparse datasets, SCV-Z has a significant reduction, with a geometric mean improvement of \(2.48\times\) over CSR. The method does not yield a significant reduction on the ultra-sparse datasets, with a geometric mean improvement of \(4\%\) over CSR. Overall, SCV outperforms CSC by \(24\%\) and \(2.88\times\) for ultra-sparse and highly-sparse datasets, respectively. This stems from better data utilization within the processor due to the column-wise processing and the limited partial sum range. As we are only measuring the MAT, most of the memory benefits have already been captured at the output of the shared local memory.
### _Overall Aggregation Performance_
Using Ramulator, we measure the mean access time (MAT) of the lower levels of memory, the cache, and the DRAM. For our simulations, we have modeled a cache and a DRAM as defined in the methodology. Ramulator takes the memory traces from our simulator and internally measures the CPU latency and MAT, which are fed back into our simulator. This MAT reflects the average time the underlying memory subsystem takes to respond to the compute engine for the given trace. We feed this MAT back into our simulator to accurately estimate the overall performance of the proposed methods and the baselines. When the simulator measures the time to access data in the local scratchpad, the data can be retrieved in a single cycle if it is a hit (data is present in the scratchpad). However, if it is a miss, we use the MAT to estimate the average time to retrieve the data from the memory subsystem. On a miss, the corresponding VPE is considered stalled for that duration. Thus, we summarize the overall performance improvement for aggregation in Fig. 11. The figure shows that the proposed SCV-Z format significantly outperforms all sparse formats for all datasets, achieving geometric mean speedups of \(7.96\times\), \(7.04\times\), and \(6.51\times\) over CSC, CSR, and MP, respectively, with \(10\times\sim 29\times\) improvement on some of the larger datasets.
Fig. 11: Overall speedup of the proposed SCV-Z method compared to other sparse formats, including memory-induced stalls. The speedups are shown on a logarithmic scale.
Fig. 12: Speedup of various SCV vector heights compared to a height of 128.
Fig. 10: Reduction in the mean access time (MAT). Each column is normalized to show the improvement over CSR (MAT of CSR / MAT of selected format). The results are shown separately for the ultra-sparse (top) and highly-sparse (bottom) datasets. SCV outperforms the baselines for all test cases.
Fig. 13: Speedup of SCV-like formats with multiple columns compared to SCV with a width of 1. Results are graphed using a logarithmic scale. The number of columns is swept from 1 to 64 in powers of 2. The results are normalized to SCV with a column width of 1. All formats use a column height of 64.
### _SCV Parameter Sweep_
Thus far, we chose a block height of 512 based on the amount of data that would fit within the local memory. However, it is of interest to explore how the choice of SCV vector height impacts total latency across the various datasets. To ensure a fair comparison, we fix the system's total memory and number of PEs while changing the number of rows in an SCV column. Increasing the height of the SCV column increases the span of the \(PS\) matrix accessed, requiring more memory to store at once, but increases input feature reuse. To ensure an iso-memory comparison, the feature dimension of the partial sums stored is reduced to compensate for the increased span of \(PS\), keeping the total memory allocation unchanged. Fig. 12 shows the effect on latency of varying the column height from \(128\) to \(2048\) in powers of 2. For each dataset, an optimal height exists where the maximum overall reuse of the adjacency, input feature, and partial sum matrices is achieved. However, this point depends on the sparsity and size of a given dataset, as there is no single optimal size for all datasets. From Fig. 12, we show that \(512\) and \(1024\) are the most performant choices, with geometric mean speedups of 7.1% and 5.5% over \(128\), respectively, validating our earlier results.
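As a back-of-the-envelope illustration of this iso-memory compensation (assuming 4-byte elements and the 256kB \(PS\) partition from the methodology; both are assumptions for the example), the feature chunk of \(PS\) kept on chip shrinks as the column height grows:

```python
def ps_feature_chunk(ps_mem_bytes, column_height, bytes_per_elem=4):
    """Feature columns of PS that fit on chip for a given SCV column height."""
    return ps_mem_bytes // (column_height * bytes_per_elem)

for h in (128, 256, 512, 1024, 2048):
    print(h, ps_feature_chunk(256 * 1024, h))  # 512 features at h=128 down to 32 at h=2048
```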
We further explore alternatives to the SCV format that do not limit tiling of the adjacency matrix to single-column-wide vectors. To explore this further, we investigate the impact on latency of sweeping the number of column vectors from \(1\) to \(64\) in powers of 2. Because increasing the column vectors only affects what data is accessed, varying it does not change the amount of on-chip memory. Fig. 13 summarizes the effect of tile width on the latency of the system. The figure shows that the performance of the system deteriorates as the number of columns in a tile increases. This can be attributed to the more efficient reuse of SCV over CSB or similar multiple-column tiles. Increasing the columns in a tile increases the number of non-zero tiles while distributing the non-zeros across multiple columns. The issue lies in the fact that even if just a single non-zero is present within a tile, all the corresponding rows in the combined feature matrix must be accessed, even for columns that contain no non-zeros. This decreases the granularity at which zeros within the adjacency matrix can be skipped. This affects denser datasets less, as SCV has fewer opportunities for zero skipping there, but ultra-sparse datasets see a significant slowdown.
### _Scalability Analysis_
We also study the effect of increasing the number of assigned processors on overall latency. We scale the system by increasing the number of processors and their caches but keep the DRAM bandwidth fixed. We statically split the workload using the proposed Z access order of the adjacency, combined feature, and output matrices so that each processor handles roughly an equal number of adjacency non-zeros. The proposed SCV method allows for fine-grain partitioning at the vector level, unlike complete columns or rows in CSR or CSC approaches. Furthermore, the data-locality preserving nature of the Z-order ensures that the cache efficiency of the newly introduced caches does not significantly deteriorate.
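The locality-preserving ordering used here is, as we read it, the standard Morton (Z-order) curve over the tile grid; a minimal sketch of the bit interleaving that produces such an index is given below. The function name and tile-grid example are our own illustration:

```python
def z_index(tile_row, tile_col, bits=16):
    """Morton (Z-order) index of a tile: interleave row and column bits."""
    idx = 0
    for b in range(bits):
        idx |= ((tile_col >> b) & 1) << (2 * b)      # column bits on even positions
        idx |= ((tile_row >> b) & 1) << (2 * b + 1)  # row bits on odd positions
    return idx

# Tiles sorted by z_index visit 2x2 neighbourhoods before moving on, so nearby
# rows of PS and Z are touched close together in time.
tiles = sorted(((r, c) for r in range(4) for c in range(4)), key=lambda t: z_index(*t))
print(tiles[:8])  # [(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
```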
As multiple processors can introduce data hazards, we use a controller to detect when two processors are working on the same output tile. When this occurs, the controller redirects the second processor to use a buffer region of the memory for a partial sum read and write to avoid a RAW hazard. At the end of processing the entire aggregation, multiple results for the same output are merged before continuing to the next task. This requires additional memory access and computation time overheads. There are no additional communication overheads as all inter-processor data transfer happens through the shared memory system. The simulator is modified to account for these overheads in the multi-processor scenario.
We sweep the number of processors from 2 to 64 in powers of 2, and the speedup shown is normalized to a single processor, as shown in Fig. 14. The points marked by diamonds show the absolute speedup from increasing the number of processors, and the bars show the speedup after accounting for the latency overheads of merging the results.
From Fig. 14, it is shown that in ultra-sparse datasets, the speedup increases as we increase the number of processors up to 8 or 16 processors, after which adding more processors is detrimental. Further analysis shows that the latency is primarily limited by memory access time. Initially, when the number of processors increases, the extra on-chip caches increase cache-to-processor bandwidth, reducing the average access time. However, this also decreases the reuse within each cache, as accesses to the same addresses become scattered across multiple processors. This effect becomes dominant with more than 16 processors, outweighing the benefit from the increased bandwidth. Denser datasets see substantial reuse even with smaller chunks and still enjoy significant speedup due to reduced computation time and increased cache bandwidth.
From Fig. 14, we show that the actual speedup is fairly close to that without mergers. Increasing the number of processors increases the latency penalty as there are more opportunities for conflicts, requiring mitigation. However, these conflicts only occur when processors work on the same output at the
Fig. 14: Speedup from increasing processors 2 to 64 in powers of 2. Diamond-shaped points represent the highest obtainable speedup if mergers of results from multiple processors weren’t required, while bars represent the actual speedup.
same time. The chance of a conflict decreases with larger datasets, as the time between accessing the same output tile becomes longer. Thus the proposed method is able to achieve speedups near the maximum afforded by the memory system.
### _Comparison With Previous Work_
We compare the performance of SCV-Z against current accelerators for the aggregation operation. The three accelerators we compare against are GPU [60], AWB-GCN [47], and GCNAX [61]. GPUs use BCSR for computing the SpMM operation in aggregation. As explained in Section III, this is not suited for the ultra-sparse nature of GNN aggregation, and the large memory overhead of storing blocks densely worsens for larger block sizes. However, GPUs are designed to perform more efficiently at larger block sizes and typically use sizes 16 or 32. We sweep block sizes from 4 to 64 and show the speedup of SCV-Z against the GPU in Fig. 15. AWB-GCN is an accelerator that uses CSC for storing the adjacency matrix and computes the outer product for SpMM processing. It performs efficient load balancing but suffers in the partial sum memory accesses incurred by CSC and the outer product. GCNAX utilizes a well-known optimization for consecutive matrix multiplications, where the processing order of tiles can be changed to reduce the number of memory accesses. Their system supports dynamically choosing this modified processing order based on the input matrices. However, their system is configured with non-columnar adjacency tiles, leading to inefficient feature matrix accesses. Because combination is orthogonal to the proposed method, we only compare against their aggregation step. The speedups across different datasets versus each accelerator are shown in Fig. 16. We ensure that for each comparison, both SCV-Z and the baseline accelerators have an equivalent number of MACs and are allocated equal on-chip memories overall. For GPU, we use block size 16 BCSR as our comparison point. Across all datasets, we see geometric mean improvements of \(68.5\times\), \(8.2\times\), and \(8.1\times\) over GPU, AWB-GCN, and GCNAX, respectively. Though we emulate the function of the other accelerators to the best of our ability, these speedups are estimates based on the processing order of the SpMM operation.
## VI Related Work
Though many AI/ML hardware accelerators exist, few works target GNNs [62]. Most DNN accelerators have targeted CNNs or transformers [29, 25, 26, 27], whose designs do not translate well to the ultra-sparse nature of graph neural networks. Additionally, graph-specific processors like [63] explore scatter-gather graph systems but do not explore them in the context of graph neural networks or graph tiling.
The unique challenges of GNNs have resulted in GNN-specific accelerators. One challenge is the extreme sparsity and uneven edge distribution within a graph. AWB-GCN [47] addresses this through vector queues and fine-grain workload balancing. This reduces under-utilization but necessitates strict mapping of PEs to rows of the adjacency. They do not explore flexible mapping strategies and use a CSC-based tiling strategy, which we have shown is suboptimal. Similar issues can be seen following the same mapping and tiling strategy for systolic arrays [38].
One approach to solving the irregularity problem for GNNs is through partitioning or reordering schemes [64, 65, 66]. The aim is to separate the graph into highly connected clusters, increasing the efficiency of the fetch and compute. These subgraphs can either be parsed densely or combined with sparse multiplication operations to improve efficiency. While these approaches lead to significant benefits, they are orthogonal to the proposed improvements and can be combined with SCV to improve its efficiency further.
HyGCN [44] is another GNN accelerator that tackles the sparsity in aggregation through adaptive tiling with sliding and shrinking windows, reducing the number of redundant accesses. However, each tile can still contain multiple columns of non-zeros and must be dynamically computed, and their column-centric tiling strategy, like tiled CSC, is shown to be suboptimal. Other adaptive tiling approaches [67, 39, 68] have similar inefficiencies that our proposed method overcomes. Multipass approaches, like [46], avoid the effect of tiling order on the lower levels of the memory at the expense of additional computations and control. However, our experiments show that proper tiling and ordering, as in the proposed method, can outperform multipass approaches in aggregate.
Cambricon-G [45] explores the data locality and tiling aspect of GNNs by envisioning the operations as cuboids. The
Fig. 16: Speedup of SCV-Z over GPU, AWB-GCN, and GCNAX for aggregation. GPU is modeled as BCSR with block size 16.
Fig. 15: Speedup of SCV-Z over various block sizes of BCSR.
use of cuboids allows for an efficient transfer of information between adjacent vector processing units. However, the fixed nature of the mapping and tiling has issues similar to blocked approaches like CSB and requires additional overhead for workload balancing and prefetching. In particular, due to the use of a tiled CSR approach for internal storage, the cuboid requires additional processing to ensure effective prefetching. SCV inherently stores this information locally, and the processing order is better optimized for workload balancing. While the work in [69] explored tiling strategies for GNNs, their analysis of aggregation was limited to CSR-based tiling, which we have shown to be suboptimal. None of the existing tiling works have explored both the vector-based approach and the locality-driven ordering of SCV-Z.
## VII Conclusion
This paper proposes a novel sparse format, SCV/SCV-Z, designed to maximize GNN aggregation efficiency in inference. The proposed format maximizes parallelism while minimizing memory accesses during aggregation. The proposed method outperforms baseline sparse formats by an average factor of \(6.51\times\sim 7.96\times\) over a variety of GNN datasets. While the primary focus of the paper is the new sparse format for the aggregation step, any improvements for combination are orthogonal to the proposed method and can be combined with it. Future work will be directed towards (i) adapting this work to enhance the training of GNNs, (ii) designing a unified hardware accelerator to support all GNN operations during training and inference, and (iii) further co-design of the hardware and sparse format to improve performance.
|
2305.14174 | Improving Stability and Performance of Spiking Neural Networks through
Enhancing Temporal Consistency | Spiking neural networks have gained significant attention due to their
brain-like information processing capabilities. The use of surrogate gradients
has made it possible to train spiking neural networks with backpropagation,
leading to impressive performance in various tasks. However, spiking neural
networks trained with backpropagation typically approximate actual labels using
the average output, often necessitating a larger simulation timestep to enhance
the network's performance. This delay constraint poses a challenge to the
further advancement of SNNs. Current training algorithms tend to overlook the
differences in output distribution at various timesteps. Particularly for
neuromorphic datasets, inputs at different timesteps can cause inconsistencies
in output distribution, leading to a significant deviation from the optimal
direction when combining optimization directions from different moments. To
tackle this issue, we have designed a method to enhance the temporal
consistency of outputs at different timesteps. We have conducted experiments on
static datasets such as CIFAR10, CIFAR100, and ImageNet. The results
demonstrate that our algorithm can achieve comparable performance to other
optimal SNN algorithms. Notably, our algorithm has achieved state-of-the-art
performance on neuromorphic datasets DVS-CIFAR10 and N-Caltech101, and can
achieve superior performance in the test phase with timestep T=1. | Dongcheng Zhao, Guobin Shen, Yiting Dong, Yang Li, Yi Zeng | 2023-05-23T15:50:07Z | http://arxiv.org/abs/2305.14174v1 | Improving Stability and Performance of Spiking Neural Networks through Enhancing Temporal Consistency
###### Abstract
Spiking neural networks have gained significant attention due to their brain-like information processing capabilities. The use of surrogate gradients has made it possible to train spiking neural networks with backpropagation, leading to impressive performance in various tasks. However, spiking neural networks trained with backpropagation typically approximate actual labels using the average output, often necessitating a larger simulation timestep to enhance the network's performance. This delay constraint poses a challenge to the further advancement of SNNs. Current training algorithms tend to overlook the differences in output distribution at various timesteps. Particularly for neuromorphic datasets, inputs at different timesteps can cause inconsistencies in output distribution, leading to a significant deviation from the optimal direction when combining optimization directions from different moments. To tackle this issue, we have designed a method to enhance the temporal consistency of outputs at different timesteps. We have conducted experiments on static datasets such as CIFAR10, CIFAR100, and ImageNet. The results demonstrate that our algorithm can achieve comparable performance to other optimal SNN algorithms. Notably, our algorithm has achieved state-of-the-art performance on neuromorphic datasets DVS-CIFAR10 and N-Caltech101, and can achieve superior performance in the test phase with timestep T=1.
## 1 Introduction
Spiking neural networks (SNNs), inspired by biological neural systems, exhibit unique advantages in processing spatiotemporal data. SNNs represent and transmit information through sparse, discrete spike sequences, which not only improves energy efficiency but also allows for better integration with neuromorphic chips [1], attracting the attention of researchers from various fields [2]. However, this binary, brain-like mode of information transmission is non-differentiable, which makes it challenging to directly apply the backpropagation algorithm to SNNs and poses a significant obstacle to training deep SNNs.
Some researchers have attempted to incorporate brain-inspired learning principles into the modeling process of spiking neural networks, such as spike-timing-dependent plasticity (STDP) [3] and short-term plasticity [4]. Although these methods have addressed the training issues of SNNs to some extent, they still have limitations in dealing with more complex structures and tasks. Currently, there are two main strategies for achieving high-performance deep SNNs: one is the conversion-based approach [5; 6; 7; 8], and the other is the backpropagation (BP)-based approach [9; 10; 11; 12]. The conversion-based method uses the activations of a pre-trained artificial neural network (ANN) as the firing rates of the SNN, allowing for a nearly lossless conversion from an ANN to an SNN. However, since this method requires firing rates to accurately simulate activation values in ANNs, it necessitates longer simulation steps and cannot fully exploit the spatiotemporal information processing capabilities of SNNs. On the other hand, the backpropagation-based training algorithm uses surrogate functions to approximate the gradients of the spike firing function, making it possible to train deep SNNs. Nevertheless, this approach also has limitations, such as gradient vanishing or exploding when dealing with more complex structures.
In contrast to ANNs, which output results directly, SNNs need to accumulate outputs over multiple timesteps to make accurate judgments. However, this requires SNNs to use longer simulation steps to model more precise information, demanding greater computational resources when deployed on edge devices. The latency caused by long simulation lengths has also become an important factor hindering the development of SNNs. When SNNs receive inputs at different timesteps, especially for neuromorphic data with rich spatiotemporal characteristics, different output patterns are generated at different timesteps. Currently, the training of spiking neural networks based on the backpropagation algorithm mainly optimizes the difference between the average membrane potential or spike firing rate of the output layer and the true labels, and then uses the output at different timesteps to guide network training in the backward pass, which does not consider the difference in output distribution at different timesteps. During training, the combined optimization direction of inconsistent outputs at different moments conflicts with the optimization direction of the average output value, preventing the network from being effectively optimized. During testing, the variance in the distribution at different moments can result in the network's test results being offset by incorrect inference results at certain moments, thereby reducing overall performance. Therefore, this paper proposes a new method to enhance the temporal consistency (ETC) at different timesteps during the training process, making the training of spiking neural networks more stable. We verify the method on multiple datasets, and the results show that it achieves outstanding performance on all of them. The main contributions of this paper are as follows:
* Through theoretical analysis, this paper reveals the limitations of existing backpropagation methods in the training of SNNs, especially in dealing with the differences in output distribution between different moments.
* We propose a novel method that enhances the temporal consistency across different moments, which improves the stability of SNN training and significantly reduces the required timesteps.
* To validate the superiority of our proposed method, we have conducted experiments on the static datasets CIFAR10, CIFAR100, and ImageNet and the neuromorphic datasets DVS-CIFAR10 and N-Caltech101. The results show that our algorithm achieves the best performance on neuromorphic datasets and delivers competitive performance compared to other state-of-the-art algorithms on static datasets.
## 2 Related Work
Researchers have mainly improved BP-based SNNs along the following two directions. First, they try to enhance the information transmission capability of spiking neural networks. A series of studies [13; 14; 15; 16; 17] focused on introducing learnable parameters, such as membrane potential constants and thresholds, into spiking neurons to construct different types of spiking neurons. This approach enhances the adaptability of neural networks to various information types. NeuNorm [18], tdBN [19], and TEBN [12] incorporated various normalization techniques to enhance information transmission in deep spiking neural networks. These approaches effectively mitigate gradient vanishing and explosion issues, improving network performance and stability. In the works of [20; 21; 22], various attention mechanisms were introduced across different dimensions to guide SNNs in transmitting more task-relevant information, thereby enhancing their overall performance and adaptability.
Another group of researchers attempted to improve the structure of SNNs. Drawing inspiration from brain-inspired structures, LISNN [23] introduced lateral connections, while BackEISNN [24]
incorporated self-feedback connections to enhance SNNs' information processing capabilities. Furthermore, SewResNet [25] designed a residual module better suited to SNNs, providing a simple method for training deep SNNs. Spikformer [26] combined self-attention with SNNs, constructing a high-performance SNN structure. AutoSNN [27] and NASSNN [28] employed neural architecture search to design more optimal structures for SNNs.
However, the works above all used the average membrane potential or spike firing rate at the output layer for prediction without considering the impact of the output distribution of SNNs at different timesteps on performance. Reducing the differences in SNN output distribution at various timesteps is crucial for constructing a stable and high-performance SNNs.
## 3 Method
In this section, we first introduce the spiking neurons used and theoretically analyze the issue of inconsistent output distributions at different timesteps. Finally, we introduce the enhancing temporal consistency (ETC) constraint to align the output distributions at different timesteps.
### Spiking Neuron Model
The leaky integrate-and-fire (LIF) neuron, as the most commonly used neuron model in deep spiking neural networks, describes the complex dynamics of biological neurons with a relatively simple differential equation as shown in Eq. 1:
\[\begin{split}\tau_{m}\frac{dV}{dt}&=-V+RI\quad V \leq V_{th}\\ S&=H(V-V_{th})\end{split} \tag{1}\]
\(V\) represents the membrane potential, \(\tau_{m}=RC\) is the membrane potential time constant, \(I=\sum_{j}W_{ij}S_{j}\) denotes the input current obtained by aggregating presynaptic spikes, \(W\) denotes the connection weight from pre-synaptic to post-synaptic neurons, and \(H\) refers to the step function for spike emission. When the membrane potential surpasses the threshold \(V_{th}\), the neuron emits a spike \(S\) and resets to the resting potential \(V_{r}\). In this study, we set \(V_{th}\) to 0.5, \(\tau_{m}\) to 2, \(R\) to 1, and the resting potential \(V_{r}\) to 0. By using the first-order Euler method, we obtain the discretized representation of the above differential equation as shown in Eq. 2:
\[V_{t+1}=(1-\frac{1}{\tau_{m}})V_{t}+\frac{1}{\tau_{m}}I_{t} \tag{2}\]
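For concreteness, a minimal PyTorch sketch of this discrete update with the constants chosen above (\(\tau_{m}=2\), \(V_{th}=0.5\), \(V_{r}=0\)) reads as follows; this is our own illustration rather than the paper's code:

```python
import torch

def lif_step(v, x, tau_m=2.0, v_th=0.5, v_reset=0.0):
    """One discrete LIF update (Eq. 2) followed by firing and hard reset."""
    v = (1.0 - 1.0 / tau_m) * v + (1.0 / tau_m) * x   # leaky integration of input current
    spike = (v >= v_th).to(v.dtype)                    # Heaviside step H(V - V_th)
    v = torch.where(spike > 0, torch.full_like(v, v_reset), v)
    return spike, v
```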
Figure 1: The whole training pipeline of our model. SNNs receive input data at various timesteps and generate corresponding outputs at each timestep. The ETC constraint helps ensure output distribution consistency across different timesteps.
In order to use the backpropagation algorithm for network training, we employ surrogate gradients to approximate the gradients of the spike firing function, as follows:
\[\frac{\partial H}{\partial V_{t}}=\left\{\begin{aligned} 0,& |V_{t}-V_{th}|>\frac{1}{a}\\ -a^{2}|V_{t}-V_{th}|+a,&|V_{t}-V_{th}|\leq\frac{1}{a} \end{aligned}\right. \tag{3}\]
\(a\) is a hyperparameter used to control the shape of the surrogate gradient. In this study, we set \(a\) to 2.
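In PyTorch, this surrogate can be realized as a custom autograd function that applies the exact Heaviside step in the forward pass and the triangular gradient of Eq. 3 in the backward pass. The sketch below is our own illustration, not the authors' implementation:

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside forward pass with the triangular surrogate gradient of Eq. 3."""

    @staticmethod
    def forward(ctx, v, v_th=0.5, a=2.0):
        ctx.save_for_backward(v)
        ctx.v_th, ctx.a = v_th, a
        return (v >= v_th).to(v.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        a, v_th = ctx.a, ctx.v_th
        # -a^2 |V - V_th| + a inside the window |V - V_th| <= 1/a, zero outside
        surrogate = torch.clamp(a - a ** 2 * (v - v_th).abs(), min=0.0)
        return grad_out * surrogate, None, None

spike = SpikeFn.apply  # usage: s = spike(v)
```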
### Comparing the Training and Inference Procedures of SNNs and ANNs
Unlike traditional artificial neural networks, SNNs adjust their weights during training based on outputs at multiple timesteps. During the inference of the network, the final output is calculated by weighting the outputs at various timesteps. For deep SNNs, the most widely used approach is to employ the average membrane potential of the neurons in the last layer as the final output, resulting in more efficient and accurate inference.
We represent the average membrane potential of the last layer as \(O_{\text{mean}}=\frac{1}{T}\sum_{t=1}^{T}V_{t}\), where T is the total number of timesteps and \(V_{t}\) is the membrane potential at timestep \(t\). The softmax function is applied to convert the average membrane potential into output probabilities \(P_{\text{mean}}=\text{softmax}(O_{\text{mean}})\). During the network training process, we use the cross-entropy loss to minimize the difference between the actual outputs \(P_{\text{mean}}\) and expected outputs \(y\) as shown in Eq. 4.
\[L_{CE}=-\sum_{i=1}^{C}y_{i}\cdot\log(P_{\text{mean}}^{i}) \tag{4}\]
C is the number of categories. By applying the chain rule, we can compute the gradient of the loss function concerning the network parameters \(W\).
\[\frac{\partial L_{CE}}{\partial W}=\sum_{t=1}^{T}\frac{\partial L_{CE}}{\partial O_{\text{mean}}}\frac{\partial O_{\text{mean}}}{\partial V_{t}}\frac{\partial V_{t}}{\partial W}=\frac{1}{T}\sum_{t=1}^{T}[P_{\text{mean}}-y]\frac{\partial V_{t}}{\partial W} \tag{5}\]
For ANNs, we use \(O\) to denote the final output and \(P\) to represent the probability distribution after the softmax operation. Using the same loss function, we can also obtain the gradient of the loss concerning the network parameters:
\[\frac{\partial L_{CE}}{\partial W}=\frac{\partial L_{CE}}{\partial O}\frac{\partial O}{\partial W}=[P-y]\frac{\partial O}{\partial W} \tag{6}\]
As shown in Eq. 5 and Eq. 6, the final output determines the partial derivative of the weight. In minimizing the loss, the output \(O\) of the ANN will gradually approach the actual label. At this time, using the partial derivative of \(O\) concerning W will more accurately control the optimization direction of the ANN. However, the optimization direction of the weights in SNN depends on the direction
Figure 2: The training and inference procedure compared our method with the traditional methods. For traditional methods, consistency in the distribution across different timesteps can cause discrepancies in the optimization direction and lead to misjudgments during inference.
of the partial derivative of the output \(V_{t}\) concerning the weight at each moment. SNN optimizes the distance between the average membrane potential and the actual label. As shown in Fig. 2, when the optimization directions at different moments are inconsistent or even significantly different, there will be a severe mismatch between \(\sum_{t=1}^{T}\frac{\partial V_{t}}{\partial W}\) and \(\frac{\partial\sum_{t=1}^{T}V_{t}}{\partial W}\), which significantly interferes with the optimization direction of the spiking neural network. Simultaneously, as shown on the right side of Fig. 2, during the inference phase of the network, inconsistencies in the distribution at different timesteps can lead to overall network results being skewed by erroneous results, thereby leading to a decline in performance.
### Reducing Temporal Diversity Procedure
As discussed above, the performance degradation of SNNs is due to the mismatch of output distributions at different timesteps. In order to enhance the consistency between different timesteps, we propose an enhancing temporal consistency constraint, aiming to make the distributions at each timestep as similar as possible. First, we define the output probability distribution at each timestep \(P_{t}^{i}\) as shown in Eq. 7:
\[P_{t}^{i}(V_{t};\tau)=\text{softmax}(V_{t}^{i};\tau)=\frac{\exp(V_{t}^{i}/\tau)}{\sum_{j}\exp(V_{t}^{j}/\tau)} \tag{7}\]
The temperature parameter \(\tau\) controls the smoothness of the model's output distribution, which is more conducive to learning the relationships between different categories [29]. Here we set \(\tau=4\). After obtaining the output distributions at different timesteps, we aim to minimize the distribution gap between the output \(P_{t}\) at time \(t\) and the outputs at the other timesteps; for this we use the Kullback-Leibler (KL) divergence. The loss function is shown in Eq. 8.
\[\begin{split} L_{ETC}^{t}&=\frac{1}{T-1}\sum_{m=1,m\neq t}^{T}KL(P_{m}||P_{t})=\frac{1}{T-1}\sum_{m=1,m\neq t}^{T}\sum_{i=1}^{C}P_{m}^{i}\log\frac{P_{m}^{i}}{P_{t}^{i}}\\ &=\frac{1}{T-1}\sum_{m=1,m\neq t}^{T}\sum_{i=1}^{C}\left(P_{m}^{i}\log P_{m}^{i}-P_{m}^{i}\log P_{t}^{i}\right)\end{split} \tag{8}\]
To avoid model collapse, we do not propagate the gradient through \(P_{m}\). As a result, the final loss can be represented as shown in Eq. 9:
\[L_{ETC}^{t}=-\frac{1}{T-1}\sum_{m=1,m\neq t}^{T}\sum_{i=1}^{C}P_{m}^{i}\log P_{t}^{i}\Rightarrow L_{ETC}=-\frac{1}{T}\frac{1}{T-1}\sum_{t=1}^{T}\sum_{m=1,m\neq t}^{T}\sum_{i=1}^{C}P_{m}^{i}\log P_{t}^{i} \tag{9}\]
Thus, the final loss function can be written as a weighted combination of the cross-entropy loss and the ETC loss, as shown in Eq. 10.
\[L_{all}=L_{CE}+\lambda\tau^{2}L_{ETC} \tag{10}\]
\(\lambda\) is a weighting term that controls the influence of the ETC loss on the overall loss. Here we set \(\lambda=1\). We can obtain the partial derivative of the total loss concerning the weights, \(\frac{\partial L_{all}}{\partial W}\):
\[\begin{split}\frac{\partial L_{all}}{\partial W}&= \frac{\partial L_{CE}}{\partial W}+\lambda\tau^{2}\frac{\partial L_{ETC}}{ \partial W}\\ &=\frac{1}{T}\sum_{t=1}^{T}[P_{\text{mean}}-y]\frac{\partial V_{ t}}{\partial W}+\lambda\tau^{2}\frac{1}{T}\frac{1}{T-1}\sum_{t=1}^{T}\sum_{m=1,m \neq t}^{T}(P_{t}-P_{m})\frac{\partial V_{t}}{\partial W}\end{split} \tag{11}\]
As shown in Eq. 11, the first term ensures that the average output is close to the actual target, while the second term ensures the consistency of the output distribution at each timestep. Combining these two loss terms can make the output at each moment as accurate as possible, thus better guiding the optimization direction of the output for the weights at each moment. By using the output of different timesteps to approximate each other and correcting erroneous prediction moments, this combination of loss functions can also prevent overconfident predictions, thereby further improving the model's
generalization ability on the test dataset. Moreover, this constraint can be considered a self-distillation process of the model. It uses the dark knowledge of the model's output at other timesteps to provide soft labels for each timestep, optimizing the output at each timestep and fully utilizing the temporal information of SNNs.
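To make the combined objective concrete, the following PyTorch sketch computes the ETC term of Eq. 9 with detached targets and adds it to the cross-entropy on the mean membrane potential as in Eq. 10. The tensor shapes and names are our own assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def etc_loss(v_seq, tau=4.0):
    """ETC term of Eq. 9 for output membrane potentials of shape [T, batch, classes]."""
    T = v_seq.shape[0]
    log_p = F.log_softmax(v_seq / tau, dim=-1)   # log P_t^i with temperature tau
    p = log_p.exp().detach()                     # P_m^i; no gradient through P_m
    loss = v_seq.new_zeros(())
    for t in range(T):
        for m in range(T):
            if m != t:                           # cross-entropy of P_m against P_t
                loss = loss - (p[m] * log_p[t]).sum(-1).mean()
    return loss / (T * (T - 1))

def total_loss(v_seq, target, lam=1.0, tau=4.0):
    """Eq. 10: cross-entropy on the mean membrane potential plus the scaled ETC loss."""
    ce = F.cross_entropy(v_seq.mean(dim=0), target)
    return ce + lam * tau ** 2 * etc_loss(v_seq, tau)
```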
## 4 Experiments
In order to demonstrate the superiority of our algorithm, we conduct experiments on multiple datasets, including static datasets such as CIFAR-10 [30], CIFAR-100 [31], and ImageNet [32], as well as neuromorphic datasets like DVS-CIFAR10 [33] and N-Caltech101 [34]. We develop the SNN code based on the open-source framework BrainCog and train on an NVIDIA A100 graphics processing unit (GPU), employing the AdamW optimizer with a weight decay of 0.0001. The learning rate is set to 0.001 with a cosine annealing strategy. The batch size is set to 128. In our experiments, we set the total number of training epochs to 600. Each experiment is repeated five times with different random seeds, and we report the mean and standard deviation of the corresponding performance.
### Results on Static Datasets
As shown in Tab. 1, we conduct experiments on static datasets based on SEW-ResNet and compare with other state-of-the-art algorithms. We use the SEW-ResNet18 structure and perform experiments on the CIFAR10 and CIFAR100 datasets with simulation step lengths of 2, 4, 6, and 8. On the CIFAR10 dataset, our model achieves an accuracy of 95.73% at a simulation step length of 6, surpassing all other algorithms and achieving the current best performance. For the CIFAR100 dataset, even with a relatively small network, we achieve performance comparable to the current best algorithm, only 0.4% lower than the performance of TEBN. For the more complex ImageNet dataset, we conduct experiments based on SEW-ResNet-18 and SEW-ResNet-34. The results show that, compared with the GLIF algorithm, the performance of our model improves by 0.6%. Moreover, compared with the original SEW algorithm, our performance improves by about 2%. This fully demonstrates the superiority of our algorithm.
### Results on Neuromorphic Datasets
Compared to static datasets, neuromorphic datasets reveal richer spatiotemporal features, thereby better highlighting the advantages of spiking neural networks. The DVS-CIFAR10 and N-Caltech101 datasets convert original image information into event information through dynamic vision sensors. In our study, we first resize the input samples to a fixed size of 48\(\times\)48 and adopt the VGGSNN structure used in TET. As shown in Tab. 2, our algorithm achieves the best performance on both datasets. On the DVS-CIFAR10 dataset, our performance improves by 2% compared to TET and by 0.8% compared to TEBN. On the N-Caltech101 dataset, we achieve an accuracy of 85.53%, a 1.4% improvement compared to TKS. Meanwhile, our performance significantly surpasses that of the EventMix and NDA algorithms, which use data augmentation techniques. Because inputs at different times vary in neuromorphic datasets, the output distribution also changes over time. In such cases, introducing the method we propose in this paper can better standardize the membrane potential distribution at different timesteps, thereby significantly enhancing the network's performance.
### Ablation Studies
To verify the effectiveness of our algorithm, we conduct ablation experiments on the DVS-CIFAR10 and N-Caltech101 datasets. As shown in Tab. 3, without adding the ETC module, the accuracy of the N-Caltech101 dataset is only 78.28%. However, after adding the ETC module with \(\lambda=1\) and \(\tau=4\), the network performance improves to 83.3%, an increase of 5%. Under the same settings, the performance of the DVS-CIFAR10 dataset is also improved by 2.5%. It is worth noting that the hyperparameter \(\tau\) controls the smoothness of the output at different moments. If \(\tau\) is too large, the output will be too smooth to show significant differences. Conversely, if \(\tau\) is too small, the output will be overly confident and unable to reflect the relationships between different categories. Furthermore, the \(\lambda\) parameter controls the influence of the ETC module on the final loss. If \(\lambda\) is too large or too small, it cannot effectively guide the loss function to update the weights. To better illustrate the impact of these hyperparameters, we test the performance of the ETC algorithm under different hyperparameter settings.
### Temporal Consistency Verification
In addition to performance, latency is a crucial factor that constrains the development of SNNs. When the output distribution at each moment becomes more consistent, we can achieve higher accuracy in the testing phase using only a shorter simulation length. We have validated the results on the DVS-CIFAR10 and N-Caltech101 datasets. During the training phase, we use the simulation timestep T=10, while in the testing phase, we conduct separate experiments for different timesteps. As shown in Fig. 3, with a simulation step size of 1, the accuracy of the conventional algorithm is only 43.3% for the DVS-CIFAR10 dataset. In contrast, our ETC algorithm achieves an accuracy of 63.1%, an improvement of about 20%. At a step size of 5, the ETC algorithm already surpasses the performance of the traditional algorithm at a step size of 10, significantly reducing the network's latency. For the N-Caltech101 dataset, when the step size is 1, our algorithm has improved by 16% compared to the
\begin{table}
\begin{tabular}{c c c c c} \hline
**Dataset** & **Model** & **Architecture** & **Simulation Step** & **Accuracy** \\ \hline \multirow{10}{*}{CIFAR10} & Opt [35] & ResNet-18 & 4 & 90.43 \\ & CSTDB [36] & ResNet-20 & 250 & 92.22 \\ & Diet-SNN [37] & ResNet-20 & 10 & 92.54 \\ & NeuNorm [18] & CIFARNet & 12 & 90.53 \\ & TSSL-BP [38] & CIFARNet & 5 & 91.41 \\ & BPSTA [39] & 7-layer-CNN & 8 & 92.15 \\ & NASNN [28] & NAS & 5 & 92.73 \\ & AutoSNN [27] & NAS & 16 & 93.15 \\ & tdBN [19] & ResNet-19 & 6 & 93.16 \\ & PLIF [13] & PLIFNet & 8 & 93.5 \\ & TET [11] & ResNet-19 & 6 & 94.50 \\ & GLIF [15] & ResNet-19 & 6 & 95.03 \\ & TKS [40] & ResNet-19 & 4 & 95.3 \\ & Rec-Dis [41] & ResNet-19 & 6 & 95.55 \\ & TEBN [12] & ResNet-19 & 6 & 95.60 \\ \cline{2-5} & \multirow{4}{*}{**Our Method**} & SEW-ResNet-18 & 2 & 94.65 \(\pm\) 0.08 \\ & & SEW-ResNet-18 & 4 & 95.4 \(\pm\) 0.07 \\ & & SEW-ResNet-18 & 6 & 95.73 \(\pm\) 0.02 \\ & & SEW-ResNet-18 & 8 & 95.84 \(\pm\) 0.03 \\ \hline \multirow{10}{*}{CIFAR100} & Diet-SNN[37] & ResNet-20 & 5 & 64.07 \\ & BPSTA [39] & ResNet34 & 8 & 69.32 \\ & AutoSNN [27] & NAS & 16 & 69.16 \\ & NASNN [28] & NAS & 5 & 73.04 \\ & TET [11] & ResNet-19 & 6 & 74.72 \\ & Rec-Dis [41] & ResNet-19 & 4 & 74.10 \\ & TKS [40] & ResNet-19 & 4 & 76.2 \\ & GLIF [15] & ResNet-19 & 6 & 77.35 \\ & TEBN [12] & ResNet-19 & 6 & 78.76 \\ \cline{2-5} & \multirow{4}{*}{**Our Method**} & SEW-ResNet-18 & 2 & 75.96 \(\pm\) 0.24 \\ & & SEW-ResNet-18 & 4 & 77.65 \(\pm\) 0.13 \\ & & SEW-ResNet-18 & 6 & 78.25 \(\pm\) 0.11 \\ & & SEW-ResNet-18 & 8 & 78.32 \(\pm\) 0.07 \\ \hline \multirow{10}{*}{ImageNet} & tdBN [19] & Spiking-ResNet-34 & 6 & 63.72 \\ & SEW [25] & SEW-ResNet-34 & 4 & 67.04 \\ \cline{1-1} & Rec-Dis [41] & ResNet-34 & 6 & 67.33 \\ \cline{1-1} & TET [11] & SEW-ResNet-34 & 4 & 68.00 \\ \cline{1-1} & TEBN [12] & SEW-ResNet-34 & 4 & 68.28 \\ \cline{1-1} & GLIF [15] & ResNet-34 & 6 & 69.09 \\ \cline{1-1} & TKS [40] & SEW-ResNet-34 & 4 & 69.6 \\ \cline{1-1} \cline{2-5} & \multirow{4}{*}{**Our Method**} & SEW-ResNet-18 & 4 & 63.70 \\ \cline{1-1} & SEW-ResNet-34 & 4 & 68.54 \\ \cline{1-1} & SEW-ResNet-34 & 6 & 69.64 \\ \hline \end{tabular}
\end{table}
Table 1: Compare with existing works on static image datasets.
baseline. At a step size of 5, the accuracy of the ETC algorithm has reached 80.57%, which is 2% higher than the baseline at a step size of 10.
Meanwhile, we visualize the outputs at different timesteps in the final layer. As shown in Fig. 4, we visualize the distribution of the outputs at different timesteps for samples in the DVS-CIFAR10 dataset. It can be observed that the output distributions of the traditional algorithm vary significantly at different timesteps. For instance, the sample's output distribution is accurate in the early moments, but from the sixth moment onwards, the distribution begins to fluctuate. This led to a sample that should have been categorized as 8 being incorrectly predicted as 6, 8, 1, 8, 6. In contrast, the output distribution of the ETC algorithm is much more consistent. By enhancing the consistency of the temporal distribution, the ETC algorithm greatly improves the accuracy of the overall distribution of outputs at each moment. This allows us to achieve high-precision predictions in the testing phase with fewer simulation steps. This characteristic greatly facilitates the deployment of SNNs on various edge devices.
## 5 Conclusion
Unlike artificial neural networks, spiking neural networks can receive and generate inputs and outputs at multiple moments. However, the inconsistency of output distributions at different moments often requires longer simulation steps to produce stable and precise outputs. This seriously affects the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Dataset** & **Model** & **Architecture** & **Simulation Step** & **Accuracy** \\ \hline \multirow{8}{*}{DVS-CIFAR10} & tdBN [19] & ResNet-19 & 10 & 67.8 \\ & LIAF [42] & LIAF-Net & 10 & 71.70 \\ & IAF [42] & LIAF-Net & 10 & 70.40 \\ & AutoSNN [27] & NAS & 16 & 72.50 \\ & BPSTA [39] & 5-layer-CNN & 16 & 78.95 \\ & Rec-Dis [41] & ResNet-19 & 10 & 72.42 \\ & TET [11] & VGGSNN & 10 & 83.17 \\ & TEBN [12] & VGGSNN & 10 & 84.90 \\ & TKS [40] & VGGSNN\({}^{*}\) & 10 & 85.3 \\ \cline{2-5} & **Our Method** & VGGSNN & 10 & 85.35 \(\pm\) 0.40 \\ \hline \multirow{8}{*}{N-Caltech101} & ConvertSNN [43] & VGG11 & 20 & 55.0 \\ & Dart [44] & N/A & N/A & 66.8 \\ \cline{1-1} & TCJA [21] & TCJAnet & 14 & 78.5 \\ \cline{1-1} & NDA [45] & ResNet-19 & 10 & 78.6 \\ \cline{1-1} & EventMix [46] & ResNet-18 & 10 & 79.5 \\ \cline{1-1} & TKS [40] & VGGSNN\({}^{*}\) & 10 & 84.1 \\ \cline{1-1} \cline{2-5} & **Our Method** & VGGSNN & 10 & 83.33 \(\pm\) 0.41 \\ \cline{1-1} \cline{2-5} & **Our Method** & VGGSNN\({}^{*}\) & 10 & 85.53 \(\pm\) 0.09 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Compare with existing works on neuromorphic datasets. \({}^{*}\) indicates using a model with tdBN [19].
\begin{table}
\begin{tabular}{c|cccc|cccc} \hline \hline & \multicolumn{4}{c|}{DVS-CIFAR10} & \multicolumn{4}{c}{N-Caltech101} \\ \hline \(\lambda=0\) & \multicolumn{4}{c|}{82.9} & \multicolumn{4}{c}{78.28} \\ \hline \hline \(\tau\,\backslash\,\lambda\) & 0.1 & 1 & 2 & 8 & 0.1 & 1 & 2 & 8 \\ \hline
1 & 82.20 & 85.10 & 84.50 & 84.50 & 80.11 & 80.92 & 80.00 & 80.69 \\
2 & 85.70 & 85.90 & 84.60 & 84.40 & 81.84 & 82.13 & 80.57 & 80.00 \\
4 & 85.30 & 85.40 & 85.60 & 85.30 & 81.72 & 83.33 & 81.72 & 81.60 \\
8 & 85.60 & 85.00 & 85.10 & 84.40 & 81.95 & 82.03 & 80.92 & 80.34 \\
16 & 85.40 & 85.00 & 85.30 & 84.60 & 81.38 & 81.95 & 81.95 & 81.03 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation analysis of the ETC loss on the DVS-CIFAR10 and N-Caltech101 datasets, together with a sensitivity analysis of the hyperparameters
performance of spiking neural networks and increases their latency. In this study, we propose a strategy to enhance temporal consistency, aiming to reduce the inconsistency in the output distributions at different timesteps. The approach significantly improves the performance of spiking neural networks on multiple datasets and effectively reduces latency; we can achieve high accuracy in the testing phase with only a few simulation steps. In biological neurons, there exist efficient coding methods that adaptively encode diverse inputs, along with decoding methods that accurately interpret the output information from different moments [47; 48; 49]. In the future, we can take more inspiration from these biologically plausible encoding and decoding methods to process information more effectively.
Figure 4: The comparison between the base method and our method on the output distribution at different timesteps for a sample in DVS-CIFAR10
Figure 3: Test accuracy at different timesteps on DVS-CIFAR10 and N-Caltech101; the model is trained with timestep 10.
## Acknowledgement
This work was supported by the National Key Research and Development Program (Grant No. 2020AAA0104305), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100).
|
2306.09844 | Wasserstein distributional robustness of neural networks | Deep neural networks are known to be vulnerable to adversarial attacks (AA).
For an image recognition task, this means that a small perturbation of the
original can result in the image being misclassified. Design of such attacks as
well as methods of adversarial training against them are subject of intense
research. We re-cast the problem using techniques of Wasserstein
distributionally robust optimization (DRO) and obtain novel contributions
leveraging recent insights from DRO sensitivity analysis. We consider a set of
distributional threat models. Unlike the traditional pointwise attacks, which
assume a uniform bound on perturbation of each input data point, distributional
threat models allow attackers to perturb inputs in a non-uniform way. We link
these more general attacks with questions of out-of-sample performance and
Knightian uncertainty. To evaluate the distributional robustness of neural
networks, we propose a first-order AA algorithm and its multi-step version. Our
attack algorithms include Fast Gradient Sign Method (FGSM) and Projected
Gradient Descent (PGD) as special cases. Furthermore, we provide a new
asymptotic estimate of the adversarial accuracy against distributional threat
models. The bound is fast to compute and first-order accurate, offering new
insights even for the pointwise AA. It also naturally yields out-of-sample
performance guarantees. We conduct numerical experiments on the CIFAR-10
dataset using DNNs on RobustBench to illustrate our theoretical results. Our
code is available at https://github.com/JanObloj/W-DRO-Adversarial-Methods. | Xingjian Bai, Guangyi He, Yifan Jiang, Jan Obloj | 2023-06-16T13:41:24Z | http://arxiv.org/abs/2306.09844v1 | # Wasserstein distributional robustness of neural networks
###### Abstract
Deep neural networks are known to be vulnerable to adversarial attacks (AA). For an image recognition task, this means that a small perturbation of the original can result in the image being misclassified. Design of such attacks as well as methods of adversarial training against them are subject of intense research. We re-cast the problem using techniques of Wasserstein distributionally robust optimization (DRO) and obtain novel contributions leveraging recent insights from DRO sensitivity analysis. We consider a set of distributional threat models. Unlike the traditional pointwise attacks, which assume a uniform bound on perturbation of each input data point, distributional threat models allow attackers to perturb inputs in a non-uniform way. We link these more general attacks with questions of out-of-sample performance and Knightian uncertainty. To evaluate the distributional robustness of neural networks, we propose a first-order AA algorithm and its multi-step version. Our attack algorithms include Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) as special cases. Furthermore, we provide a new asymptotic estimate of the adversarial accuracy against distributional threat models. The bound is fast to compute and first-order accurate, offering new insights even for the pointwise AA. It also naturally yields out-of-sample performance guarantees. We conduct numerical experiments on the CIFAR-10 dataset using DNNs on RobustBench to illustrate our theoretical results. Our code is available at [https://github.com/JanObloj/W-DRO-Adversarial-Methods](https://github.com/JanObloj/W-DRO-Adversarial-Methods).
## 1 Introduction
Model uncertainty is an ubiquitous phenomenon across different fields of science. In decision theory and economics, it is often referred to as the _Knightian uncertainty_(Knight, 1921), or the _unknowns_, to distinguish it from the _risk_ which stems from the randomness embedded by design in the scientific process, see Hansen and Marinacci (2016) for an overview. Transcribing to the context of data science, risk refers to the randomness embedded in a training by design, e.g., through random initialization, drop-outs etc., and uncertainty encompasses the extent to which the dataset is an adequate description of reality. _Robustness_, the ability to perform well under uncertainty, thus relates to several themes in ML including adversarial attacks, out-of-sample performance and
out-of-distribution performance. In this work, we mainly focus on the former but offer a unified perspective on robustness in all of its facets.
Vulnerability of DNNs to crafted adversarial attacks (AA), diagnosed in Biggio et al. (2013), Goodfellow et al. (2015), relates to the ability of an attacker to manipulate network's outputs by changing the input images only slightly - often in ways imperceptible to a human eye. As such, AA are of key importance for security-sensitive applications and an active field of research. Most works so far have focused on attacks under _pointwise_\(l_{p}\)-bounded image distortions but a growing stream of research, pioneered by Staib and Jegelka (2017) and Sinha et al. (2018), frames the problem using Wasserstein distributionally robust optimization (DRO). We offer novel contributions to this literature.
Our key contributions can be summarized as follows. **1)** We propose a unified approach to adversarial attacks and training based on sensitivity analysis for Wasserstein DRO. We believe this approach, leveraging results from Bartl et al. (2021), is better suited for gradient-based optimization methods than duality approach adopted in most of the works to date. We further link the adversarial accuracy to the adversarial loss, and investigate the out-of-sample performance. **2)** We derive a general adversarial attack method. As a special case, this recovers the classical FGSM attack lending it a further theoretical underpinning. However, our method also allows to carry out attacks under a _distributional threat model_ which, we believe, has not been done before. **3)** We develop certified bounds on adversarial accuracy, applicable to a general threat, including the classical pointwise perturbations. The bounds are first-order accurate and much faster to compute than, e.g., the AutoAttack (Croce and Hein, 2020) benchmark. The performance of our methods is documented using CIFAR-10 dataset (Krizhevsky, 2009) and neural networks from RobustBench (Croce et al., 2021).
## 2 Related Work
Adversarial Attack (AA).Original research focused on the _pointwise_\(l_{p}\)-bounded image distortion. Numerous attack methods under this threat model have been proposed in the literature, including Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), Projected Gradient Descent (PGD) (Madry et al., 2018), CW attack (Carlini and Wagner, 2017), etc. In these white-box attacks, the attacker has full knowledge of the neural network. There are also black-box attacks, such as Zeroth Order Optimization (ZOO) (Chen et al., 2017), Boundary Attack (Brendel et al., 2018), and Query-limited Attack (Ilyas et al., 2018). AutoAttack (Croce and Hein, 2020), an ensemble of white-box and black-box attacks, provides a useful benchmark for _pointwise_\(l_{p}\)-robustness of neural networks.
Adversarial Defense.Early works on data augmentation (Goodfellow et al., 2015; Madry et al., 2018; Tramer et al., 2018) make use of strong adversarial attacks to augment the training data with adversarial examples; more recent works (Gowal et al., 2021; Xing et al., 2022; Wang et al., 2023) focus on adding randomness to training data through generative models such as GANs and diffusion models. Zhang et al. (2019) consider the trade-off between robustness and accuracy of a neural network via TRADES, a regularized loss. Analogous research includes MART (Wang et al., 2020) and SCORE (Pang et al., 2022). Other loss regularization methods such as adversarial distributional training (Dong et al., 2020) and adversarial weight perturbation (Wu et al., 2020) have been shown to smooth the loss landscape and improve the robustness. In addition, various training techniques can be overlaid to improve robustness, including group-out layers, early stopping and parameter fine-tuning Sehwag et al. (2020). The closest to our setting are Sinha et al. (2018), Garcia Trillos and Garcia Trillos (2022) which employ Wasserstein penalization and constraint respectively.
Robust Performance Bounds.Each AA method gives a particular upper bound on the adversarial accuracy of the network. In contrast, research on _certified robustness_ aims at certifying images which are robust to all possible attacks allowed in the threat model, thus providing an attack-agnostic lower bound on the classification accuracy. To verify robustness of images, deterministic methods using off-the-shelf solvers (Tjeng et al., 2019), relaxed linear programming (Wong and Kolter, 2018; Weng et al., 2018) or semi-definite programming (Raghunathan et al., 2018; Dathathri et al., 2020) have been applied. Hein and Andriushchenko (2017) and Weng et al. (2018) derive Lipschitz-based metrics to characterize the maximum distortion an image can withstand; Cohen et al. (2019) construct a certifiable classifier by adding smooth noise to the original classifier; see Li et al. (2023) for a review.
Distributionally Robust Optimization (DRO).Mathematically, it is formulated as a min-max problem
\[\inf_{\theta\in\Theta}\sup_{Q\in\mathscr{P}}\mathbf{E}_{Q}[f_{\theta}(Z)], \tag{1}\]
where we minimize the worst-case loss over all possible distributions \(Q\in\mathscr{P}\). In financial economics, such criteria appear in the context of multi-prior preferences, see (Gilboa and Schmeidler, 1989; Follmer and Weber, 2015). We refer to (Rahimian and Mehrotra, 2019) for a survey of the DRO.
We focus on the Wasserstein ambiguity set \(\mathscr{P}=B_{\delta}(P)\), which is a ball centered at the reference distribution \(P\) with radius \(\delta\) under the Wasserstein distance. We refer to Gao and Kleywegt (2022) for a discussion of many advantages of this distance. In particular, measures close to each other can have different supports which is key in capturing data perturbations, see Sinha et al. (2018). Staib and Jegelka (2017) interpreted _pointwise_ adversarial training as a special case of Wasserstein DRO (W-DRO). More recently, Bui et al. (2022) unified various classical adversarial training methods, such as PGD-AT, TRADES, and MART, under the W-DRO framework.
W-DRO, while compelling theoretically, is often numerically intractable. In the literature, two lines of research have been proposed to tackle this problem. The duality approach rewrites (1) into a max-min problem where the inner minimization is a more tractable univariate problem. We refer to Mohajerin Esfahani and Kuhn (2018) for the data-driven case, Blanchet and Murthy (2019), Bartl et al. (2020), Gao and Kleywegt (2022) for general probability measures and Huang et al. (2022) for a further application with coresets. The second approach, which we adopt here, considers the first order approximation to the original DRO problem. This can be seen as computing the sensitivity of the value function with respect to the model uncertainty as derived in Bartl et al. (2021), see also Lam (2016), Garcia Trillos and Garcia Trillos (2022) for analogous results in different setups.
## 3 Preliminaries
Image Classification Task.An image is interpreted as a tuple \((x,y)\) where the feature vector \(x\in\mathcal{X}\) encodes the graphic information and \(y\in\mathcal{Y}=\{1,\ldots,m\}\) denotes the class, or tag, of the image. W.l.o.g., we take \(\mathcal{X}=[0,1]^{n}\). A distribution of labelled images corresponds to a probability measure \(P\)2 on \(\mathcal{X}\times\mathcal{Y}\). We are given the training set \(\mathcal{D}_{tr}\) and the test set \(\mathcal{D}_{tt}\), subsets of \(\mathcal{X}\times\mathcal{Y}\), i.i.d. sampled from \(P\). We denote \(\widehat{P}\) (resp. \(\widetilde{P}\)) the empirical measure of points in the training set (resp. test set), i.e., \(\widehat{P}=\frac{1}{|\mathcal{D}_{tr}|}\sum_{(x,y)\in\mathcal{D}_{tr}} \delta_{(x,y)}\). A neural network is a map \(f_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{m}\)
Footnote 2: In practice, \(P\) is not accessible and we use \(\widehat{P}\) or \(\widetilde{P}\) instead, e.g., in (2) we replace \(P\) with \(\widehat{P}\) and then compute the clean accuracy as \(\widehat{P}(S)\). In our experiments we make it clear which dataset is used.
\[f_{\theta}(x)=f^{l}\circ\cdots\circ f^{1}(x),\qquad\text{where }f^{i}(x)= \sigma(w^{i}x+b^{i}),\]
\(\sigma\) is a nonlinear activation function, and \(\theta=\{w^{i},b^{i}:1\leq i\leq l\}\) is the collection of parameters. We denote \(S\) the set of images equipped with their labels generated by \(f_{\theta}\), i.e.,
\[S=\Big{\{}(x,y)\in\mathcal{X}\times\mathcal{Y}:\arg\max_{1\leq i\leq m}f_{ \theta}(x)_{i}=\{y\}\Big{\}}.\]
The aim of image classification is to find a network \(f_{\theta}\) with high (clean) prediction accuracy \(A:=P(S)=\mathbf{E}_{P}[\mathbb{1}_{S}]\). To this end, \(f_{\theta}\) is trained by solving3 the stochastic optimization problem
Footnote 3: By convention, cross entropy is a function of two probability measures. In this case, we implicitly normalize the logit \(z\) by applying \(\,\mathrm{softmax}\), and we associate a class \(y\) with the Dirac measure \(\delta_{y}\).
\[\inf_{\theta\in\Theta}\mathbf{E}_{P}[L(f_{\theta}(x),y)], \tag{2}\]
where \(\Theta\) denotes the set of admissible parameters, and \(L\) is a (piecewise) smooth loss function, e.g., the cross entropy loss3 \(\mathrm{CE}:\mathbb{R}^{m}\times\mathcal{Y}\rightarrow\mathbb{R}\) given by
\[\mathrm{CE}(z,y)=-(\log\circ\,\mathrm{softmax}(z))_{y}. \tag{3}\]
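For concreteness, a minimal NumPy sketch of (3) with the usual max-shift for numerical stability (the helper name is ours):

```python
import numpy as np

def cross_entropy(z, y):
    """CE(z, y) = -(log softmax(z))_y for one logit vector z and class index y."""
    z = z - z.max()                            # stabilize the exponentials
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[y]

print(cross_entropy(np.array([2.0, 0.5, -1.0]), 0))  # ~0.241
```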
Wasserstein Distances.Throughout, \((p,q)\) is a pair of conjugate indices, \(1/p+1/q=1\), with \(1\leq p\leq\infty\). We consider a norm \(\|\cdot\|\) on \(\mathcal{X}\) and denote \(\|\cdot\|_{\bullet}\) its dual, \(\|\tilde{x}\|_{\bullet}=\sup\{\langle x,\tilde{x}\rangle:\|x\|\leq 1\}\). Our main interest is in \(\|\cdot\|=\|\cdot\|_{r}\) the \(l_{r}\)-norm for which \(\|\cdot\|_{\bullet}=\|\cdot\|_{s}\), where \((r,s)\) are conjugate
indices, \(1\leqslant r\leqslant\infty\). We consider adversarial attacks which perturb the image feature \(x\) but not its label \(y\). Accordingly, we define a pseudo distance4\(d\) on \(\mathcal{X}\times\mathcal{Y}\) as
Footnote 4: Our results can be adapted to regression tasks where the class label \(y\) is continuous and sensitive to the perturbation. In such a setting a different \(d\) would be appropriate.
\[d((x_{1},y_{1}),(x_{2},y_{2}))=\|x_{1}-x_{2}\|+\infty\mathbb{1}_{\{y_{1}\neq y _{2}\}}. \tag{4}\]
We denote \(\Pi(P,Q)\) the set of couplings between \((x,y)\) and \((x^{\prime},y^{\prime})\) whose first margin is \(P\) and second margin is \(Q\), and \(T_{\#}P:=P\circ T^{-1}\) denotes the pushforward measure of \(P\) under a map \(T\).
The \(p\)-Wasserstein distance, \(1\leqslant p<\infty\), between probability measures \(P\) and \(Q\) on \(\mathcal{X}\times\mathcal{Y}\) is
\[\mathcal{W}_{p}(P,Q):=\inf\left\{\mathbf{E}_{\pi}[d((x_{1},y_{1}),(x_{2},y_{2 }))^{p}]:\pi\in\Pi(P,Q)\right\}^{1/p}. \tag{5}\]
The \(\infty\)-Wasserstein distance \(\mathcal{W}_{\infty}\) is given by
\[\mathcal{W}_{\infty}(P,Q):=\inf\{\pi\text{--}\mathrm{ess}\;\sup d((x_{1},y_{1 }),(x_{2},y_{2})):\pi\in\Pi(P,Q)\}. \tag{6}\]
We denote the \(p\)-Wasserstein ball centered at \(P\) with radius \(\delta\) by \(B_{\delta}(P)\). We mainly consider the cases where \(p,r\in\{2,\infty\}\). Intuitively, we can view \(p\) as the index of image-wise flexibility and \(r\) as the index of pixel-wise flexibility. Unless \(p=1\) is explicitly allowed, \(p>1\) in what follows.
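As an illustrative sketch: for two uniform empirical measures of equal size, the infimum over couplings in (5) is attained at a permutation, so \(\mathcal{W}_{p}\) under the pseudo-distance (4) can be computed exactly as an assignment problem. The infinite label penalty is replaced by a large finite constant; the function below (our own naming) covers finite \(p\) only, while (6) would instead take a maximum over the matched pairs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(X1, Y1, X2, Y2, p=2.0, r=2, big=1e9):
    """W_p between two uniform empirical measures on labelled images (finite p).

    d((x1,y1),(x2,y2)) = ||x1 - x2||_r + inf * 1{y1 != y2}; the infinite label
    penalty is replaced by the large constant `big`.  For equal-size uniform
    empirical measures the infimum over couplings (5) is attained at an
    assignment (a permutation), so the Hungarian algorithm solves it exactly.
    """
    cost = np.linalg.norm(X1[:, None, :] - X2[None, :, :], ord=r, axis=-1)
    cost = (cost + big * (Y1[:, None] != Y2[None, :])) ** p
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean() ** (1.0 / p)
```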
## 4 Wasserstein Distributional Robustness: adversarial attacks and training
W-DRO Formulation.The Wasserstein DRO (W-DRO) formulation of a DNN training task is given by:
\[\inf_{\theta\in\Theta}\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[L(f_{\theta}(x),y)], \tag{7}\]
where \(B_{\delta}(P)\) is the \(p\)-Wasserstein ball centered at \(P\) and \(\delta\) denotes the budget of the adversarial attack. In practice, \(P\) is not accessible and is replaced with \(\widehat{P}\). When \(p=\infty\), the above adversarial loss coincides with the pointwise adversarial loss of Madry et al. (2018) given by
\[\inf_{\theta\in\Theta}\mathbf{E}_{P}[\sup\{L(f_{\theta}(x^{\prime}),y):\|x^{ \prime}-x\|\leqslant\delta\}].\]
Recently, Bui et al. (2022) considered a more general criterion they called _unified distributional robustness_. It can be re-cast equivalently as an _extended_ W-DRO formulation using couplings:
\[\inf_{\theta\in\Theta}\sup_{\pi\in\Pi_{\delta}(P,\cdot)}\mathbf{E}_{\pi}[J_{ \theta}(x,y,x^{\prime},y^{\prime})], \tag{8}\]
where \(\Pi_{\delta}(P,\cdot)\) is the set of couplings between \((x,y)\) and \((x^{\prime},y^{\prime})\) whose first margin is \(P\) and the second margin is within a Wasserstein \(\delta\)-ball centered at \(P\). This formulation was motivated by the observation that for \(p=\infty\), taking \(J_{\theta}(x,y,x^{\prime},y^{\prime})=L(f_{\theta}(x),y)+\beta L(f_{\theta}(x),f_{\theta}(x^{\prime}))\), it retrieves the TRADES loss of (Zhang et al., 2019) given by
\[\inf_{\theta\in\Theta}\mathbf{E}_{P}\Big{[}L(f_{\theta}(x),y)+\beta\sup_{x^{ \prime}:|x-x^{\prime}|\leqslant\delta}L(f_{\theta}(x),f_{\theta}(x^{\prime})) \Big{]}.\]
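For orientation, a sketch of how this objective is typically evaluated in practice for \(p=\infty\), with the inner supremum approximated by a PGD search; following the original TRADES paper we take the loss between logits to be the KL divergence of the softmax outputs (an assumption about the notation above), and the hyperparameter defaults are purely illustrative.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8/255, step=2/255, iters=10):
    """CE(f(x), y) + beta * max_{||x'-x||_inf <= eps} KL(f(x) || f(x'))."""
    x = x.detach()
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)
    x_adv = (x + 1e-3 * torch.randn_like(x)).clamp(0.0, 1.0)
    for _ in range(iters):                       # inner maximization by PGD
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                      reduction="batchmean")
    return F.cross_entropy(model(x), y) + beta * robust
```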
W-DRO Sensitivity.In practice, training using (7), let alone (8), is computationally infeasible. To back propagate \(\theta\) it is essential to understand the inner maximization problem denoted by
\[V(\delta)=\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[J_{\theta}(x,y)],\]
where we write \(J_{\theta}(x,y)=L(f_{\theta}(x),y)\). One can view the adversarial loss \(V(\delta)\) as a certain regularization of the vanilla loss. Though we are not able to compute the exact value of \(V(\delta)\) for neural networks with sufficient expressivity, DRO sensitivity analysis results allow us to derive a numerical approximation to \(V(\delta)\) and further apply gradient-based optimization methods. This is the main novelty of our approach -- previous works considering a W-DRO formulation mostly relied on duality results in the spirit of Blanchet and Murthy (2019) to rewrite (7).
**Assumption 4.1**.: We assume the map \((x,y)\mapsto J_{\theta}(x,y)\) is \(L\)-Lipschitz under \(d\), i.e.,
\[|J_{\theta}(x_{1},y_{1})-J_{\theta}(x_{2},y_{2})|\leqslant Ld((x_{1},y_{1}),(x _{2},y_{2})).\]
The following result follows readily from (Bartl et al., 2021, Theorem 2.2) and its proof.
**Theorem 4.1**.: _Under Assumption 4.1, the following first order approximations hold:_
1. \(V(\delta)=V(0)+\delta\Upsilon+o(\delta),\) _where_ \[\Upsilon=\Big(\mathbf{E}_{P}\big[\|\nabla_{x}J_{\theta}(x,y)\|_{\star}^{q}\big]\Big)^{1/q}.\]
2. \(V(\delta)=\mathbf{E}_{Q_{\delta}}[J_{\theta}(x,y)]+o(\delta),\) _where_ \[Q_{\delta}=\Big[(x,y)\mapsto\big(x+\delta\,h(\nabla_{x}J_{\theta}(x,y))\,\|\Upsilon^{-1}\nabla_{x}J_{\theta}(x,y)\|_{\star}^{q-1},\,y\big)\Big]_{\#}P,\] _and_ \(h\) _is uniquely determined by_ \(\langle h(x),x\rangle=\|x\|_{\star}\)_._
The above holds for any probability measure, in particular with \(P\) replaced consistently by an empirical measure \(\widehat{P}\) or \(\widetilde{P}\). In Figure 1, we illustrate the performance of our first order approximation of the adversarial loss on CIFAR-10 (Krizhevsky, 2009) under different threat models.
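Part (i) suggests a simple Monte-Carlo estimate of \(V(\delta)\) from clean data and input gradients. A minimal PyTorch sketch, assuming `loss_fn` returns per-sample losses (reduction `'none'`) and that the image norm is \(l_r\) with dual exponent \(s\); the function name is ours.

```python
import torch

def first_order_V(model, loss_fn, x, y, delta, q=2.0, s=2.0):
    """Monte-Carlo estimate of V(0) and of V(delta) ~ V(0) + delta * Upsilon.

    Upsilon = (E_P[ ||grad_x J_theta(x, y)||_s^q ])^{1/q}, Theorem 4.1 (i).
    loss_fn must return one loss value per sample (reduction='none').
    """
    x = x.clone().requires_grad_(True)
    J = loss_fn(model(x), y)                      # shape (N,)
    grads, = torch.autograd.grad(J.sum(), x)      # per-sample gradients
    dual = grads.flatten(1).norm(p=s, dim=1)      # dual-norm sizes ||grad||_s
    upsilon = dual.pow(q).mean().pow(1.0 / q)
    V0 = J.mean()
    return V0.item(), (V0 + delta * upsilon).item()
```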
WD-Adversarial Accuracy.We consider an attacker with perfect knowledge of the network \(f_{\theta}\) and the data distribution \(P\), aiming to minimize the prediction accuracy of \(f_{\theta}\) under an admissible attack. Complementing the W-DRO training formulation, Staib and Jegelka (2017), Sinha et al. (2018) proposed _Wasserstein distributional threat models_ under which an attack is admissible if the resulting attacked distribution \(Q\) stays in the \(p\)-Wasserstein ball \(B_{\delta}(P)\), where \(\delta\) is the attack budget, i.e., the tolerance for distributional image distortion. We define the adversarial accuracy as:
\[A_{\delta}:=\inf_{Q\in B_{\delta}(P)}Q(S)=\inf_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[\mathbb{1}_{S}]. \tag{9}\]
Note that \(A_{\delta}\) is decreasing in \(\delta\) with \(A_{0}=A\), the clean accuracy. For \(p=\infty\), the Wasserstein distance essentially degenerates to the uniform distance between images and hence the proposed threat model coincides with the popular _pointwise_ threat model. For \(1\leqslant p<\infty\), the _distributional_ threat model is strictly stronger than the _pointwise_ one, as observed in Staib and Jegelka (2017, Prop. 3.1). Intuitively, it is because the attacker has a greater flexibility and can perturb images close to the decision boundary only slightly while spending more of the attack budget on images farther away from the boundary. The threat is also closely related to out-of-distribution generalization, see Shen et al. (2021) for a survey.
WD-Adversarial Attack.We propose _Wasserstein distributionally adversarial attack_ methods. We believe this is a novel contribution and, so far, even the papers which used distributional threat models to motivate DRO-based training methods then used classical pointwise attacks to evaluate robustness of their trained DNNs. Our contribution is possible thanks to the explicit first-order expression for the distributional attack in Theorem 4.1(ii).
We recall the Difference of Logits Ratio (DLR) loss of Croce and Hein (2020). If we write \(z=(z_{1},\ldots,z_{m})=f_{\theta}(x)\) for the output of a neural network, and \(z_{(1)}\geqslant\cdots\geqslant z_{(m)}\) are the order
Figure 1: Performance of the first order approximation for the W-DRO value derived in Theorem 4.1. Left: WideResNet-28-10 (Gowal et al., 2020) under CE loss (3) and \((\mathcal{W}_{\infty},l_{\infty})\) threat model with \(\delta=1/255,\ldots,10/255\). Right: WideResNet-28-10 (Wang et al., 2023) under ReDLR loss (10) and \((\mathcal{W}_{2},l_{2})\) threat models with \(\delta=1/16,\ldots,10/16\).
statistics of \(z\), then the DLR loss is given by
\[\mathrm{DLR}(z,y)=\begin{cases}-\dfrac{z_{y}-z_{(2)}}{z_{(1)}-z_{(3)}},&\text{if }z_{y}=z_{(1)},\\ -\dfrac{z_{y}-z_{(1)}}{z_{(1)}-z_{(3)}},&\text{else}.\end{cases}\]
The combination of CE loss and DLR loss has been widely shown as an effective empirical attack for _pointwise_ threat models. However, under _distributional_ threat models, intuitively, an effective attack should perturb more aggressively images classified far from the decision boundary and leave the misclassified images unchanged. Consequently, neither CE loss nor DLR loss are appropriate -- this intuition is confirmed in our numerical experiments, see Table 1 for details. To rectify this, we propose ReDLR (Rectified DLR) loss:
\[\mathrm{ReDLR}(z,y)=-(\mathrm{DLR})^{-}(z,y)=\begin{cases}-\dfrac{z_{y}-z_{(2 )}}{z_{(1)}-z_{(3)}},&\text{if }z_{y}=z_{(1)},\\ 0,&\text{else}.\end{cases} \tag{10}\]
Its key property is to leave unaffected those images that are already misclassified. Our experiments show it performs superior to CE or DLR.
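A vectorized PyTorch sketch of both losses (function names ours); the final clamp implements the rectification in (10), since DLR is non-positive precisely on correctly classified inputs.

```python
import torch

def dlr(z, y):
    """DLR loss of Croce & Hein (2020) for a batch of logits z and int64 labels y."""
    zs, _ = z.sort(dim=1, descending=True)     # order statistics z_(1) >= z_(2) >= ...
    zy = z.gather(1, y[:, None]).squeeze(1)
    correct = zs[:, 0] == zy                   # is the top logit the true class?
    num = torch.where(correct, zy - zs[:, 1], zy - zs[:, 0])
    return -num / (zs[:, 0] - zs[:, 2] + 1e-12)

def redlr(z, y):
    """ReDLR = -(DLR)^- of (10): equals DLR while the image is still correctly
    classified (there DLR <= 0), and 0 once it is misclassified (there DLR > 0)."""
    return torch.clamp(dlr(z, y), max=0.0)
```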
An attack is performed using the test data set. For a given loss function, our proposed attack is:
\[x^{t+1}=\mathrm{proj}_{\delta}\big(x^{t}+\alpha\,h(\nabla_{x}J_{\theta}(x^{t},y))\,\|\widetilde{\Upsilon}^{-1}\nabla_{x}J_{\theta}(x^{t},y)\|_{\star}^{q-1}\big), \tag{11}\]
where \(\alpha\) is the step size and \(\mathrm{proj}_{\delta}\) is a projection which ensures the empirical measure \(\widetilde{P}^{t+1}:=\frac{1}{|\mathcal{D}_{tt}|}\sum_{(x,y)\in\mathcal{D}_{tt}}\delta_{(x^{t+1},y)}\) stays inside the Wasserstein ball \(B_{\delta}(\widetilde{P})\). In the case \(p=r=\infty\), one can verify \(h(x)=\mathrm{sgn}(x)\) and write (11) as
\[x^{t+1}=\mathrm{proj}_{\delta}\big{(}x^{t}+\alpha\ \mathrm{sgn}(\nabla_{x}J_{ \theta}(x^{t},y))\big{)}.\]
This gives exactly Fast Gradient Sign Method (single step) and Projected Gradient Descent (multi-step) proposed in Goodfellow et al. (2015), Madry et al. (2018) and we adopt the same labels for our more general algorithms.5 A pseudocode for the above attack is summarized in Appendix C.
Footnote 5: To stress the Wasserstein attack and the particular loss function we may write, e.g., _W-PGD-ReDLR_.
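To complement the pseudocode in Appendix C, here is an unofficial PyTorch sketch of one step of (11) in the \((\mathcal{W}_{2},l_{2})\) case, for image batches of shape \((N,C,H,W)\); the projection \(\mathrm{proj}_{\delta}\) onto the Wasserstein ball is omitted and replaced by a pixel-range clamp, and `loss_fn` is assumed to return per-sample losses.

```python
import torch

def w_pgd_step(model, loss_fn, x, y, alpha, q=2.0, s=2.0):
    """One unprojected step of (11) for image batches of shape (N, C, H, W).

    For l_2 geometry h(g) = g / ||g||_2; for l_inf, h(g) = sign(g) and the
    update reduces to the usual FGSM/PGD step.
    """
    x = x.clone().requires_grad_(True)
    J = loss_fn(model(x), y)                       # per-sample losses
    g, = torch.autograd.grad(J.sum(), x)
    gn = g.flatten(1).norm(p=s, dim=1)             # ||grad||_s per image
    upsilon = gn.pow(q).mean().pow(1.0 / q)        # empirical Upsilon
    h = g / gn.clamp_min(1e-12).view(-1, 1, 1, 1)  # h(grad), l_2 case
    scale = (gn / upsilon).pow(q - 1).view(-1, 1, 1, 1)
    return (x.detach() + alpha * h * scale).clamp(0.0, 1.0)
```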
Finally, note that Theorem 4.1 offers computationally tractable approximations to the _W-DRO adversarial training_ objectives (7) and (8). In Appendix D we propose two possible training methods but do not evaluate their performance and otherwise leave this topic to future research.
## 5 Performance Bounds
Understanding how a DNN classifier will perform outside of the training data set is of key importance. We leverage the DRO sensitivity results now to obtain a lower bound on \(A_{\delta}\). We then use results on convergence of empirical measures in Fournier and Guillin (2015) to translate our lower bound into guarantees on out-of-sample performance.
Bounds on Adversarial Accuracy.We propose the following metric of robustness:
\[\mathcal{R}_{\delta}:=\frac{A_{\delta}}{A}\in[0,1].\]
Previous works mostly focus on the maximum distortion a neural network can withstand while retaining certain adversarial performance, see Hein and Andriushchenko (2017), Weng et al. (2018) for local robustness and Bastani et al. (2016) for global robustness. However, there is no immediate connection between such a maximum distortion and the adversarial accuracy, especially in the face of a distributionally adversarial attack. In contrast, since \(A=A_{0}\) is known, computing \(\mathcal{R}_{\delta}\) is equivalent to computing \(A_{\delta}\). We choose to focus on the relative loss of accuracy as it provides a convenient normalization: \(0\leqslant\mathcal{R}_{\delta}\leqslant 1\). \(\mathcal{R}_{\delta}=1\) corresponds to a very robust architecture which performs as well under attacks as it does on clean test data, while \(\mathcal{R}_{\delta}=0\) corresponds to an architecture which loses all of its predictive power under an adversarial attack. Together, the pair \((A,A_{\delta})\) thus summarizes the performance of a given classifier. However, computing \(A_{\delta}\) is difficult and time-consuming. Below, we develop a simple and efficient method to calculate theoretically guaranteed bounds on \(\mathcal{R}\) and thus also on \(A_{\delta}\).
**Assumption 5.1**.: We assume that for any \(Q\in B_{\delta}(P)\)
1. \(0<Q(S)<1\).
2. \(\mathcal{W}_{p}(Q(\cdot|S),P(\cdot|S))+\mathcal{W}_{p}(Q(\cdot|S^{c}),P(\cdot|S^ {c}))=o(\delta),\) where the conditional distribution is given by \(Q(E|S)=Q(E\cap S)/Q(S)\).
The first condition stipulates non-degeneracy: the classifier does not perform perfectly but retains some accuracy under attacks. The second condition says the classes are well-separated: for \(\delta\) small enough an admissible attack can rarely succeed.
We write the adversarial loss conditioned on the correctly classified and misclassified images as
\[C(\delta)=\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[J_{\theta}(x,y)|S]\quad\text {and}\quad W(\delta)=\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[J_{\theta}(x,y)|S^ {c}].\]
We note that an upper bound on \(\mathcal{R}_{\delta}\) is given by any adversarial attack. In particular,
\[\mathcal{R}_{\delta}\leqslant\mathcal{R}_{\delta}^{u}:=Q_{\delta}(S)/A. \tag{12}\]
**Theorem 5.1**.: _Under Assumptions 4.1 and 5.1, we have an asymptotic lower bound as \(\delta\to 0\)_
\[\mathcal{R}_{\delta}\geqslant\frac{W(0)-V(\delta)}{W(0)-V(0)}+o(\delta)= \widetilde{\mathcal{R}}_{\delta}^{l}+o(\delta)=\overline{\mathcal{R}}_{\delta }^{l}+o(\delta), \tag{13}\]
_where the first order approximations are given by_
\[\widetilde{\mathcal{R}}_{\delta}^{l}=\frac{W(0)-\mathbf{E}_{Q_{\delta}}[J_{ \theta}(x,y)]}{W(0)-V(0)}\quad\text{and}\quad\overline{\mathcal{R}}_{\delta}^ {l}=\frac{W(0)-V(0)-\delta\Upsilon}{W(0)-V(0)}. \tag{14}\]
The equality between the lower bound and the two first-order approximations \(\widetilde{\mathcal{R}}_{\delta}^{l}\) and \(\overline{\mathcal{R}}_{\delta}^{l}\) follows from Theorem 4.1. Consequently, \(\mathcal{R}_{\delta}^{l}:=\min\{\widetilde{\mathcal{R}}_{\delta}^{l}, \overline{\mathcal{R}}_{\delta}^{l}\}\) allows us to estimate the model robustness without performing any sophisticated adversarial attack. Our experiments, detailed below, show the bound is reliable for small \(\delta\) and is orders of magnitude faster to compute than \(\mathcal{R}_{\delta}\) even in the classical case of pointwise attacks. The proof is reported in Appendix A. Its key ingredient is the following tower-like property.
**Proposition 5.2**.: _Under Assumptions 4.1 and 5.1, we have_
\[V(\delta)=\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[C(\delta)\mathbbm{1}_{S}+W (\delta)\mathbbm{1}_{S^{c}}]+o(\delta).\]
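Once Monte-Carlo estimates of the quantities entering (12) and (14) are available, the bounds themselves are elementary arithmetic; a sketch with hypothetical argument names:

```python
def robustness_bounds(V0, W0, upsilon, EQJ, A, A_Qdelta, delta):
    """Bounds on R_delta from (12) and (14).

    V0        -- clean loss V(0)
    W0        -- clean loss on misclassified images W(0)
    upsilon   -- sensitivity Upsilon from Theorem 4.1 (i)
    EQJ       -- loss under the first-order worst case, E_{Q_delta}[J]
    A, A_Qdelta -- clean accuracy and accuracy under Q_delta
    """
    R_bar = (W0 - V0 - delta * upsilon) / (W0 - V0)
    R_tilde = (W0 - EQJ) / (W0 - V0)
    return min(R_bar, R_tilde), A_Qdelta / A   # (lower bound R^l, upper bound R^u)
```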
Bounds on Out-of-Sample Performance.Our results on distributionally adversarial robustness translate into bounds for performance of the trained DNN on unseen data. We rely on the results of Fournier and Guillin (2015) and refer to Lee and Raginsky (2018) for analogous applications to finite sample guarantees and to Gao (2022) for further results and discussion.
We fix \(1<p<n/2\) and let \(N=|\mathcal{D}_{tr}|\), \(M=|\mathcal{D}_{tt}|\). If sampling of data from \(P\) is described on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) then \(\widehat{P}\) is a random measure on this space and, by ergodic theorem, \(\mathbb{P}\)-a.s., it converges weakly to \(P\) as \(N\to\infty\). In fact, \(\mathcal{W}_{p}(\widehat{P},P)\) converges to zero \(\mathbb{P}\)-a.s. Crucially the rates of convergence were obtained in Dereich et al. (2013), Fournier and Guillin (2015) and yield
\[\mathbb{E}[\mathcal{W}_{p}(\widehat{P},P)]\leqslant KN^{-\frac{1}{n}}\quad \text{and}\quad\mathbb{P}(\mathcal{W}_{p}(\widehat{P},P)\geqslant\varepsilon) \leqslant K\exp(-KN\varepsilon^{n}), \tag{15}\]
where \(K\) is a constant depending on \(p\) and \(n\) which can be computed explicitly, see for example Guo and Obloj (2019, Appendix). This, with triangle inequality and Theorem 5.1, gives
**Corollary 5.3**.: _Under Assumptions 4.1 and 5.1 on measure \(\widehat{P}\), with probability at least \(1-2K\exp(-K\varepsilon^{n}\min\{M,N\})\) it holds that_
\[\widetilde{A}=\widetilde{P}(S)\geqslant\widehat{A}\,\widehat{\mathcal{R}}_{2\varepsilon}^{l}+o(\varepsilon).\]
Next results provide a finer statistical guarantee on the out-of-sample performance for robust (W-DRO) training. Its proof is reported in Appendix B.
**Theorem 5.4**.: _Under Assumption 4.1, with probability at least \(1-K\exp(-KN\varepsilon^{n})\) we have_
\[V(\delta)\leqslant\widehat{V}(\delta)+\varepsilon\sup_{Q\in B_{\delta}^{\star}(\widehat{P})}\Big(\mathbf{E}_{Q}\big[\|\nabla_{x}J_{\theta}(x,y)\|_{s}^{q}\big]\Big)^{1/q}+o(\varepsilon)\leqslant\widehat{V}(\delta)+L\varepsilon\]
_where \(B_{\delta}^{\star}(\widehat{P})=\arg\max_{Q\in B_{\delta}(\widehat{P})} \mathbf{E}_{Q}[J_{\theta}(x,y)]\) and constant \(K\) only depends on \(p\) and \(n\)._
Our lower bound estimate in Theorem 5.1 can be restated as
\[\Delta\widehat{A}_{\delta}:=\widehat{A}-\widehat{A}_{\delta}\leqslant\frac{ \widehat{V}(\delta)-\widehat{V}(0)}{\widehat{W}(0)-\widehat{C}(0)}+o(\delta).\]
We now use Theorem 5.4 to bound \(\Delta A_{\delta}\), the shortfall of the adversarial accuracy under \(P\), using quantities evaluated under \(\widehat{P}\).
**Corollary 5.5**.: _Under Assumptions 4.1 and 5.1, with probability at least \(1-K\exp(-KN\delta^{n})\) it holds that_
\[\Delta A_{\delta}(P)\leqslant\frac{\widehat{V}(\delta)-\widehat{V}(0)}{ \widehat{W}(0)-\widehat{C}(0)}+\frac{2L\delta}{\widehat{W}(0)-\widehat{C}(0)} +o(\delta).\]
We remark that the above results are easily extended to the out-of-sample performance on the test set, via the triangle inequality \(\mathcal{W}_{p}(\widehat{P},\widetilde{P})\leqslant\mathcal{W}_{p}(\widehat{P},P)+\mathcal{W}_{p}(P,\widetilde{P})\). By using complexity measures such as the entropy integral (Lee and Raginsky, 2018) or Rademacher complexity (Gao, 2022; Gao et al., 2022), a further analysis can be undertaken for
\[\inf_{\theta\in\Theta}\sup_{Q\in B_{\delta}(P)}\mathbf{E}_{Q}[J_{\theta}(x,y)] \quad\text{and}\quad\inf_{\theta\in\Theta}\sup_{Q\in B_{\delta}(\widehat{P})} \mathbf{E}_{Q}[J_{\theta}(x,y)]. \tag{16}\]
In particular, a dimension-free estimate of out-of-sample performance is obtained in (Gao, 2022) under a Lipschitz framework with light-tail reference measures.
## 6 Numerical Experiments
Experimental Setting.We conduct experiments on a high performance computing server equipped with 49 GPU nodes. The algorithms are implemented in Python. All experiments are conducted on CIFAR-10 dataset (Krizhevsky, 2009), comprising 60,000 color images across 10 mutually exclusive classes, with 6,000 images per class. Each image contains \(32\times 32\) pixels in 3 color channels. We normalize the input feature as a vector \(x\in[0,1]^{3\times 32\times 32}\). The dataset is further divided into training and test sets, containing 50,000 and 10,000 images respectively. We evaluate the robustness of neural networks on the test set only.
We consider four threat models \((\mathcal{W}_{p},l_{r})\) with \(p,r\in\{2,\infty\}\), with different ranges of the attack budget \(\delta\) depending on the relative strength of the attack. E.g., roughly speaking, if an \(l_{\infty}\)-attack modifies one third of the pixels of an image with strength 4/255, then it corresponds to an \(l_{2}\)-attack with strength 1/2. When clear from the context, we drop the \(\delta\) subscript.
We take top neural networks from RobustBench (Croce et al., 2021), a lively maintained repository that records benchmark robust neural networks on CIFAR-10 against _pointwise_ attacks. For _pointwise_ threat models \((\mathcal{W}_{\infty},l_{r})\), RobustBench reports \(A_{\delta}\) obtained using AutoAttack (Croce and Hein, 2020) for \(l_{\infty},\delta=8/255\) and \(l_{2},\delta=1/2\), see Appendix F. However, due to high computational cost of AutoAttack, we apply PGD-50 based on CE and DLR losses as a substitute to obtain the reference adversarial accuracy for attacks with relatively small budgets \(\delta=2/255,4/255\) for \(l_{\infty}\) and \(\delta=1/8,1/4\) for \(l_{2}\). For _distributional_ threat models \((\mathcal{W}_{2},l_{r})\), there is no existing benchmark attacking method. Therefore, W-PGD attack (11) based on ReDLR loss is implemented to obtain the reference adversarial accuracy \(A_{\delta}\). All PGD attacks are run with 50 iteration steps and take between 1 and 12 hours to run on a single GPU environment. Bounds \(\mathcal{R}^{l},\mathcal{R}^{u}\) compute ca. 50 times faster.
Distributionally Adversarial Attack.We report in Table 1 the average accuracy of top neural networks on RobustBench against pointwise and distributional attacks under different loss functions. The predicted drop in accuracy between a pointwise, i.e., \(\infty\)-W-DRO attack and a distributional 2-W-DRO attack is only realized using the ReDLR loss.
In Figure 2, we compare the adversarial accuracy of robust networks on RobustBench against _pointwise_ threat models and _distributional_ threat models. We notice a significant drop of the adversarial accuracy even for those neural networks robust against _pointwise_ threat models.
Bounds on Adversarial Accuracy.We compare in Table 2 the computation time of our proposed bounds \(\mathcal{R}^{l}_{\delta}=\min\{\widetilde{\mathcal{R}}^{l}_{\delta},\overline{\mathcal{R}}^{l}_{\delta}\}\) in (14) and \(\mathcal{R}^{u}_{\delta}\) in (12) with that of \(\mathcal{R}_{\delta}\) obtained from AutoAttack. Computing our proposed bounds \(\mathcal{R}^{l},\mathcal{R}^{u}\) is orders of magnitude faster than performing an attack to estimate \(\mathcal{R}\). This also holds for _distributional_ threat attacks.
To illustrate the applications of Theorem 5.1, we plot the bounds \(\mathcal{R}^{l}\) and \(\mathcal{R}^{u}\) against \(\mathcal{R}\) for neural networks on RobustBench. The results are plotted in Figure 3 and showcase the applicability of our bounds across different architectures.6 Note that as \(\delta\) increases we are likely to go outside of the linear approximation regime, see Figure 1. Indeed, in Appendix E we plot the results for the pointwise attack with \(\delta=8/255\), where some of the neural networks have a lower bound \(\mathcal{R}^{l}\) greater than the reference \(\mathcal{R}\). Note that smaller \(\delta\) values are suitable for the stronger \(\mathcal{W}_{2}\)-distributional attack. For _pointwise_ threat models (top row) we compute the bounds using CE loss. For _distributional_ threat models (bottom row), reference adversarial accuracy is obtained from a W-PGD-ReDLR attack and, accordingly, we use the ReDLR loss to compute \(\mathcal{R}^{u}\) and \(\mathcal{R}^{l}\). In this case, the width of the gap between our upper and lower bounds varies significantly for different DNNs. To improve the bounds, instead of \(\mathcal{R}^{l}\), we could estimate \(V(\delta)\) and use the lower bound in (13). This offers a trade-off between computational time and accuracy which is explored further in Appendix E.
Footnote 6: We use all 60 available networks on RobustBench (model zoo) for \(l_{\infty}\) and all 20 available networks for \(l_{2}\).
## 7 Limitations and future work
Future work.We believe our research opens up many avenues for future work. These include: developing stronger attacks under distributional threat models, testing the performance of the two
| | PreActResNet-18 | ResNet-18 | ResNet-50 | WRN-28-10 | WRN-34-10 | WRN-70-16 |
| --- | --- | --- | --- | --- | --- | --- |
| \(\mathcal{R}\) | 197 | 175 | 271 | 401 | 456 | 2369 |
| \(\mathcal{R}^{l}\) & \(\mathcal{R}^{u}\) | 0.52 | 0.49 | 0.17 | 0.55 | 0.53 | 1.46 |

Table 2: Computation times of the \((\mathcal{W}_{\infty},l_{\infty})\), \(\delta=8/255\) attack for one mini-batch of size 100, in seconds. We compute \(\mathcal{R}\) by AutoAttack and average the computation time over models on RobustBench grouped by their architecture.
| | AutoAttack (\(\mathcal{W}_{\infty}\)) | W-PGD-CE (\(\mathcal{W}_{2}\)) | W-PGD-DLR (\(\mathcal{W}_{2}\)) | W-PGD-ReDLR (\(\mathcal{W}_{2}\)) |
| --- | --- | --- | --- | --- |
| \(l_{\infty}\) | 57.66% | 61.32% | 79.00% | **45.46%** |
| \(l_{2}\) | 75.78% | 74.62% | 78.69% | **61.69%** |

Table 1: Comparison of adversarial accuracy of neural networks on RobustBench under different empirical attacks. The attack budget is \(\delta=8/255\) for \(l_{\infty}\) threat models and \(\delta=1/2\) for \(l_{2}\) threat models.
Figure 2: Shortfall of WD-adversarial accuracy with different metrics \(l_{\infty}\) (left) and \(l_{2}\) (right).
training algorithms derived here and investigating further sensitivity-based ones, as well as analyzing the relation between the values and optimizers in (16), verifying empirical performance of our out-of-sample results, including Corollary 5.5, and extending these to out-of-distribution performance.
Broader Impact.Our work contributes to the understanding of robustness of DNN classifiers. We believe it can help users in designing and testing DNN architectures. It also offers a wider viewpoint on the question of robustness and naturally links the questions of adversarial attacks, out-of-sample performance, out-of-distribution performance and Knightian uncertainty. We provide computationally efficient tools to evaluate robustness of DNNs. However, our results are asymptotic and hence valid for small attacks and we acknowledge the risk that some users may try to apply the methods outside of their applicable regimes. Finally, in principle, our work could also enhance understanding of malicious agents aiming to identify and attack vulnerable DNN-based classifiers.
## Acknowledgements
The authors are grateful to Johannes Wiesel for his most helpful comments and suggestions in the earlier stages of this project. JO gratefully acknowledges the support from St John's College, Oxford. YJ's research is supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). XB and GH's work, part of their research internship with JO, was supported by the Mathematical Institute and, respectively, St John's College and St Anne's College, Oxford.
|
2301.08525 | Promises and pitfalls of deep neural networks in neuroimaging-based
psychiatric research | By promising more accurate diagnostics and individual treatment
recommendations, deep neural networks and in particular convolutional neural
networks have advanced to a powerful tool in medical imaging. Here, we first
give an introduction into methodological key concepts and resulting
methodological promises including representation and transfer learning, as well
as modelling domain-specific priors. After reviewing recent applications within
neuroimaging-based psychiatric research, such as the diagnosis of psychiatric
diseases, delineation of disease subtypes, normative modeling, and the
development of neuroimaging biomarkers, we discuss current challenges. This
includes for example the difficulty of training models on small, heterogeneous
and biased data sets, the lack of validity of clinical labels, algorithmic
bias, and the influence of confounding variables. | Fabian Eitel, Marc-André Schulz, Moritz Seiler, Henrik Walter, Kerstin Ritter | 2023-01-20T12:05:59Z | http://arxiv.org/abs/2301.08525v1 | # Promises and pitfalls of deep neural networks in neuroimaging-based psychiatric research
###### Abstract
By promising more accurate diagnostics and individual treatment recommendations, deep neural networks and in particular convolutional neural networks have advanced to a powerful tool in medical imaging. Here, we first give an introduction into methodological key concepts and resulting methodological promises including representation and transfer learning, as well as modelling domain-specific priors. After reviewing recent applications within neuroimaging-based psychiatric research, such as the diagnosis of psychiatric diseases, delineation of disease subtypes, normative modeling, and the development of neuroimaging biomarkers, we discuss current challenges. This includes for example the difficulty of training models on small, heterogeneous and biased data sets, the lack of validity of clinical labels, algorithmic bias, and the influence of confounding variables.
keywords: Deep learning, Convolutional Neural Networks, Psychiatry, Neuroimaging, MRI
## 1 Introduction
By setting new standards in image and speech recognition tasks, deep neural networks advanced to a key technology in research [1; 2]. In medical imaging, a number of applications have reached or even exceeded human
level performance, especially in cases where the sample size was rather large (\(N>15000\)) and the prediction task was well defined (i.e., the pathology is clearly identifiable in the imaging data). This includes, for example, the detection of diabetes from fundus images, skin cancer from photographs, and pneumonia from chest X-rays [3; 4]. Given these success stories, it has been asked to what extent deep neural networks are also capable of identifying brain diseases based on neuroimaging data, e.g., data obtained from magnetic resonance imaging (MRI; [5]). While most neurological diseases are associated with measurable brain damage, such as atrophy or lesions, that is visible in structural MRI, brain alterations in psychiatric disorders are considered to be more subtle, mostly functional and still under debate [6; 7]. Nevertheless, in the last two decades, neuroimaging has become one of the cornerstones in the search for biomarkers that explain neurobiological variance associated with psychiatric disease [7; 8]. Promising biomarkers include regional atrophy and reduced cortical thickness [9; 10], brain age [11] as well as alterations in task-induced functional MRI activity or resting-state connectivity [12]; for a review, see Lui et al. [7]. However, such biomarkers are not necessarily disease-specific and have a high overlap across psychiatric (and neurological) diseases [13].
In addition to traditional statistical analyses, where mean differences between groups (e.g., patients and controls) have been investigated, neuroimaging-based biomarkers have also been employed in machine learning analyses with the goal to draw conclusions about individual subjects [14; 15; 16; 17; 18]. By being settled in the framework of precision medicine, machine learning is considered to provide a huge promise for transforming healthcare in general [19] and psychiatry in particular [20]. While machine learning approaches can been applied to different kinds of data including deep phenotyping, genetics, and metabolomics, they are in particular a suitable candidate for analyzing neuroimaging data due to their ability to flexibly handle high-dimensional data where the number of variables (e.g., voxels or regional volumes) commonly exceeds the number of samples (see Table 1 for a brief description of machine learning related terms). The learning settings range from automatic disease diagnosis and prognosis to subtype discovery and prediction of treatment outcome [16; 21; 22; 23].
Most classical machine learning algorithms learn comparably simple input-output functions. They therefore have difficulties in processing raw MRI data and rely on hand-crafted features such as cortical thickness or connectivity matrices [1; 24; 25; 26]. In psychiatry, the design of those features is build on
neurobiological disease models and usually reflects only one part of disease pathology, but does not meet the multifactorial nature of many psychiatric diseases [13]. Deep learning approaches, on the other hand, can learn hierarchical representations directly from the raw data and thus are capable of solving more complex problems [1; 2]. Major breakthroughs have been achieved in natural image recognition by using a specialized deep learning architecture called convolutional neural networks (CNNs), which are artificial neural networks (ANNs) that exploit the structural dependencies of data coming from arrays such as image, audio, and video signals [1]. The key idea behind CNNs is inspired by the mechanism of receptive fields in the primate's visual cortex and relates to the application of local convolutional filters in combination with downsampling [27; 2]. In addition to their utilization in industry for diverse image and speech recognition tasks, they have advanced to a strong instrument for analyzing medical imaging data, including some
| Term | Description |
| --- | --- |
| **Machine learning** | Algorithms that learn to perform a specific task from data without being explicitly programmed. |
| **Deep learning** | Subfield of machine learning built on deep neural networks, which learn hierarchical representations directly from raw data. |
| Artificial neural network (ANN) / Deep neural network (DNN) | Network of interconnected artificial neurons organized in layers; "deep" refers to the use of many layers. |
| _Fully-connected neural network_ | ANN in which each neuron is connected to all neurons of the adjacent layers. |
| _Convolutional neural network (CNN)_ | ANN which exploits the structural dependencies of data coming from arrays (e.g., images) by applying local convolutional filters. |
| _Recurrent neural network (RNN)_ | ANN which introduces recurrent connections between the artificial neurons and is thus primarily used on sequential data. |

Table 1: Description and interdependencies of central terms in machine learning and deep learning.
aspects of neuroimaging data [5; 24; 28]. Besides promising preliminary results in the field of neurology and psychiatry [24; 29], a number of open challenges still exist. On the one hand, these challenges can be technology-related, such as the difficulty of deep neural networks in learning robust models on small, heterogeneous data sets or in giving meaningful explanations for complex model decisions [30; 31; 32; 33]. On the other hand, these challenges are driven by factors that are controversial within psychiatry itself, e.g., the heterogeneity of psychiatric diseases in their clinical presentation and the reliance on clinical symptoms rather than neurobiological substrates for establishing disease categories [34; 35; 36].
In this review, we will first give a short introduction into MRI and its role in psychiatry (section 2) and then introduce the basic concepts of machine and deep learning with a focus on CNNs as the most popular type of deep neural network currently applied to neuroimaging data (section 3; please see Durstewitz et al. [20] for an introduction into recurrent neural networks for analyzing temporal data in psychiatry). Both concepts are needed in order to understand the following sections; readers interested in more general introductions of machine learning in medicine and psychiatry are referred to [17; 37; 38]. In section 4, we will outline three methodological promises for the use of deep learning in psychiatry, namely representation learning, transfer learning, and the use of model architectures incorporating neuroimaging-specific priors (a.k.a. inductive bias). In sections 5 and 6, we will first present promising application scenarios within psychiatry, namely automated diagnostics, subtype identification, and development of new biomarkers; and then review existing applications for major psychiatric disorders including Alzheimer's disease, schizophrenia, substance abuse, neurodevelopmental disorders, and internalizing disorders such as depression, anxiety, and obsessive-compulsive disorder. Finally, current challenges and implications in the use of deep neural networks will be discussed in section 7.
## 2 Magnetic Resonance Imaging (MRI) and its role in psychiatry
After a short introduction into the role of MRI in psychiatric research, we will present the most common MRI modalities (and preprocessing steps) that have been used at the intersection of psychiatry and machine learning. MRI is a non-invasive medical imaging technique which has become an important tool in the diagnosis and monitoring of neurological diseases including
stroke, multiple sclerosis, and tumors [39; 40]. In psychiatry, MRI data is mainly used to rule out other causes (e.g., brain tumors) that might explain changes in behavior and feelings but also became a cornerstone for investigating neurobiological correlates in psychiatric diseases [7]. The focus in psychiatric research, using structural MRI (sMRI), is on high-resolution T1-weighted MRI data (e.g., Magnetization Prepared RApid Gradient Echo (MPRAGE) [41]), which provides the best contrast between grey and white matter and is therefore useful for measuring cortical thickness and regional atrophy in the brain. White matter abnormalities can be identified using T2-weighted imaging (e.g., Fluid-Attenuated Inversion Recovery (FLAIR)), magnetization transfer imaging, and diffusion tensor imaging but thus far play only a minor role in machine learning analyses. Although structural changes in patients with psychiatric diseases are considered to be small and difficult to detect by visual inspection, some relatively consistent findings have been reported, e.g., smaller hippocampi in patients with depression or larger ventricles in patients with schizophrenia [42; 43; 7].
Functional MRI (fMRI), in contrast to sMRI, measures brain activity by changes in blood oxygenation and blood flow as a result of neural activity (the so-called blood oxygen level dependent (BOLD) response [44]). Whereas in task-based fMRI, the activation during specific tasks (e.g., processing of emotional faces) is examined, in resting-state fMRI it is the intrinsic network activity during rest (i.e., no task) that is investigated. Specifically, resting-state functional connectivity matrices capturing spatial correlations in BOLD fluctuations are thought to give important insights about the functional organization of the healthy and diseased brains and have been employed in machine and deep learning [45; 46]. However, the within- and between-subject variability is considerable and makes finding robust functional biomarkers challenging [47].
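For illustration, a resting-state functional connectivity matrix of the kind used as input to machine learning models can be computed in a few lines (synthetic data standing in for region-averaged BOLD time series; all preprocessing omitted):

```python
import numpy as np

def connectivity_matrix(bold):
    """Pearson functional connectivity from a (timepoints, regions) BOLD array."""
    return np.corrcoef(bold, rowvar=False)

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 90))        # 200 volumes, 90 atlas regions
fc = connectivity_matrix(bold)               # (90, 90) correlation matrix
features = fc[np.triu_indices(90, k=1)]      # upper triangle as a feature vector
```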
Most studies applying CNNs on MRI so far have focused on sMRI (in particular T1-weighted MPRAGE) rather than fMRI. There are two main reasons for this focus. First, sMRI shares more properties with the modalities where CNNs have been successful (e.g., natural images); fMRI, in contrast, does not contain clearly visible hierarchical objects such as brain regions with high-frequency edges. Second, given that fMRI is 4-dimensional, it makes computation very expensive on the raw data (see section 7.2.3), or leads to the aggregation across time or space, potentially losing relevant information. In the future, advanced MRI techniques such as quantitative MRI may play an important role [28].
To study the same brain regions across subjects, MRI data are usually spatially normalized to a common space (e.g., the Montreal Neurological Institute [MNI] template) [48]. Deep learning, on the other hand, does not necessarily require spatial normalization but it may help in reducing variance in the data especially in light of small sample sizes. Therefore, most deep learning studies at least linearly register the data to MNI space [31]. Additionally, deep learning algorithms have also been used to perform nonlinear registration by themselves (for an overview of deep learning applications in MRI aside from clinical applications, see [28]).
To analyze diseases trans-diagnostically and on a larger scale, a number of large multi-site imaging cohorts have been established (e.g., HCP2, UK biobank3, ENIGMA4, and IMAGEN5). The existence of large databases is essential for the success of machine and deep learning techniques in psychiatric research (see section 7).
Footnote 2: [http://www.humanconnectomeproject.org/](http://www.humanconnectomeproject.org/)
Footnote 3: [https://www.ukbiobank.ac.uk/](https://www.ukbiobank.ac.uk/)
Footnote 4: [http://enigma.ini.usc.edu/](http://enigma.ini.usc.edu/)
Footnote 5: [https://imagen-europe.com/resources/imagen-dataset/](https://imagen-europe.com/resources/imagen-dataset/)
## 3 Key ideas of machine and deep learning
To make the methodological promises and challenges of deep learning technology within psychiatric research more accessible, we first briefly introduce the reader to some fundamental concepts of the more general field of machine learning, including different learning settings and validation schemes. We then concentrate on the sub-field of deep learning, which is the focus of the later explanations. Therefore, we briefly describe the underlying principles of deep learning, namely artificial neural networks (ANNs), and convolutional neural networks (CNNs) in particular since this type of ANN is predominantly used to analyze image data, as well as methods to explain individual predictions of these types of models. While experienced readers might skip this section, novice readers are encouraged to consider [49; 50] for a comprehensive coverage of the fundamentals of machine learning and deep learning or [16; 17; 37; 38] for its applications to medicine.
### Machine learning
#### 3.1.1 Basic concepts
Machine learning is an interdisciplinary field at the intersection of computer science, statistics, mathematics, and others, which in general defines algorithms that learn from data. In contrast to traditional rule-based systems in artificial intelligence, where human knowledge is encoded in rigid rules, machine learning algorithms can learn to perform a specific task (e.g., discriminating between images of dogs and cats) without being explicitly programmed [49; 51] (see Figure 1). This flexibility allows learning algorithms to be applied to complex problems which are not feasible for hand-tuned approaches [52; 53; 54].
Most machine learning algorithms can be categorized into two main paradigms: supervised learning and unsupervised learning (see Figure 2). In supervised learning, a data set contains examples of input-output pairs, and a function that maps the input features to the output labels is learned. In psychiatry, the input features could be a set of biomarkers (e.g., volumes of particular brain structures), while the label is given by group membership (e.g., patients or healthy controls), disease severity (e.g., the extent of positive symptoms in schizophrenia), prognosis, or treatment outcome (e.g.,
Figure 1: Artificial intelligence became popular in the mid-1900s using large tables of hand-designed decision strategies, which, for example, beat human players in chess. Starting in the 1980s, the field of machine learning gained traction and showed that the previously hand-designed strategies could be learned from data. The until then unpopular methods around ANNs had a breakthrough in the early 2010s, when large data sets and computing power allowed them to become deeper and more complex and thus to outperform other methods in many disciplines, including image and speech recognition.
response to cognitive-behavioral psychotherapy). After learning this mapping, the algorithm can then be used to make statements about new, unseen subjects. If the output labels are discrete classes, the prediction task is called classification, while for continuous labels, it is known as regression. In contrast, in unsupervised learning, the data set only contains input features without any information about the labels. Often, the goal of this paradigm is to discover a compact and more informative representation of the data. An example here is the identification of disease clusters or subtypes solely based on neuroimaging data.
To enable a machine learning algorithm to learn such tasks from data, four components are usually used to define the algorithm: (1) a data set (which is split into training and test set), (2) a statistical model to learn an approximate representation of the data, (3) a loss function to measure the goodness of the model, and (4) an optimization procedure to alter the model parameters such that the loss is minimized. Concerning the statistical models, machine learning comprises a large number of model classes, of which support vector machines (SVMs), Gaussian processes, random forests, logistic regression, and ANNs are common choices
Figure 2: Examples of supervised and unsupervised machine learning. In supervised machine learning such as classification (left), each brain MRI is a data point (represented as circles and crosses) and has a pre-determined class such as patient or control. The task is to find a model (black line) that discriminates between those classes. In unsupervised machine learning such as clustering (right), each brain MRI is again a data point but does not have a class label. Here, the task is to find clusters in the data which can be well separated. These clusters could, for example, represent subgroups of a disease.
in the neuroimaging domain [16; 49; 55].
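To make the interplay of these four components concrete, the following minimal sketch trains a logistic regression classifier on synthetic stand-in data using scikit-learn; all data and parameter choices are illustrative, and note that scikit-learn encapsulates the loss function and the optimization procedure inside the `fit` call.

```python
# Minimal supervised-learning workflow (sketch): synthetic stand-in data,
# a statistical model, and holdout evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for neuroimaging features (e.g., regional volumes)
# and binary labels (e.g., patient vs. healthy control).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# (1) Data set, split into training and test portions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# (2) Statistical model; (3) the loss (here, cross-entropy) and
# (4) the optimization procedure are handled internally by fit().
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("Test accuracy:", model.score(X_test, y_test))
```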
#### 3.1.2 Validation and performance metrics
The central challenge in machine learning is the ability of the learning algorithm to perform well on new, unobserved data; this is called generalization [56]. A quantitative measure of predictive performance, such as the model error, is therefore required to test this ability. Moreover, according to the 'No Free Lunch Theorem' [57] of machine learning, there is no single model which performs universally best on all possible data sets. Consequently, two different procedures are performed to identify the best-performing model on a data set which furthermore generalizes well to unseen data: model selection and model assessment [56].
Model selection is the process of selecting the best of many competing models. Since most machine learning algorithms have hyperparameters which must be set manually, finding the optimal hyperparameter configuration is part of this model selection step. This step is crucial for adjusting the expressive capacity (often called expressivity) of the model to match the complexity of the task. A mismatch between model capacity and task complexity is a common problem in machine learning. In particular, overfitting, in which the model learns noisy and overly complex patterns in the training data without improving generalization, results from too much model capacity for the task and therefore requires regularization to penalize this capacity. In model assessment, the generalization ability of a model is evaluated based on its generalization error.
This generalization error of the learning algorithm must be approximated, which is usually done by the holdout validation method. Here, the data set is randomly partitioned into three disjoint subsets: a training set, a validation set, and a test set. These subsets are assumed to be independent and identically distributed (i.i.d.) samples from an unknown data-generating distribution. The i.i.d. assumption is crucial to obtain an unbiased estimate of the generalization error, although violations are common in neuroimaging-based machine learning [58], for example, resulting from data leakage when clinical data containing repeated measurements are split on the observation level rather than the subject level [59]. Based on the training set, different models are fitted and evaluated on the validation set, using the model errors as estimates of the generalization error. The best model is then selected and tested on the holdout test set, using the test error as an estimate
of the generalization error of the model.6
Footnote 6: Please note that the model selection error on the validation set will underestimate the true error of the models and should therefore not be used as an estimate of the models’ generalization ability.
In practice, especially in clinical applications, data sets are usually small, such that a single train-validation-test split might result in an inaccurate estimate of the generalization error [56]. The most common technique to address this issue is cross-validation (CV). In CV, the data set is partitioned into several subsets, so-called folds. One fold is chosen as the test set while the others are used as the training set, and this training and testing procedure is repeated until each fold has been used as a test set once. The estimate of the generalization error is then the average over the test errors of all repetitions. A special case of CV is leave-one-out CV, where the number of folds is equal to the number of samples in the data set. This form of CV has been used in clinical research (e.g., [60; 61]) due to particularly small data set sizes. However, Kohavi [62] has empirically shown that although leave-one-out estimates are almost unbiased, their variance can be large. Besides, it has been argued that repeated random splits lead to more stable results [63]. More recently, in multi-site studies, so-called leave-one-site-out CV is used (e.g., [64; 65]). Here, the data from different sites are pooled such that the data of each site form a separate fold in the CV framework. If both model selection and model evaluation are performed, CV needs to be extended to obtain an unbiased estimate of the generalization error [66]. This extension is called nested CV and is simply a nesting of two CV loops, performing model selection in the inner loop while evaluating the final model in the outer loop, as sketched below.
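The following sketch illustrates nested CV with scikit-learn on synthetic data; the model and hyperparameter grid are arbitrary placeholders. For clinical data with repeated measurements, the splitters would additionally have to respect subject identity (e.g., via `GroupKFold`) to avoid the leakage discussed above.

```python
# Nested cross-validation (sketch): hyperparameter selection in the inner
# loop, generalization estimation in the outer loop.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)  # model selection
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)  # model assessment

# Inner loop: grid search over the SVM regularization parameter C.
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop: each outer training fold runs its own inner grid search;
# the held-out outer fold provides an untouched test estimate.
scores = cross_val_score(search, X, y, cv=outer_cv)
print("Generalization estimate: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```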
The predictive performance of the final model is then reported using a performance metric. Common metrics include (balanced) accuracy, sensitivity and specificity, area under the receiver operating characteristic curve (ROC AUC), and the F1 score for classification, and the mean squared error (MSE) for regression. While some metrics, such as accuracy, might be strongly influenced by the class distributions, others, such as balanced accuracy, ROC AUC, and the F1 score, try to correct for that.
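As a brief illustration, these metrics can be computed from a model's test-set outputs as follows; the labels and scores are made up for this sketch, and note that ROC AUC is computed from continuous scores rather than hard class predictions.

```python
# Common classification metrics (sketch) on hypothetical test-set outputs.
from sklearn.metrics import balanced_accuracy_score, f1_score, roc_auc_score

y_true  = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]            # 0 = control, 1 = patient
y_pred  = [0, 0, 0, 0, 1, 0, 1, 0, 1, 1]            # hard class predictions
y_score = [.1, .2, .1, .3, .6, .4, .9, .4, .8, .7]  # predicted probabilities

print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_score))   # uses scores, not labels
```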
### Deep learning
#### 3.2.1 Artificial neural networks
Deep learning is a particular subtype of machine learning which is based on ANNs. These ANNs describe a large class of models that consist of a collection of connected units or artificial neurons. Although these models have recently gained traction after achieving state-of-the-art results in fields such as computer vision [67] or natural language processing [68], their origins date back to the 1940s, when they were used to study biologically inspired representations of information processing [69]. To describe the working mechanisms of ANNs, we first introduce the most basic type of these networks, the multilayer perceptron (MLP) or fully-connected neural network. This type is a feed-forward neural network which is constructed by connecting groups of artificial neurons or units organized within layers (see Figure 3). Unlike recurrent neural networks [70; 71], feed-forward neural networks only allow the input information to flow in one direction, from input to output, without loops between the neuron connections. To this end, an MLP consists of an input layer, one or more hidden layers and an output layer. While the input layer passes the input features to the network using one unit for each input feature (e.g., one unit for every voxel in an MR image), the output layer completes the task by outputting a prediction, with the number of units depending on the task (e.g., one unit for patients and one unit for healthy controls in a binary classification task). The hidden layers define the capacity of ANNs, with the number of units in a hidden layer defining the width and the number of hidden layers defining the depth of the network [50]. An ANN architecture using more than one hidden layer is called a deep neural network; the name deep learning derives from this terminology and serves as a collective term for ANNs with multiple hidden layers.
In an MLP, a fully-connected layer connects each artificial neuron in a layer with every artificial neuron in the previous layer, enabling the flow of information between sets of artificial neurons. Each unit thus receives the outputs of all artificial neurons in the previous layer and computes a weighted sum of these inputs using the associated weight parameters. The weights can therefore be seen as the relative strength or importance of the connections between consecutive units. Furthermore, inside a unit, a nonlinear activation function (e.g., the rectifier function [72] or the logistic sigmoid function) is applied to the previously computed weighted sum, nonlinearly transforming the unit's input into its output. These outputs then serve as
inputs to the units in the subsequent layer, creating a nested chain structure of connected units between layers. Therefore, an MLP can be understood as a complex mathematical function composed of many simpler functions.
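This computation can be made explicit in a few lines. The following sketch implements the forward pass of a tiny MLP with randomly initialized weights; the layer sizes and activations are arbitrary choices for illustration.

```python
# Forward pass of a small MLP (sketch): each layer computes a weighted sum
# of its inputs plus a bias, followed by a nonlinear activation.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)                         # input features (4 units)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # input -> hidden (8 units)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # hidden -> output (1 unit)

h = relu(W1 @ x + b1)      # hidden-layer activations
y = sigmoid(W2 @ h + b2)   # output, e.g., probability of class "patient"
print(y)
```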
Regarding expressive power, the Universal Approximation Theorem (e.g., [73]) states that a fully-connected neural network with one hidden layer and a sufficient number of hidden units can theoretically approximate any continuous function.7 However, the required width of the hidden layer may be infeasibly large, such that the fully-connected neural network may fail to learn and generalize correctly. Therefore, the capacity of an ANN model is typically increased by adding depth to the network [75]. This leads to a series of nonlinear transformations which enable an ANN, or rather a deep neural network, to hierarchically learn multiple levels of representation, from raw data to abstract concepts. Unlike traditional machine learning methods, which often rely on the discriminatory power of handcrafted features and previous feature engineering, ANNs can learn feature representations from raw data using a general learning procedure [76]. Examples of feature engineering include the delineation of lesions to calculate the total lesion load for use in a classifier for multiple sclerosis, or computing a gray matter segmentation, dividing it into separate brain regions, and using the mean gray matter density per region in a classifier for schizophrenia. Artificial neural networks, however, are able to process the raw images and learn similar or
Figure 3: Architecture of a fully-connected network with interconnected groups of neurons in the input layer, a single hidden layer and the output layer.
potentially more powerful features themselves.
To learn the optimal model parameters (i.e., weights), ANNs rely on gradient-based optimization using the backpropagation algorithm [77]. During the training phase, the backpropagation algorithm uses the chain rule to compute the partial derivatives of the loss function with respect to the weight parameters, which are then adjusted by a gradient-based optimization procedure. This procedure is repeated many times to find the optimal parameter configuration of the artificial neural network model. An essential aspect of the tremendous success of ANNs is the efficient implementation of this learning procedure in highly optimized programming libraries such as PyTorch [78], Tensorflow [79] or Theano [80], and the support of graphical processing units (GPUs), which enable efficient parallelized computation [1].
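A minimal PyTorch training loop may illustrate this procedure; the data, architecture, and hyperparameters are placeholders.

```python
# Gradient-based training with backpropagation (sketch) in PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(64, 20)          # hypothetical input features
y = torch.randint(0, 2, (64,))   # hypothetical binary labels

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):
    optimizer.zero_grad()          # reset accumulated gradients
    loss = loss_fn(model(X), y)    # forward pass and loss computation
    loss.backward()                # backpropagation: chain-rule gradients
    optimizer.step()               # gradient-based weight update
```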
Despite the success and the representational power of ANNs, the learning process introduces certain challenges for this class of machine learning algorithms. First, although automated approaches [81] exist, the specification of an ANN model requires manually determined design choices such as the number of hidden layers or hidden units in a layer, regularization techniques, as well as further hyperparameters. These choices are mostly guided by background knowledge and experimentation, making training an ANN quite an art. In addition, artificial neural network models are usually overparametrized, comprising thousands or even millions of parameters, and are prone to overfit the training data. To address the risk of overfitting, regularization techniques such as dropout [82] or weight decay [83] are used during training to reduce the network's capacity. Second, the loss surface, especially of deep neural network models, is highly non-convex and possesses many local minima [84]. Thus, gradient-based optimization algorithms are not guaranteed to converge to the global minimum of the loss surface, such that different starting configurations of the network may be required. However, Choromanska et al. [85] have shown that although deep neural networks mostly converge to local minima, the resulting models often generalize well to new data, so the problem of local minima is in practice often negligible.
#### 3.2.2 Convolutional neural networks
Although a fully-connected neural network can theoretically approximate any continuous function, the wiring of the neurons can lead to certain drawbacks, especially on grid-like topologies such as MR images [86]. First, the number of parameters to be learned for this often high-dimensional input
data can be very large due to the many required connections between the neurons in the input and the subsequent hidden layer. Second, for grid-like data such as images, the pixels or voxels of the input images are treated independently, ignoring spatial information in the form of correlations between neighboring pixels and the translation invariance of objects in the image. Convolutional neural networks (CNNs) [87] use the mathematical convolution operation within a convolutional layer to address these drawbacks by introducing a biologically inspired local receptive field [27], weight sharing, and downsampling.
A convolution is a linear mathematical operation (see Figure 4) which computes the weighted sum of an input and the weight parameters of a function, also known as a kernel or filter, at every location in the input space to produce a feature map. Thus, instead of connecting entire groups of neurons, each unit in the feature map is only locally connected to a specific region, also known as a receptive field, in the input space. This enables the extraction of local features and thereby preserves the spatial structure of the input space. Moreover, the units in the feature map are restricted to share the same weight parameters for every region in the input space. As a result, the number of parameters to be learned, and thereby the complexity, is reduced to a set of parameters equivalent to the size of the kernel. Since the convolution is a linear operation, activation functions are again used to nonlinearly transform the resulting feature maps. Typically, in a CNN, a convolution layer is followed by a pooling layer that replaces the values of a local region in the feature map with summary statistics to reduce the dimensionality of the input. This results in translation invariance, which is an important property when the presence of a particular feature matters but not its
Figure 4: Example of a 2-dimensional convolution operation.
location. In a typical CNN architecture, successive pairs of convolutional and pooling layers are used to learn different levels of representations of the data, e.g., from an MR image (from edges and blobs to more abstract concepts such as lesions or atrophy). The final layers of a CNN architecture are typically fully-connected layers which compute the output predictions for a specific task. Most studies applying neural networks for classification in MRI follow the architectures of successful computer vision models such as DenseNets [88], ResNets [89] or VGGNet [90]. Yet, recently, researchers have started designing architectures specialized for the properties of raw data or functional connectivity matrices, introducing inductive biases (see section 4.3). For applications of CNNs to neuroimaging data, a couple of specialized software libraries exist (e.g., PHOTONAI [91], DeepNeuro [92]).
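The core operation can be written down compactly. The following sketch implements the unflipped 2-dimensional convolution as used in CNN libraries (technically a cross-correlation) on a toy image with a hypothetical 2x2 kernel:

```python
# 2-dimensional convolution (sketch): slide a small kernel over the input
# and compute a weighted sum at every location, yielding a feature map.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weight sharing: the same kernel weights at every location.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25.0).reshape(5, 5)      # toy 5x5 "image"
kernel = np.array([[1., 0.], [0., -1.]])   # 2x2 filter (shared weights)
print(conv2d(image, kernel))               # 4x4 feature map
```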
#### 3.2.3 Explaining model predictions
Despite their recent success, ANNs, and in particular deep neural networks, are often criticized for being black boxes [93; 94]. Although deep neural networks are mostly deterministic models, the large parameter space of these models, often comprising thousands or millions of parameters, and highly nonlinear interactions make it difficult for humans to understand the relationship between the inputs and the outputs. This is particularly problematic in risk-sensitive disciplines such as medicine, where a transparent and verifiable decision-making process is crucial [95]. To address this lack of transparency, different methodological approaches have been proposed to understand the behaviour of machine learning models in general (e.g., LIME [96], SHAP [97]) and neural networks in particular [98; 99; 100; 101].
Various attribution methods have been introduced to explain neural network predictions in image classification [102]; they attempt to determine the attribution, or relevance, of each input feature to the predicted output of a neural network. The resulting attributions are then displayed in a heatmap to visualize which input features have positively or negatively influenced the prediction. The proposed attribution methods can be categorized into two methodological approaches: perturbation-based methods and backpropagation-based methods. Perturbation-based methods assign attribution values to input features directly by modifying the input space through removing, masking, or altering features and measuring the difference between the predictions based on the modified and the original inputs. An example is the occlusion method proposed by Zeiler and Fergus [103], which systematically masks a region in the input image to observe potential changes in the target class probability. Rieke et al. [104] extended this method to an atlas-based occlusion for MR images. Backpropagation-based methods, on the other hand, assign the attribution for all input features using the backpropagation algorithm, either to compute partial derivatives of the outputs with respect to the input features [105; 106] or to backpropagate a relevance score through the network [107]. Although the resulting visualizations do not directly relate to output variations, these approaches are computationally less expensive than perturbation-based methods, which makes them more suitable for high-dimensional MR images [30]. A popular method of this kind is layer-wise relevance propagation (LRP; [108]), in which a relevance score is introduced at the output layer and then propagated back to the input layer using a modified backpropagation algorithm.
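To illustrate the perturbation-based idea, the following sketch computes a simple occlusion heatmap for a trained PyTorch classifier; `model` and `image` are hypothetical placeholders, and the image dimensions are assumed to be divisible by the patch size.

```python
# Occlusion-based attribution (sketch): mask one image patch at a time and
# record the drop in the predicted probability of the target class.
import torch

def occlusion_map(model, image, target_class, patch=8):
    # image: tensor of shape (1, channels, H, W); H, W divisible by patch.
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class]
        _, _, H, W = image.shape
        heatmap = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                occluded = image.clone()
                occluded[:, :, i:i + patch, j:j + patch] = 0.0  # mask patch
                prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                heatmap[i // patch, j // patch] = base - prob
    # Large values mark patches whose removal hurts the prediction most.
    return heatmap
```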
## 4 Methodological promises
Deep neural networks present advantages over classical statistics and machine learning methods that are particularly promising for neuroimaging data in psychiatry. In this section, we discuss how deep neural networks can provide meaningful intermediate representations, which facilitate discovery of disease subtypes. Furthermore, we show how these intermediate representations enable transfer learning to deal with comparably small sample sizes in neuroimaging. Lastly, we discuss how specialized architectures (i.e., inductive biases) can more efficiently exploit the structure in neuroimaging data. These three advantages lay the methodological foundation for the application scenarios in section 5.
### Representation learning
A central advantage of deep neural networks is their ability to perform automatic feature engineering on raw or minimally processed data (e.g., spatially normalized MRI data) as shown in Figure 5. These automatically learned features can be interpreted as a new view or representation of the input data and can be extracted by reading out the activations of neurons in a given layer [76]. Thus, any given data point (e.g., an MR volume of a particular subject) can be described by the activation profile that it generates in a particular layer of a deep neural network.
In analogy to how a CNN trained to classify natural images will yield successive layers of intermediate representations detecting edges, textures,
and, finally, whole objects [109], one expects a deep neural network trained on neuroimaging data to provide a hierarchy of intermediate representations reflecting brain structure or function (e.g., low-level neural structures such as boundaries between grey and white matter, focal abnormalities such as lesions or atrophy, and disease profiles [110]).
To create representations that are useful for a specific research question, there exist two general approaches: supervised and unsupervised
Figure 5: Deep neural networks have the ability to automatically perform feature extraction (bottom). Automatic feature extraction allows the automatic design and selection (“learning”) of representative features. In contrast (top), traditional machine learning models mostly require manual extraction and selection of features. These features are typically designed (“engineered”) by human experts and therefore introduce a bias about the experts’ knowledge into the features. Furthermore, these methods require additional tools (e.g., Statistical Parametric Mapping (SPM) or Freesurfer) or manual effort (e.g., manual segmentation).
representation learning. In supervised representation learning, models are trained to predict a target variable, and successive layers of the model discard more and more prediction-irrelevant information while retaining and recombining prediction-relevant information into increasingly abstract representations of the input data (for a formal discussion, see, e.g., Tishby and Zaslavsky [111]). In contrast, unsupervised representation learning operates exclusively on the input data. One example is the autoencoder, which transforms the data to pass through a low-dimensional bottleneck and then tries to reconstruct the original data from the compressed intermediate representation [112; 50]. This approach implicitly assumes that a compressed representation of the data is likely to mirror abstract higher-order concepts or latent variables that generated the data. The recently popularized "self-supervised" learning extends this approach to other auxiliary tasks for learning meaningful representations, such as predicting the next word in a sentence [113], color channels from a grayscale image [114], or anatomical segmentations from an MR image [115]. Intermediate representations in deep neural networks can be constrained to have potentially useful mathematical properties such as normality and independence [116], or to selectively disentangle factors of variation [117]. They are often designed to reduce the dimensionality of the input data (e.g., high-dimensional neuroimaging data) for further analyses, such as clustering (section 5.2) or testing scientific hypotheses (section 5.3).
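As an illustration of unsupervised representation learning, the following sketch trains a small autoencoder in PyTorch; input dimensionality, bottleneck size, and data are arbitrary placeholders.

```python
# Autoencoder (sketch): compress inputs through a low-dimensional
# bottleneck and reconstruct them; no labels are required.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(128, 784)  # hypothetical flattened input data

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    z = encoder(X)                 # 8-dimensional latent representation
    loss = loss_fn(decoder(z), X)  # reconstruction error
    loss.backward()
    optimizer.step()
```

After training, the encoder output `z` serves as the compressed intermediate representation used for downstream analyses.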
### Transfer learning
In many domains, for instance in computer vision, there exist large general-purpose data sets (e.g., ImageNet), while data sets for specific applications are often prohibitively small. Recently, a similar situation has arisen in neuroimaging, where small clinical studies are now augmented by large-scale data collection initiatives such as the UK Biobank or ENIGMA. The existence of intermediate representations in deep neural networks allows insights gained on large general-purpose data sets to be carried over to small-sample clinical settings [118; 119]. Models can be split at the level of an intermediate representation, making it possible to _pre-train_ the lower layers of the network on a large data set and then _fine-tune_ the higher layers on the target data set. This approach is called transfer learning [120].
The motivation for transfer learning in CNNs is simple. Instead of initializing weights randomly, they are initialized based on a related data set. Ideally, the data used for pre-training should share properties with the target data, such as objects with high-frequency edges. If the
low-level representations of a CNN are responsible for detecting those edges, then the learned properties of edge detection can be transferred to the new task. The transferred weights can then either be frozen, i.e., used as a fixed feature extractor in combination with a classifier (e.g., fully-connected layers), or fine-tuned to adapt to the target domain. This setup permits the application of expressive deep learning models even in situations where the sample size of the target data set would be insufficient for training such a model from scratch.
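In PyTorch, this freeze-and-replace pattern takes only a few lines; the sketch below uses the classic torchvision API and a 2-dimensional, ImageNet-pre-trained backbone, whereas 3-dimensional MRI volumes would require an adapted architecture.

```python
# Transfer learning (sketch): pre-trained feature extractor, new task head.
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(pretrained=True)  # weights pre-trained on ImageNet

for p in net.parameters():              # freeze the transferred layers ...
    p.requires_grad = False

# ... and replace the classification head (here: 2 classes).
net.fc = nn.Linear(net.fc.in_features, 2)

# Only the new head is optimized; unfreezing (parts of) the lower layers
# instead would correspond to fine-tuning.
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-4)
```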
For analyzing MRI data, transfer learning from natural images [121; 122] and from related MRI data (e.g., other MRI sequences; [123; 31]) has been suggested. Even though the number of studies on transfer learning in neuroimaging is growing, covering, for instance, brain lesion segmentation [124; 125; 123; 126] and Alzheimer's disease classification [127; 128; 30], the most effective approach for transfer learning in clinical neuroimaging has not yet been established.
### Inductive bias
Every machine learning algorithm must make certain assumptions about how it processes the training data. For instance, (\(l_{1}\)-regularized) linear regression assumes a (sparse) linear relationship between input and target variable. The sum of assumptions of a given learning algorithm is called its inductive bias. Inductive biases that reflect the data-generating process, or that restrict the space of potential solutions to those that are a priori meaningful, can substantially reduce the number of training samples needed to achieve the prediction goal [129]. The flexible architecture of deep neural networks allows for the creation of complex inductive biases [130]. The success and subsequent (re-)popularisation of deep learning in the last two decades has been attributed mainly to advances in CNNs, i.e., the development of an inductive bias that allows efficient exploitation of translation invariance in natural images. Other examples include recurrent neural networks, which assume time invariance [131], or graph neural networks, which assume invariance to ordering (except for pair relations) [132].
In contrast to natural images, MRI brain images do not possess strong translation invariance properties. Due to the standardized anatomy of the brain (e.g., the hippocampus is at approximately the same location in all humans) and further registration to a common template (e.g., MNI), the usefulness of translation invariance as an inductive bias for neuroimaging data has been questioned [32]. Recently, specialized architectures of deep neural networks have been proposed that are
specifically adapted to neuroimaging data. Kawahara et al. [133] proposed an architecture designed for connectivity matrices. Eitel et al. [134] restrict the translation invariance of CNNs to local patches, accounting for spatially normalized MRI data. Graph CNNs [135; 136; 137] have been adapted to neuroimaging data by e.g., Parisot et al. [138; 139]. Further advances in inductive biases for neuroimaging data may be an important prerequisite for fully utilizing deep neural networks in psychiatric research.
## 5 Application scenarios
Motivated by the ability of CNNs to extract hierarchical representations of complex image data, and by several groundbreaking results for medical imaging applications in which human-level performance has been surpassed [3; 4], we have selected three broad use cases that we consider especially promising for neuroimaging-based psychiatric research. The following section outlines these use cases (see also Figure 6): automatic disease diagnosis and prognosis, the identification of disease subtypes and the modeling of normative distributions for detecting aberrations, and the use of deep learning to develop new neuroscientific hypotheses.
### Automatic disease diagnosis and prognosis
The most apparent tasks for deep learning in neuroimaging-based psychiatric research are automatic disease diagnosis, prognosis, and prediction of treatment response. While signs of psychiatric disorders in neuroimaging data are very difficult for human experts to detect, several studies have suggested that there exist subtle but measurable alterations in brain MRI data [7]. Hence, computer-assisted diagnosis using machine learning models that can pick up even very small and diffuse signals becomes highly promising. Most applications so far have focused on binary classification tasks, such as patient vs. control [24; 59], converter vs. non-converter [140], and response vs. non-response to treatment [141], but the extension to multi-class classification tasks, such as differential diagnosis [142; 143; 144], is straightforward.
Since deep neural networks specifically profit from data containing objects that are composed of a hierarchy of abstractions, they are a particularly suitable method for sMRI data: the brain is composed of four main lobes, each of which comprises several brain regions, and each region is composed of edges and structures. The complexity of the neurobiology of psychiatric disorders makes nonlinear methods such as
deep learning more likely to successfully detect subtle disease-related alterations. Since the aim is to predict an outcome for each subject individually, rather than group-level associations, machine learning in general may facilitate personalized medicine [145; 18]. Crucially, transfer learning of deep neural networks may help circumvent the problem of small sample sizes in psychiatric studies. Furthermore, automatic disease diagnosis is less prone to human bias [146; 147]. This is especially true for deep learning, as it does not require manual feature extraction, which can introduce specific biases such as the importance of some regions over others.
Beyond end-to-end learning, in which the model is fed with imaging data and outputs a class label, more indirect approaches exist. Here, the model output can include segmentation of white matter hyperintensities [148] or grey matter segmentation, which can form the basis for atrophy measurements [149]. These outputs can either be used to support a human expert's
Figure 6: Three application scenarios for deep learning in neuroimaging-based psychiatric research. The input is processed by a convolutional feature extractor (1.) and then further analyzed (2.) by one of the three described scenarios: (a) automatic disease diagnosis and prognosis, (b) subtype discovery and normative modeling, and (c) hypothesis generation and biomarker identification.
decision or to extract latent features for training another classifier [150].
When using longitudinal data, one can build deep learning models for prognosis, which have been shown to outperform other methods in modeling disease progression [151]. From a clinical perspective, it is highly relevant to be able to assess time-to-event questions such as treatment outcome, survival prediction, relapse probability or potential disease onset. Current data sets are slowly approaching sample sizes that may allow for answering those questions. With future large-scale data sets, new milestones for clinical neuroimaging might be reached [152].
### Subtype discovery and normative modeling
Traditional diagnostic categories rely on identifying groups of frequently co-occurring symptoms, which are assumed to represent coherent disease entities. While successful in many areas of medicine, this approach tends to fail in psychiatry, where symptoms are far removed from the underlying pathomechanisms. Mounting evidence suggests that established disease categories (such as ICD-10 or DSM-V [153]) insufficiently match actual biological dysfunction [154; 155; 156; 35; 157; 36]. What appears as a coherent disease entity often turns out to be a mixture of distinct subtypes with heterogeneous biomarkers, treatment responses, and disease progressions. Consequently, robust biomarkers that allow for unambiguous diagnosis and predicting treatment response, common in other areas of medicine, remain elusive [36].
Increasingly available large-scale neuroimaging data sets allow for a new approach to disease categorization. Instead of using clusters of often qualitative symptoms as reference points in the search for the biological causes of psychiatric disease, it becomes possible to identify clusters directly from the quantitative neuroimaging data. This data-driven approach to disease categorization has multiple benefits. First, it prioritizes biomarkers. The whole process centers on quantitative neuroimaging data for identifying clusters of dysfunction and thus directly yields historically elusive neuroimaging biomarkers for psychiatric diseases. Second, by considering data that is closer to the level of biological dysfunction than self-reported or clinician-observed symptoms, disease categories may better match the underlying pathomechanism. Third, and consequently, such biomarkers and disease categories should better predict disease progression and treatment response [36].
The data-driven search for biologically meaningful disease categories, or disease subtypes (biotypes), relies on cluster analysis [56; 158; 159; 160] - a
class of machine learning algorithms that automatically subdivide data samples into distinct groups, called clusters, such that data points in the same group are maximally similar while data points assigned to different groups are maximally dissimilar. The question now becomes how to define similarity. Similarity in the space of voxels of a brain MRI is unlikely to yield clusters that are relevant for psychiatry for two reasons. First, a brain MRI is extremely high-dimensional, so that the variables, i.e. voxels, vastly outnumber the available data samples. This leads to the so-called curse of dimensionality; the higher the dimensionality, the more ways there are to be dissimilar and distance (or similarity) functions become increasingly meaningless. Second, similarity in the voxel-space of a brain MRI will be dominated by strong dimensions of variation such as sex and age, which tend to drown out the comparatively small dimensions of clinical relevance [161]. For neuroimaging-based clustering to be useful for psychiatry, the raw voxels of the brain MRI need to be transformed into a more meaningful representation which alleviates the curse of dimensionality and emphasizes clinically relevant variation over irrelevant confounders. Deep neural networks are particularly well suited to provide these highly abstract representations of neuroimaging data which are crucial for data-driven disease subtyping (see section 4.1).
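Once such a representation is available, the clustering step itself is straightforward. The sketch below applies k-means to hypothetical low-dimensional latent features (e.g., extracted from an intermediate network layer); as discussed below, internal validation alone does not establish that the clusters are meaningful.

```python
# Clustering on learned representations (sketch): k-means on hypothetical
# latent features Z of shape (n_subjects, n_latent_dims).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 8))  # placeholder for extracted latent features

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

# Internal validation only; robustness and replication on independent
# data are additionally required.
print("Silhouette score:", silhouette_score(Z, labels))
```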
Both promises and challenges of neuroimaging-driven subtyping can be illustrated by a study on depression subtypes by Drysdale et al. [162]. These authors used canonical correlation analysis (CCA) [163] to generate an intermediate representation, linking resting-state fMRI functional connectivity to the Hamilton Depression Rating. The low-dimensional intermediate representation emphasized specifically those aspects of the fMRI functional connectivity that pertain to the patients' self-reported depression symptoms. Cluster analysis on the intermediate representation identified four subtypes of depression, each with distinct connectivity and clinical symptom profiles, and which were predictive of patients' responses to transcranial magnetic stimulation. In a replication study, however, Dinga et al. [164] were unable to reproduce these results on independent data and emphasize that "[clustering algorithms] always yield clusters, regardless of the structure of the data", showing that comprehensive statistical analysis is necessary to ensure robust findings. The CCA that was employed by Drysdale et al. [162] can be seen as a linear precursor to more expressive models based on deep neural networks. Deep neural network extensions of CCA [165] and deep neural network architectures that are more finely tuned to the structure of neuroimaging data [133, 134, 139] may in the future unlock more intricate nonlinear interactions
in the data, which are inaccessible to classical models like CCA.
A related approach to utilizing intermediate (latent) representations of neuroimaging data for psychiatry is normative modeling [166; 21]. Instead of training models on data from diseased subjects to delineate disease profiles, normative modeling relies on data from healthy subjects to model the underlying dimensions of normal variation in neuroimaging data. The high-dimensional neuroimaging data is reduced to a low-dimensional representation, either via traditional machine learning models (see [166] for an overview) or, more recently, by extracting the intermediate representations of deep neural networks (see section 4.1; [167; 168; 169]). In the space spanned by the new low-dimensional representation, the distribution of healthy subjects can be easily characterized, thus defining a normal range of variation. Subjects that fall outside of this normal range can be considered to have abnormal brain structure or function. There are different ways in which subjects can deviate from the normal range: subjects may be normal in most dimensions of the model's intermediate representation and deviate in one or two specific dimensions, while others may be outliers across a whole range of dimensions. Investigating these anomaly profiles could serve as an approach to disease classification in its own right [170].
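A minimal version of this idea can be sketched as follows: the normal range is estimated per latent dimension from healthy controls, and a new subject is flagged where it deviates. All latent representations here are hypothetical placeholders.

```python
# Normative modeling (sketch): define a normal range per latent dimension
# from healthy controls and flag deviating dimensions of a new subject.
import numpy as np

rng = np.random.default_rng(0)
Z_healthy = rng.normal(size=(1000, 8))  # latent features, healthy cohort
z_subject = rng.normal(size=8) + np.array([0, 0, 3.0, 0, 0, 0, 0, 0])

mu, sigma = Z_healthy.mean(axis=0), Z_healthy.std(axis=0)
z_scores = (z_subject - mu) / sigma     # deviation per latent dimension

deviant = np.abs(z_scores) > 1.96       # outside the ~95% normal range
print("Deviating latent dimensions:", np.where(deviant)[0])
```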
### Hypothesis generation and biomarker identification using deep learning
In contrast to training disease models on a priori defined features (e.g., hippocampal volume in Alzheimer's disease [171]), deep learning allows for extracting its own representations (see section 4.1), which can be used to generate new scientific hypotheses and indicate directions for further study. One approach is to extract the learned representation of a CNN into a comprehensible format, such as a voxel-wise relevance heatmap (see section 3.2.3), in order to detect patterns across subjects. For instance, multiple studies proposed different ways to compute relevance heatmaps in Alzheimer's disease for explaining individual network decisions [104; 30; 172; 173]. In accordance with neurobiological evidence, all studies found the hippocampus and other temporal regions to be of primary importance. On the one hand, relevance scores can be used for model validation and the detection of data set bias when comparing relevant regions with neurobiological evidence. On the other hand, new disease-discriminating associations between certain brain regions and psychiatric disorders could be identified. These associations could also include common patterns of relationships between different brain regions which are predictive only in combination. This similarly applies to
multi-modal studies which integrate multiple imaging modalities or combine them with clinical and socio-demographic features [174; 175; 176; 141; 177]. Here, more complex relationships between variables might be detected, such as the covariance of a brain region with the clinical history of a patient. In addition, biomarkers which most human experts would only identify in a specialized neuroimaging modality (e.g., diffusion tensor imaging) might have traces in a more common modality (e.g., T1-weighted MPRAGE). Those traces could be learned in a multi-modal fashion and then be used in cases where only the latter modality is available. It is essential that new hypotheses and biomarkers are tested on independent data in order to avoid so-called "double dipping", in which one data set is used in circular analysis [178].
## 6 Applications in psychiatric research
In the last 1-2 decades, a number of neuroimaging-based machine learning studies in psychiatric research have been performed (for reviews, see [16; 23; 18; 179; 180]). Regardless of the difficulty of the application scenario, most of the initial studies were quite promising, with above-chance accuracies for applications ranging from diagnosing patients with mental illness to identifying patients at risk and clinical subgroups to predicting treatment outcome [65; 181; 182; 183]. Although the data sets and methods used were quite diverse and difficult to benchmark, classical machine learning analyses consisting of some kind of feature extraction (e.g., grey matter density, volumes of brain structures, cortical thickness or functional connectivity) in combination with a linear or nonlinear machine learning algorithm (e.g., SVMs, logistic regression or random forest) were usually employed [23]. However, most of the studies relied on relatively small sample sizes (\(N<100\)) and internal testing, which often leads to overly optimistic performance metrics [184; 185; 23; 186], in addition to a publication bias towards positive findings [187]. Even though deep learning studies within neuroimaging-based psychiatric research are still rare (with the exception of Alzheimer's disease), we review below their application to several mental disorders that have been addressed in at least one of the application scenarios described in section 5.
### Alzheimer's disease
Most MRI-based machine and deep learning studies so far have been conducted on Alzheimer's disease (AD), a neuropsychiatric disorder which is the main cause of dementia in the elderly. AD is symptomatically characterized by a loss of memory and other cognitive abilities to such an extent that it affects daily life [188]. In contrast to other psychiatric disorders, the neurobiological correlates are rather clear (i.e., neurodegeneration and resulting atrophy starting in the hippocampus). In addition, the existence of the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, a large open database with easy access for (non-clinical) researchers, has considerably boosted the number of machine learning studies [189; 23]. In a recent review of deep learning studies in AD, accuracies of up to 98% have been reported for the discrimination between patients with AD and healthy controls, and of up to 83% for the conversion from mild cognitive impairment (MCI) to AD [140]. However, Wen et al. [59] pointed out that a considerable number of those studies suffer from data leakage (see section 3.1.2) and therefore overestimate the prediction performance. Of the 16 studies in which data leakage could be excluded, accuracies between 76% and 91% for the discrimination of patients with AD and healthy controls and between 65% and 83% for the discrimination between MCI and healthy controls have been reported. Whereas early studies mostly analyzed 2-dimensional slices of MRI data and drew on established architectures and pre-training techniques from computer vision (e.g., [127; 190]), most studies now analyze 3-dimensional data in either an ROI-based (e.g., hippocampus) or a whole-brain setting [30; 191; 59]. In Wen et al. [59], these different approaches and additional settings regarding pre-processing and pre-training have been benchmarked, with accuracies of up to 88%. Deep learning approaches based on 3-dimensional data were superior to 2-dimensional approaches, but not better than a standard machine learning analysis relying on SVMs. The models also generalized well between different data sets when the inclusion criteria and the underlying demographic distributions were not too different. Only a few studies so far have tried to understand and explain what the deep neural networks (i.e., CNNs) have learned during training (see section 3.2.3). Different attribution methods, indicating the importance of each voxel for the final classification decision, have been compared [104], and especially layer-wise relevance propagation (LRP) has been shown to explain CNN decisions well in AD classification [108; 30]. In Wood et al. [192], a recurrent visual attention model is suggested that processes the data iteratively and is thus easier to interpret than
CNN-based approaches.
### Schizophrenia
Schizophrenia is a mental disorder characterized by episodes of psychosis, for which neurobiological abnormalities in sMRI and fMRI data, most notably enlarged ventricles and an involvement of structures within the temporal and frontal lobes, have been reported [193; 194; 195; 196]. Based on diverse features ranging from grey matter density to volumes to functional correlates, a number of classical machine learning studies have been performed to either diagnose schizophrenia, predict first-episode psychosis, or identify subtypes [181; 197; 198; 199; 200; 143; 201] (for a recent review, see [202]). The reported classification accuracies varied considerably, between 60% and 95%, with a tendency towards lower accuracies for larger data sets. Independent testing suggested that reported accuracies should be taken with caution [23; 185]. Regarding deep learning, a first promising study using a deep belief network showed that more depth resulted in a strong increase in accuracy (from around 60% to 90%) for separating patients with schizophrenia from healthy controls [110]. Another study using a deep belief network reported a slightly higher accuracy than an SVM (74% vs. 68%) [203]. Using CNNs and MRI data from patients with schizophrenia and healthy controls from 5 public data sets (\(N=873\)), Oh et al. [204] report AUCs of up to 0.90 on the left-out data set, with a high relevance of the right temporal region. However, the importance of a similar training and test distribution was shown by a rather low AUC of 0.71 for an external data set with younger patients and a shorter illness duration. Notably, for clinicians (5 psychiatrists, 2 radiologists), the average accuracy was 62% on 100 randomly selected MRI volumes.
For resting-state and task-based fMRI data, diverse techniques have been employed to discriminate between patients with schizophrenia and healthy controls including different kinds of autoencoders [205; 206; 207], deep generative models [208], and CNNs [209; 210]. For functional connectivity matrices from a large multi-site sample (\(N=734\)), accuracies between 81% and 85% for leave-one-site-out classification have been reported [206]. In accordance with neurobiological findings, most important for the classification were connectivity values within the cortical-striatal-cerebellar circuit. Compared to whole brain images and graph-based metrics obtained from resting-state fMRI, functional connectivity measures have generally been shown to be more informative in a machine and deep learning framework [211]. In Plis et al. [212], a deep learning-based translation approach is suggested in
order to investigate the linkage between MRI-based structure (grey matter) and function (dynamic connectivity). For task-based fMRI data, similar accuracies (around 80% to 84%) have been reported [207; 210], relying on, for example, a sensorimotor task [207]. To exploit the temporal structure of fMRI data, Yan et al. [213] developed a multi-scale recurrent neural network and report accuracies between 80% and 83% in a multi-site setting (\(N=1100\)). In Pinaya et al. [167], a normative model based on a deep autoencoder was trained in a large cohort of healthy controls (\(N=1113\)) and then the reconstruction error as a measure of deviation was assessed in patients with schizophrenia.
### Internalizing disorders
Internalizing disorders describe a group of psychiatric disorders which are characterized by negative affectivity and include, for instance, depression, anxiety disorders, and post-traumatic stress disorder (PTSD). In contrast to AD and schizophrenia, internalizing disorders have been comparably less studied using classical machine learning and deep learning. On the one hand, as for most psychiatric disorders, the labels per se have been criticized for being mainly based on symptoms rather than neurobiological correlates and thus being too noisy and unspecific for the use in supervised machine learning studies [34]. On the other hand, neurobiological correlates obtained from neuroimaging data are less clear for internalizing disorders and might be mediated by underlying subtypes [214; 162].
For major depressive disorder (MDD), a recent review [215] summarized the results of 66 classical machine learning studies equally distributed across structural MRI, resting-state fMRI, and task-based fMRI. Comparing modalities, models using resting-state fMRI data tended to yield higher accuracies. Some studies also investigated machine learning algorithms in the context of predicting treatment outcome, with a focus on fMRI connectivity as a biomarker. The sample sizes of the studies were mostly below 200, and accuracies for discriminating MDD patients and healthy controls ranged from chance level (approx. 50%) to an unrealistic 100%, again showing the need for caution when interpreting results. Problems such as data leakage or insufficiently sized test sets (see section 7.1.2) have most likely afflicted at least some of these studies [162; 216; 215]. For bipolar disorder, a large recent multi-site study [217] based on 13 cohorts from ENIGMA exists (\(N=3020\)) that reports aggregated subject-level accuracies of about 65% using a combination of SVMs
and extracted MRI features (regional cortical thickness, surface area, and subcortical volumes).
Using CNNs and recurrent neural networks, Pominova et al. (2018) report AUCs of up to 0.66 for sMRI and 0.73 for fMRI for separating patients with MDD from healthy controls. In a large-scale machine learning challenge (PAC 20188, \(N=2240\)), a variety of simple and highly complex classifiers (CNNs etc.) were benchmarked by 49 teams for the classification between patients with MDD and healthy controls. However, none of the teams achieved an accuracy higher than 65% on the holdout set. Given that a number of studies have reported classification accuracies of over 90% for the classification of depression based on sMRI data (for an overview, see Wolfers et al. (2016) and Gao et al. (2015)), those results seem rather surprising for a large, well-controlled data set. A possible conclusion is that PAC showed more realistic accuracies than published studies suffering from the overconfidence explained above. Based on the same data set, it has been shown that accuracies are systematically overestimated in random subsets (Luo et al., 2016), replicating small-sample-size effects as reported, e.g., in (Kumar et al., 2016; Luo et al., 2016). In contrast to decoding clinical categories, Pervaiz et al. (2016) suggest predicting neuroticism as an underlying construct for the potential incidence of mood disorders. However, based on functional connectivity data from the UK Biobank (\(N\approx 14000\)) and diverse optimised pipelines, the average correlation between predicted and true neuroticism scores was below 0.2.
Footnote 8: [https://www.photon-ai.com/pac](https://www.photon-ai.com/pac)
Regarding anxiety disorders, including obsessive-compulsive disorder (OCD) and PTSD, a review from 2015 included 8 studies, which all had rather low sample sizes and reported comparably high accuracies (Wolfers et al., 2016). Additional studies investigated the prediction of treatment outcome in anxiety disorders (for a review, see Lueken et al. (2016)). For OCD, a systematic review found 12 studies with a wide range of accuracies based on different modalities, but sample sizes were rather small (Kumar et al., 2016). None of those studies used deep learning.
### Neurodevelopmental disorders
Neurodevelopmental disorders describe a group of neurological and psychiatric conditions that originate in childhood. We will focus here on two diseases, namely attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD), which both have been investigated in deep learning
frameworks [24; 220].
ADHD is characterized by an ongoing pattern of inattention, hyperactivity and/or impulsivity, which has been related to smaller brain volumes (e.g., in the dorsolateral prefrontal cortex) and altered functional connectivity [221; 222]. In a competition from 2012 based on the ADHD-200 data set9, the best predictive model relied only on personal characteristic data, including age, gender, and several IQ scales, and achieved an accuracy of 62.5%.10 The best image-based model resulted in an accuracy of 60.5%. In an extended ADHD-200 data set, it has been shown that models based only on personal characteristic data outperform models based on resting-state fMRI data (75.0% vs. 70.7%) [223]. However, a number of deep learning studies based on resting-state fMRI data have since been performed, and accuracies of up to 90% have been reported [224; 225; 226] (for an overview, see [24]). Recently, also based on the ADHD-200 data set, a spatio-temporal model and a multi-channel model combining resting-state fMRI data and demographics have been shown to result in AUCs between 0.74 and 0.8 [227; 228].
Footnote 9: [http://fcon_1000.projects.nitrc.org/indi/adhd200/](http://fcon_1000.projects.nitrc.org/indi/adhd200/)
Footnote 10: [http://fcon_1000.projects.nitrc.org/indi/adhd200/results.html](http://fcon_1000.projects.nitrc.org/indi/adhd200/results.html)
Subjects with ASD are mainly characterized by repetitive behavior and difficulties in social interaction. A number of structural and functional abnormalities have been described, including a slightly thinner temporal cortex, a thicker frontal cortex, reduced volumes of the amygdala and nucleus accumbens, as well as reduced functional connectivity [229; 230]. An overview of deep learning studies in ASD can be found in Khodatars et al. [220]. Based on sMRI and resting-state fMRI data from the Autism Brain Imaging Data Exchange (ABIDE) initiative, accuracies of about 70% have been reported [231; 220].
### Substance abuse
Substance use disorders (SUDs) describe a class of mental disorders related to the problematic consumption of alcohol, tobacco, and illicit drugs that affects daily and working life. For alcohol use disorder (AUD), the most prominent SUD, sMRI and fMRI studies have revealed a number of neurobiological correlates including enlarged ventricles, grey and white matter loss in frontal and reward-related brain areas, as well as altered functional connectivity in the amygdala and nucleus accumbens [232; 233; 234; 235; 236]. Classical machine learning models have been employed to identify AUD or predict alcohol consumption / binge drinking based on different kinds of data, including demographics, history of life events, personality traits, cognition, candidate genes, as well as brain structure and function [237; 238; 239; 240; 174]. For sMRI data, it has been shown that a computer-based classification approach performed better than a blinded radiologist in diagnosing alcohol dependence based on regional grey matter (74% vs. 66%) and in predicting future alcohol consumption [237]. Fede et al. [241] showed that, compared to demographics, sMRI, and task-based fMRI, resting-state connectivity resulted in the lowest root-mean-squared error (RMSE) for predicting alcohol severity measured by the Alcohol Use Disorders Identification Test (AUDIT; resting-state fMRI: 8.04, demographics: 9.76, sMRI: 8.11, task-based fMRI: 8.63). Alterations in the reward network and executive control network have been reported as informative for diagnosing AUD [240]. For predicting alcohol use during adolescence, thinner cortices and less brain activation in frontal and temporal areas have been found [242]. The largest classical machine learning study so far has been performed on the IMAGEN data set, as part of which Whelan et al. [174] report an AUC of 0.90-0.96 for the separation of 14-year-old binge drinkers from 14-year-old controls and an AUC of 0.75 for predicting binge drinking at age 16. Here, a combination of history, personality traits, and brain features was most predictive, supporting the hypothesis that multiple causal factors shape later alcohol use. In another study, sex-specific psychosocial and neurobiological features have been identified for the initiation of cannabis use [243].
To the best of our knowledge, only three recent studies (from a single group) exist that have used CNN models to identify AUD [244; 245; 246]. They applied different CNN models to 2-dimensional slices and reported accuracies of around 97% in discriminating patients with abstinent long-term chronic AUD (\(N=188\)) from healthy controls (\(N=191\)). Due to the excessive amounts of alcohol consumed, the CNN models presumably captured neurotoxic effects rather than the neurobiological underpinnings of AUD.
### Brain age
As an overarching biomarker relevant in many diseases including AD, schizophrenia, and substance abuse, brain age as opposed to chronological age has been suggested [237; 247; 248; 249]. Using CNNs, highly successful and robust models for brain age have been developed on raw MRI data [11]
and parcellations [250] with average deviations (mean absolute error) of 4-6 years. However, similar deviations were achieved using Gaussian processes applied to grey matter volumetric maps [11]. Based on a large and heterogeneous multi-site cohort (\(N=11729\)), Bashyam et al. [248] have shown that moderately fitting brain aging models (based on 2-dimensional CNNs) might lead to a better separation between patients (with AD, MCI, schizophrenia or depression) and healthy controls than tightly-fitting models.
## 7 Challenges
Besides the potential of deep learning in neuroimaging-based psychiatric research outlined in sections 4 and 5 and early promising studies presented in section 6, the field faces some substantial challenges hindering immediate application in clinical routine.
Noise in MRI data and disease labels affects prediction performance and increases the number of data samples required for training deep neural networks (section 7.1). In section 7.2, we discuss difficulties of model choice and model comparisons within the neuroimaging domain. For instance, it is still contentious under which circumstances deep neural networks are useful for neuroimaging data and whether neuroimaging data require nonlinear models (section 7.2.1). Even across different neural network architectures, comparing model performances is impeded by a lack of neuroimaging benchmark data sets (section 7.2.2). Furthermore, the computational expense of 3-dimensional CNNs hinders widespread application (section 7.2.3).
Even when a deep neural network achieves high prediction performance on an independent test set, pitfalls with respect to validity and explainability remain (section 7.3). Imbalances in the training data can lead to systematically misdiagnosing minorities (7.3.1). Correction for confounding variables becomes substantially more challenging than in classical statistics (7.3.2). In addition, complex machine learning models are hard to interpret and require sophisticated engineering to extract explanations for a model's predictions (7.3.3).
### Noise in MRI data
#### 7.1.1 Low signal-to-noise ratio
The most basic precondition for machine learning to succeed is the existence of exploitable mutual information between brain images and target variables. While this precondition is surely fulfilled for sex or age, brain
lesions or atrophy, the situation is less clear for more intricate variables representing aspects of human cognition and affectivity which are relevant in psychiatry [251; 23]. It is, for instance, controversial to what extent subtle traits such as intelligence are reflected in brain morphometry measured via currently available sMRI technology [252; 253]. Similarly, fMRI does not directly reflect neuronal activity and instead relies on the haemodynamic response as a proxy, giving it a temporal and spatial resolution that is far removed from the actual underlying phenomenon [254; 23]. The actual signal, for example the cognitive process in question, will realistically only occupy a small fraction of a diverse conglomerate of different signals [255; 256]. Unrelated anatomical differences, differences in physiology, unrelated physiological processes such as breathing or head movements, thermal and system noise from the scanner, even effects from unrelated cognitive processes may obscure a particular trait or cognitive process to the point of invisibility. In comparison to areas where deep learning is highly effective, such as natural language, natural images, and even some areas of biomedicine (e.g., histology [257], or dermatology [3]), neuroimaging data in the context of psychiatry contains high levels of noise and often little trace of the phenomenon under investigation. A low signal-to-noise ratio poses challenges to the application of deep learning, requiring larger amounts of training data (section 7.1.2), and may impede the extraction of nonlinear structure (section 7.2.1).
#### 7.1.2 Lack of ground truth labels
The success of supervised machine learning approaches in neuroimaging-based psychiatric research may be limited by insufficiently valid and reliable labels. Although psychiatric research is not imaginable without clinical labels such as depression or schizophrenia, those diagnostic categories have been severely criticized for not incorporating underlying neurobiological correlates and for their limited ability to account for heterogeneity as well as comorbidities within and across clinical categories [157; 36; 35; 154]. Moreover, both low inter-rater and low test-retest reliability have been repeatedly documented [47]. To address this issue, the National Institute of Mental Health (NIMH) suggested to study mental health along the so-called Research Domain Criteria (RDoC), and by this to shift focus away from classical clinical labels towards carefully designed domains of neurocognitive functioning across different levels of analysis assumed to be relevant to mental health [157; 35].
In machine learning terminology, the field of psychiatry would be described
as suffering from high "label noise". The higher the label noise, the more data samples are necessary to reliably characterize the statistical relationships between brain image and clinical label. This reduced sample efficiency, i.e. the reduced amount of information gained per sample, further exacerbates the problem of insufficient sample sizes in neuroimaging psychiatry, where generating additional training data is often prohibitively expensive. Especially for small sample sizes, high label noise increases the likelihood of unrepresentative, spurious high-accuracy results [184; 185]. These can arise not only from overfitting or data leakage, but also from sheer coincidence; the particular subsample of data used for model evaluation may randomly contain more correct or more easy-to-classify labels than what would be representative for the data set [186].
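To get a feeling for the magnitude of this effect, the following sketch simulates a hypothetical binary classifier evaluated against labels with symmetric flip noise; the accuracy, test-set size, and noise level are illustrative assumptions, not values from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_test = 100        # assumed test-set size
true_acc = 0.70     # assumed accuracy against clean labels
label_noise = 0.15  # assumed fraction of randomly flipped binary labels

# Against noisy labels, measured accuracy shrinks towards chance:
# a prediction "looks" correct if it is correct and the label is clean,
# or if it is wrong and the label was flipped.
eff_acc = true_acc * (1 - label_noise) + (1 - true_acc) * label_noise

# Spread of the accuracy estimate across many random test sets of this size.
observed = rng.binomial(n_test, eff_acc, size=10_000) / n_test
print(f"expected measured accuracy: {eff_acc:.3f}")
print(f"5th-95th percentile across splits: "
      f"{np.quantile(observed, 0.05):.2f}-{np.quantile(observed, 0.95):.2f}")
```

With these numbers, the measured accuracy is pulled from 0.70 down to 0.64, and single evaluations on 100 test samples scatter over a range of well over ten percentage points, illustrating how spurious high-accuracy results can arise by chance.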
### Model choice and model comparison
#### 7.2.1 Linear vs. nonlinear models
Even though deep learning has revolutionized the fields of computer vision and natural language processing, it is yet unclear if these successes will fully translate to neuroimaging. Results on deep learning based prediction of demographic or behavioral phenotypes have been mixed. While various early successes have been reported [110; 24], deep learning models have often failed to outperform linear baselines [32; 33]. The mixed literature results are mirrored in recent challenges and competitions for machine learning in neuroimaging. In a number of competitions, for instance in recent ABCD [252], PAC [186], and TADPOLE [258] challenges, classical machine learning models outperformed more complex deep learning approaches. In their review of deep learning in neuroimaging, Vieira et al. [24] conclude that "despite the success of [deep learning] in several scientific areas, the superiority of this analytical approach in neuroimaging is yet to be demonstrated".
One possible explanation is the comparatively low sample size of neuroimaging studies. While computer vision and natural language processing (NLP) models are regularly trained on millions of data samples, a typical neuroimaging study will have only hundreds or at most thousands of participants. Given the arguably lower signal-to-noise ratio of neuroimaging data, one may expect a need for larger rather than smaller sample sizes to extract complex nonlinear interactions from neuroimaging data. This view is supported by research indicating that the performance of linear models on neuroimaging data is not yet saturated at \(N\approx 10000\)[32]. If present sample sizes are insufficient to even fully characterize linear effects, then it seems
unlikely that more complex nonlinear interactions can be extracted without further efforts in large scale data collection.
An alternative explanation for the difficulty of applying deep learning to psychiatry pertains to the high levels of noise in neuroimaging data (see section 7.1.1). Schulz et al. [32] argue that high levels of additive noise can in certain instances linearize the true decision boundary. In this case, no amount of additional training samples would allow complex nonlinear models such as deep neural networks to outperform linear baselines. Some nonlinear relationships in neuroimaging data could be inaccessible in principle.
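A toy illustration of this argument on synthetic 2-dimensional data (not neuroimaging data; all values are arbitrary choices for the demonstration): as additive feature noise grows, the advantage of a nonlinear model over a linear baseline on a nonlinearly separable problem collapses.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two concentric classes: the true decision boundary is highly nonlinear.
X, y = make_circles(n_samples=2000, factor=0.5, noise=0.05, random_state=0)

for sigma in (0.0, 0.3, 1.0):  # increasing additive feature noise
    Xn = X + rng.normal(scale=sigma, size=X.shape)
    Xtr, Xte, ytr, yte = train_test_split(Xn, y, random_state=0)
    linear = LogisticRegression().fit(Xtr, ytr)
    nonlin = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(Xtr, ytr)
    print(f"sigma={sigma:.1f}  linear acc={linear.score(Xte, yte):.2f}  "
          f"nonlinear acc={nonlin.score(Xte, yte):.2f}")
```

Without noise, the nonlinear model is far ahead of the near-chance linear one; at high noise levels both approach chance and the nonlinear advantage disappears, no matter how many samples are added.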
Whether the inability to exploit nonlinear structure in some neuroimaging data stems from insufficient sample sizes, or from noise rendering that structure inaccessible in principle, remains an open question. Nevertheless, recent research suggests that machine learning for psychiatry will not necessarily be solved by more and more complex and expressive models alone. Researchers may instead need to focus on stronger inductive biases, which impose structural priors on the data to more efficiently learn statistical relationships from limited data [130]. Inductive biases that are specifically adapted to neuroimaging data (see section 4.3) may unlock prediction successes in the future.
#### 7.2.2 Lack of benchmark data sets
The literature contains a plethora of different deep neural network architectures, and new designs are continually being invented. Thus, reliable model comparisons are necessary to make an informed choice about which particular deep neural network architecture (or even which linear model) to use on a given brain imaging data set. Standardized benchmark data sets are crucial for a reliable and fair comparison of models and methods, and are an integral part of machine learning research (see, e.g., Imagenet [259] or Cityscapes [260] for computer vision). However, to the best of our knowledge, no large-scale neuroimaging benchmark data sets for the detection of psychiatric disorders are openly available, and Arbabshirani et al. [185] even state that comparison of accuracies between studies is "essentially meaningless". The necessity of benchmark data sets for model comparisons, particularly in neuroimaging-based psychiatric research, rests broadly on four reasons.
First, benchmark data sets can control for heterogeneity of the data sources. Cityscapes, for example, contains street scenery from 50 different cities, procured in different seasons and weather conditions. In multi-site neuroimaging data sets, heterogeneity is both a promise and a pitfall. Since the imaging site has a significant effect on prediction [261], it is beneficial for
generalization to include several imaging sites, but it is also required to have similar label distributions across sites and to have high counts of samples per site.
Second, benchmark data sets should require no further pre-processing of the data, or should provide code for additional steps such as cropping or intensity harmonization. In neuroimaging, MRI pre-processing is considerably more complex than on natural images. Consequently, the selection of pre-processing steps and tools has a significant impact on task-driven neurobiological inference and patient-wise prediction [262]. Neuroimaging benchmark data sets therefore need to either come fully pre-processed or include code to repeat the pre-processing. The aim is not to find an optimal pre-processing strategy but rather to ensure replicability and comparability between studies.
Third, the same motivation holds for a unique split of the data into training and testing before publication. As the heterogeneity of a data set is proportional to its size \(N\)[263, 184], the prediction result will change drastically with different train and test splits. Therefore, in order to compare model performances between studies it is required that all studies use the same train and test split. Cityscapes has carefully designed a 3-way split of the data (train/validation/test) which attributes equal amounts of data based on geographical location, city size and season. Similar efforts would be suggested for neuroimaging data. Another downside of not sharing unique data splits is the risk of data leakage as described in section 3.1.2 which has been shown for AD classification in [59].
And fourth, a benchmark data set for neuroimaging would need to provide an adequate level of task difficulty. On the one hand, the prediction task needs to be difficult enough for a superior model to clearly and reliably outperform simpler baselines. If a task is too simple and models are approaching the task's irreducible error (which might be the case for sex or AD classification), then it is unlikely that differentiation of model performance can be reliably observed. On the other hand, the task cannot be too complex, so that models can be successfully trained with available sample sizes.
Creating large benchmark data sets typically requires the collaboration of several clinics to form a multi-site study. Conducting multi-site machine learning studies seems highly promising and is most likely a requirement for building generalizable models, yet properly designing multi-site studies remains a challenge. The main issue with multi-site data sets is that the data collected will have measurable differences between sites, and perfectly matching participants between sites is not feasible. Most often,
different sites have a different ratio of cases and controls. When aggregating those sites, covariates such as age, sex, total intracranial volume etc. might correlate strongly with the case-control ratio. High-capacity machine learning models will easily learn to pick up those correlations to differentiate the MR images, rather than using disease-specifying properties (see section 7.3.1). Hence, benchmarks on multi-site data sets might be overly optimistic in their results. Investigating the issue could be done by training a machine learning model solely on those covariates. If the covariates lead to a result which is not significantly different from that of the neuroimaging-based model, then the results of the latter might have no meaning beyond portraying covariate differences.
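A minimal sketch of such a covariate-only control analysis, assuming hypothetical arrays `covariates` (age, sex, site, total intracranial volume, ...), imaging-derived `features`, and diagnostic labels `y`; the choice of logistic regression is illustrative.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def covariate_baseline_check(covariates, features, y, cv=5):
    """Compare a covariate-only model against the imaging-based model."""
    cov_scores = cross_val_score(LogisticRegression(max_iter=1000),
                                 covariates, y, cv=cv)
    img_scores = cross_val_score(LogisticRegression(max_iter=1000),
                                 features, y, cv=cv)
    print(f"covariates only : {cov_scores.mean():.3f} +/- {cov_scores.std():.3f}")
    print(f"imaging features: {img_scores.mean():.3f} +/- {img_scores.std():.3f}")
    # If both perform comparably, the imaging model may mostly reflect
    # covariate differences between sites rather than disease signal.
```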
Even if not matching the criteria of benchmark data sets, the publication of large and open data sets is almost always valuable for machine learning research. However, for creating better benchmarks, study designers should ensure that the aforementioned points are met. This includes reporting any heterogeneity of the data sources, including a standardized processing pipeline, pre-determining the data set split, and selecting clinically relevant questions of appropriate difficulty to address. In addition, it is necessary to control for the correlation of covariates within and between sites.
#### 7.2.3 Computational challenges
Other impediments to the application of deep neural networks for neuroimaging pertain to computational cost. Although general-purpose GPUs (GPGPUs) have enabled deep learning to take off in many fields, computational resources remain expensive and especially neuroimaging-based deep learning is still limited by technical capacities. In contrast to other applications where cloud computing can be used, medical data typically needs to stay within the boundaries of the hospital and cannot be transferred to the cloud provider's network. Furthermore, as sMRI data is 3-dimensional, it requires cubic operations leading to extensive GPU memory utilization. With images that often have more than a million voxels, the memory of a typical GPU is quickly exceeded when using very deep neural networks, large batch sizes or wide fully-connected layers. In fMRI, which has an additional time dimension, the issue becomes even more severe. Therefore, most researchers are highly limited in the amount of architecture search they can do. Many deep learning applications are extremely sensitive to hyperparameters. However, the optimization problem of finding hyperparameters is ill-defined and often results in random searches. This means that usually the search space
of hyperparameters is underexplored and one cannot prove that no better settings exist. This issue is similarly pressing when training simpler baseline models and comparing the suggested model to those baselines. Here, benchmark data sets as described in section 7.2.2 could help by giving many researchers the opportunity to search for hyperparameters, allowing them to reference the current state-of-the-art results as baselines.
### Validity and explainability
#### 7.3.1 Algorithmic bias
Challenges arise not only in training deep neural networks for psychiatry, but also in generalisation. How do trained models deal with new data samples that were generated after training and validation were already completed? In day-to-day clinical use, new data samples may differ from the training data in subtle but critical ways. The MRI scanner may be of a different model, scanning protocols or pre-processing pipelines may differ, and certain ethnicities or comorbidities may not have been included in the training data sample. This can lead to critical errors. The predictions of a deep neural network are meaningful only for new data samples from the same distribution that generated the original training data [264]. Goodfellow et al. [265] argue that "classifiers based on modern machine learning techniques, even those that obtain excellent performance on the test set, are not learning the true underlying concepts that determine the correct output label. Instead, these algorithms have built a Potemkin village that works well on naturally occurring data, but is exposed as a fake when one visits points in space that do not have high probability in the data distribution". A deep neural network will still make predictions for out-of-distribution samples, but these predictions will often be meaningless or misleading.
Furthermore, the expressivity of deep neural networks exacerbates distortions arising from sampling biases and confounding variables in the data [266; 267]. A deep neural network has neither the goal nor the ability to prefer causal relationships over spurious associations and will exploit whatever serves to predict the target variable [94]. Hence, biases that exist in the training data will be reproduced in the learned model [268]. This phenomenon is well studied and to some extent under control for simple linear models, where (known) confounders can be anticipated and corrected for [269]. However, the highly expressive deep neural networks may pick up highly complex and unexpected biases in the training data. For instance, a deep neural network trained to classify skin lesions learned that dermatologists tend to include a
ruler in the photos of skin lesions they were particularly concerned about, and used the ruler as a proxy for malignancy [270]. Thus, one flip-side of the high expressive capacity of deep neural networks is the resulting difficulty to control for potential biases in the predictions. Such "algorithmic biases" have diverse practical impacts. Minorities, who are often underrepresented in training data sets, have a high chance of being systematically misclassified, with potentially severe clinical consequences [271; 272]. Deep neural networks' ability to exploit the most subtle biases in the training data represents a significant problem for multi-site studies, where deep learning may pick up on even the slightest sampling differences between sites and thus use measurement site as a proxy for the actually studied phenomenon [266], further corrupting predictions on new data.
#### 7.3.2 Confound modeling
While the preceding section was most concerned with prediction errors, similar complications arise in the interpretation of results. Frequently, researchers will not only attempt to predict a target variable but will investigate the relative contributions of different inputs on a model's accuracy, for instance to delineate neural correlates of a psychopathology from confounds like gender, age, or education which may influence the risk of disease.
The high expressivity of deep neural networks and the necessity of comparatively large sample sizes for training them may require novel approaches to disentangle the impact of confounds on prediction performance, as traditional approaches become increasingly impractical. For instance, a priori balancing of the training data (i.e., matched, case controlled study design) is costly, limiting the achievable sample size and thus the prediction performance. Post hoc balancing by subsampling [273; 274] similarly reduces the effective sample size available for training the neural network. Regressing out the influence of confounds directly from the neuroimaging data [275; 276; 277; 273] works for simple linear models, but will not completely eradicate complex nonlinear traces of the confounding variables from the input. Machine learning models, particularly the highly expressive deep neural networks, may still exploit residual information that could not be regressed out. In a model agnostic approach, Dinga et al. [278] engage this problem by controlling for confounds post hoc on the level of model predictions.
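For reference, a minimal sketch of linear confound removal by residualization, with hypothetical array names; as noted above, this removes only the linear traces of the confounds. To avoid leakage, the coefficients must be estimated on the training data only and then applied unchanged to the test data.

```python
import numpy as np

def fit_confound_model(X_train, C_train):
    """Per-feature OLS coefficients of confounds C on features X."""
    C1 = np.column_stack([np.ones(len(C_train)), C_train])  # add intercept
    beta, *_ = np.linalg.lstsq(C1, X_train, rcond=None)
    return beta

def remove_confounds(X, C, beta):
    """Subtract the fitted linear confound effect from the features.
    Nonlinear dependence on the confounds may remain in the residuals."""
    C1 = np.column_stack([np.ones(len(C)), C])
    return X - C1 @ beta

# beta = fit_confound_model(X_train, C_train)        # training data only
# X_train_res = remove_confounds(X_train, C_train, beta)
# X_test_res = remove_confounds(X_test, C_test, beta)
```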
Deep learning does not only complicate confound modelling, but also allows for novel solutions to this problem. Some researchers propose using confounding variables as additional targets in a multi-task setting and to
subsequently zero the model weights which are important for the confounding variable but not the target variable [279]. Others advocate incorporating confounds directly into the model (e.g., by disentangling them from the intermediate representations [117; 167]), or even using adversarial optimisation objectives [280] to train deep neural networks that correct against the influence of confounding variables.
Although properties of deep neural networks may exacerbate problems of algorithmic bias and confound modelling, the flexibility of deep neural networks to incorporate and isolate confounding variables within the model may in the future offer uniquely powerful solutions to deal with heterogeneous biomedical imaging data.
#### 7.3.3 Explainability
Another way to check for biased or implausible predictions is to interrogate the model's decision-making process (see section 3.2.3). However, even the features of linear models are not unambiguous in neuroimaging [281; 282]. Since deep neural networks have orders of magnitudes more parameters than most other machine learning models, they are extremely hard to comprehend from a human perspective. While in computer vision we are able to visualize the filters of convolutional layers and to assign meaning to them (i.e. edge detectors or more abstract concepts such as eyes for face recognition [103]), this becomes more challenging in neuroimaging, as sMRI is 3-dimensional and abstract concepts are less immediate (e.g., what would a filter for atrophy look like?).
Attribution methods provide a popular approach for reasoning about deep learning models. In high-risk settings such as medicine, the performance of an algorithm might have a vital impact on an individual. Hence, it has been argued that explainability of medical algorithms might become a prerequisite for clinical adoption. On the other hand, it has been discussed (in, e.g., London [283]) whether explainability should be a requirement for machine learning systems that have a high accuracy, considering that the comprehension of many other causal mechanisms in medicine, such as the mechanisms of certain drugs or psychotherapy, is incomplete. In any case, Lapuschkin et al. [94] have shown how post-hoc analyses of models with high accuracy can identify artifacts in the training data which the models misuse for classification. This showed that attribution methods can help to identify bias or subpar learning strategies, which in neuroimaging could for example be a focus on scanner artifacts, human motion, or random correlations between brain features and
group membership. As a caveat, it was recently shown that most attribution methods only minimally change their produced heatmaps when several layers of the model are being randomized [284; 285]; how to quantitatively assess attribution methods remains a challenge. Here, neuroimaging creates an opportunity, as the quantitative validation of attribution maps using neurobiological knowledge is more objective than the sole visual inspection [30; 31; 286].
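As a sketch of such a quantitative check, one can compare a simple gradient-based attribution map before and after re-initializing the model's output layer; a map that barely changes is a warning sign. This is an illustrative reconstruction of the randomization idea, not code from the cited works.

```python
import copy
import torch
import torch.nn.functional as F

def gradient_saliency(model, x):
    """Plain input-gradient attribution for the top predicted class."""
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.detach().abs()

def randomization_check(model, x):
    base = gradient_saliency(model, x)
    randomized = copy.deepcopy(model)
    head = [m for m in randomized.modules()
            if isinstance(m, torch.nn.Linear)][-1]  # final classification layer
    torch.nn.init.normal_(head.weight)
    torch.nn.init.zeros_(head.bias)
    rand_map = gradient_saliency(randomized, x)
    sim = F.cosine_similarity(base.flatten(), rand_map.flatten(), dim=0)
    print(f"map similarity after head randomization: {sim.item():.2f}")
```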
## 8 Conclusion
While machine and in particular deep learning approaches provide a huge potential for transforming neuroimaging-based psychiatric research towards precision psychiatry, the field is still at the very beginning. From a structural perspective, the further collection of large, multi-site, standardized, and open-source neuroimaging data sets with overlapping disease categories and thorough clinical and behavioral characterization is essential. For addressing questions about disease prognosis and treatment outcome, longitudinal data will be increasingly important. In addition, efforts should be made in order to make analyses between different groups comparable, e.g., by creating benchmark data sets. The key to success will then be whether clinically meaningful and transparent representations can be learned from neuroimaging data that additionally account for individual and site-specific covariates.
## 9 Acknowledgements
We acknowledge support from the German Research Foundation (DFG, 389563835; 402170461-TRR 265; 414984028-CRC 1404), the Deutsche Multiple Sklerose Gesellschaft (DMSG), the Manfred and Ursula-Muller Stiftung, and the Brain & Behavior Research Foundation (NARSAD grant; USA).
|
2304.09783 | Application of attention-based Siamese composite neural network in
medical image recognition | Medical image recognition often faces the problem of insufficient data in
practical applications. Image recognition and processing under few-shot
conditions will produce overfitting, low recognition accuracy, low reliability
and insufficient robustness. It is often the case that the difference of
characteristics is subtle, and the recognition is affected by perspectives,
background, occlusion and other factors, which increases the difficulty of
recognition. Furthermore, in fine-grained images, the few-shot problem leads to
insufficient useful feature information in the images. Considering the
characteristics of few-shot and fine-grained image recognition, this study has
established a recognition model based on attention and Siamese neural network.
Aiming at the problem of few-shot samples, a Siamese neural network suitable
for classification is proposed. The Attention-Based neural network is
used as the main network to improve the classification effect. Covid-19 lung
samples have been selected for testing the model. The results show that the
fewer the image samples are, the more obvious the advantage becomes over
an ordinary neural network. | Zihao Huang, Yue Wang, Weixing Xin, Xingtong Lin, Huizhen Li, Haowen Chen, Yizhen Lao, Xia Chen | 2023-04-19T16:09:59Z | http://arxiv.org/abs/2304.09783v3 | # Application of attention-based Siamese composite neural network in medical image recognition
###### Abstract
Medical image recognition often faces the problem of insufficient data in practical applications. Image recognition and processing under few-shot conditions will produce overfitting, low recognition accuracy, low reliability and insufficient robustness. It is often the case that the difference of characteristics is subtle, and the recognition is affected by perspectives, background, occlusion and other factors, which increases the difficulty of recognition. Furthermore, in fine-grained images, the few-shot problem leads to insufficient useful feature information in the images. Considering the characteristics of few-shot and fine-grained image recognition, this study has established a recognition model based on attention and Siamese neural network. Aiming at the problem of few-shot samples, a Siamese neural network suitable for classification is proposed. The Attention-Based neural network is used as the main network to improve the classification effect. Covid-19 lung samples have been selected for testing the model. The results show that the fewer the image samples are, the more obvious the advantage becomes over an ordinary neural network.
Keywords: medical image recognition, few-shot, fine-grained, Siamese neural network, attention mechanism, Covid-19
## 1 Introduction
In recent years, big data training for image recognition has been studied in depth. Many classical convolutional neural networks (such as Resnet18) have been widely used in the field of image recognition. However, these classical models require a large number of samples, which is often inadequate in practical applications, where only an insufficient set of samples is available. Such applications include medical image analysis, crop disease identification, as well as applications in the military and financial fields. With only a small set of samples, deep learning suffers from over-fitting, low recognition accuracy and insufficient robustness. Therefore, few-shot learning is one of the challenges that image recognition often has to face. On the other hand, computer vision and artificial neural network algorithms have been widely used in identifying large categories of objects, such as cat and dog classification, vehicle roadblock recognition, handwritten numeral recognition, face recognition, etc.[1; 2; 3; 4]. Identifying large classes of objects is comparatively easy, whereas identifying more refined sub-classes is harder, because the visual differences between different subcategories of the same large category are small. When image recognition is additionally affected by interference factors, it is even more difficult to make correct identifications. Therefore, fine-grained image recognition is a challenge that intelligent image recognition often faces.
The applications of deep learning in image recognition, such as in medical image analysis, often face both challenges identified above. Typically, in the field of medical image analysis, computer-aided diagnosis systems can be used to improve the diagnostic accuracy of human experts[5], and may even replace
human experts in the future. At present, owing to the fact that many images are not in digital format, image quality varies between sources, and data annotation consumes a lot of time and labor, the standard sample data required for the learning model are insufficient. This is especially true in the cases of rare diseases, diseases not yet well understood, and epidemics at an early stage. In cases like these, the number of samples available is limited, resulting in enormous difficulties in medical image recognition. Moreover, many medical image analyses depend on subtle differences in the images to diagnose subtle changes in the early stage of a disease or to distinguish between different diseases of the same organ, and varying degrees of image noise further increase the difficulty of medical image recognition. In all, few-shot and fine-grained image recognition are two main challenges faced by deep learning technology in the application of medical image analysis.
For the problem of few-shot learning, Y. Taigman et al.[6] proposed the Siamese neural network. A Siamese neural network contains two or more sub-networks with the same parameters and weights. Parameter updating can be carried out jointly on the sub-networks. Since the sub-networks share weights, training does not require a large number of parameters. Therefore, it has advantages in identifying small sample image sets and is less prone to over-fitting. This study is based on the Siamese neural network, whose structure is modified to establish the recognition model.
Traditional neural networks extract image features uniformly, both across channels and across the spatial plane. But when people make observations, they typically focus on particular elements. One direction in the development of deep learning is to simulate this 'selectability and differentiability' of human observers. This is the 'attention model' proposed in image recognition research. Such a model can selectively extract features from an image, addressing the problem that differences between image sub-classes are small and that recognition is strongly affected by perspectives, backgrounds, occlusion, noise and other factors[7]. Therefore, the image recognition model established in this study uses the 'attention mechanism' principle.
To summarize, this study has established a new composite deep learning model based on the attention mechanism and the Siamese neural network model. First, we establish a classifiable Siamese neural network. Then, the effects of different attention mechanisms on different networks are tested, and the model with the best performance is selected to be embedded into the classifiable Siamese neural network to improve the model's effectiveness. Covid-19 lung samples are used for testing the model.
The experimental results show that the composite deep learning model developed in this study has the following advantages.
* It addresses the traditional deep learning requirement for a large number of samples. The established model shows obvious advantages in identifying small sample image sets such as Covid-19. The fewer the samples are, the better the recognition effect is compared with a traditional network.
* The problem of unfocused feature extraction in deep learning is addressed. The lungs infected by Covid-19 can be identified from the lung images. With the attention mechanism added, the classification accuracy on the samples is improved.
* The proposed lightweight composite model simplifies the computation and reduces the computing power required. Covid-19 can be diagnosed rapidly and at a lower cost, which is significant for the rapid diagnosis of epidemic diseases, and the model can be easily deployed.
## 2 Related work
### Recognition of medical images
Ever since scientists discovered X-rays, X-ray imaging has been an indispensable tool, and doctors have been increasingly relying on medical imaging for diagnosis. However, doctors' assessments of medical images are, unavoidably, to a certain degree subjective: different doctors can give different interpretations of the same image, and errors in this high-intensity work may also lead to false diagnoses. This motivates the development of computation-based technology to assist medical image recognition.
Since the outbreak of Covid-19 in 2019, related research based on deep learning has made good progress[8]. The diagnosis of suspected cases by RT-PCR has received positive results. However, relevant studies have shown that the detection rate of RT-PCR is far from adequate, generally about 30%-60%[9], and chest CT images can assist in detecting the disease only to a certain extent. To improve the efficacy, a deep learning model might be useful to detect and classify the lung images of Covid-19 infection, assisting doctors to identify 'highly suspected pneumonia patients'. In 2020, Bai et al. first segmented the CT image and used 2D EfficientNetB4 to score the segmented images. With the idea of ensemble learning, the scores of many CT slices were integrated and the final prediction was made[10]. In the same year, Wang et al. first used the U-Net segmentation model to obtain images of the lung region, used the segmented images and the original images as the input of a 3D deep neural network, and predicted the probability of COVID-19 infection.
### Few-Shot Learning and Siamese Neural Network
In recent years, deep learning technology has seen abundant research results in the field of image recognition. However, most of the current technologies require the support of a large number of samples, which limits the application of deep learning technology in cases where only a small number of samples is available. These include some cases in medical fields, astronomy, and the identification of endangered animals and plants, to name a few. At present, the solutions for few-shot learning include transfer learning, data augmentation and meta-learning[11; 12; 13].
However, transfer learning has high requirements for the source domain samples; if the selection is incorrect, it may produce negative effects. As for the second method, the effectiveness of data augmentation for improving fine-grained
image recognition is limited, and it may increase the impact of noise. The main purpose of meta-learning is to learn a universal and generalized representation method from data, which can be directly used on the target data set, such as the "comparative ability" learned by Siamese neural networks. The Siamese neural network was first proposed in 1993. In 2005, Chopra et al. proposed a network to train "comparative ability" from data[14].
### Fine-Grained Image Recognition and Attention Mechanism
At present, there are two mainstream methods for fine-grained image recognition: recognition models based on strong supervision information and recognition models based on weak supervision information [15, 16, 17, 18].
In addition to the two methods mentioned above, an attention mechanism has also been proposed. Through task-oriented screening and filtering of information, the human attention mechanism enables our brain to focus on processing the information related to tasks, thereby improving the efficiency of information processing and utilization. The attention mechanism in computation was first proposed in the field of computer vision. Although the essential concept has long existed, it gained real attention only when the Google DeepMind team used the concept in image classification in June 2014. In the same year, Bahdanau et al. used the concept in machine translation[19]. Following these developments, the attention mechanism has been widely adopted in deep learning. In the field of computer vision, there are several classic attention mechanisms. SENet, proposed in 2017, laid the foundation for the development of subsequent channel attention mechanisms[20]. SKNet and ECANet were developed on this foundation[21, 22]. In 2019, Li et al. proposed a lightweight attention mechanism, SGENet, which combines spatial and channel attention mechanisms[23].
## 3 Methodology
Firstly, we have studied the effects of different attention mechanisms on classical neural networks. Secondly, using the idea of the Siamese neural network, we have constructed a classifiable Siamese neural network model, embedded the attention-based neural network which performs well, and tested the effectiveness of the model. After that, we have gradually reduced the sample size and compared the results with those obtained without the attention-based Siamese neural network, in order to explore its role in few-shot learning. In the end, we build an attention-based Siamese neural network and achieve good recognition accuracy in the identification of Covid-19 lungs.
### Attention-Based Neural Network
First of all, we need to assess the best combination of attention mechanisms and neural networks.
Two classical neural networks, InceptionV3 and Resnet18, are used as the foundations, into which the attention mechanisms SE, SK, SGE and ECA are respectively embedded. Since an attention mechanism does not change the shape of the feature map, it can be placed after any convolution layer of a neural network. In order to save computational resources and avoid destroying the initial network structure, we have designed a composite attention-based neural network.
InceptionV3 consists of 11 blocks of five types. Each block consists of many convolution layers and pooling layers, and the model is relatively complex. Therefore, adding too many attention modules should be avoided. Even with one attention module per block, we would have 11 new modules, a number that could still be too large. Therefore, we decided to add one attention module to each type of block. In total, we add five attention modules.
For Resnet18, in addition to the ordinary convolution layer and pooling layer, the most important components are the residual blocks. Because a residual block is a structure of skip connections, in order to maintain the integrity of such a structure, the attention module should not be placed inside a residual block. Each two residual blocks can be regarded as a whole. Therefore, we consider each two residual blocks as a layer, a total of four layers, and add an attention module at the end of each layer. Four attention modules are added.
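The following sketch illustrates this placement for Resnet18 in PyTorch, using the SE module as the simplest of the four mechanisms; the SK, SGE and ECA modules would be inserted at the same four positions. This is an illustrative reconstruction, not the exact implementation used in the experiments.

```python
import torch.nn as nn
from torchvision.models import resnet18

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = x.mean(dim=(2, 3))             # squeeze: global average pooling
        w = self.fc(w)[:, :, None, None]   # excitation: per-channel weights
        return x * w                       # reweight the feature maps

model = resnet18(num_classes=4)
# One attention module at the end of each of the four residual stages,
# leaving the skip connections inside the blocks untouched.
for stage_name, channels in [("layer1", 64), ("layer2", 128),
                             ("layer3", 256), ("layer4", 512)]:
    stage = getattr(model, stage_name)
    setattr(model, stage_name, nn.Sequential(stage, SEBlock(channels)))
```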
The function that the attention-based neural network needs to perform is multi-class classification of images, for which we use a cross-entropy loss function. Because there are four sample categories, the four-class cross-entropy loss function is described by the following:
\[L=\frac{1}{N}\sum_{i}L_{i}=-\frac{1}{N}\sum_{i}\sum_{c=1}^{4}y_{ic}\log{(p_{ic })} \tag{1}\]
where \(N\) is the number of samples, and \(y_{ic}\) is either 0 or 1. If the real category of sample \(i\) is equal to \(c\), then \(y_{ic}\) is 1, otherwise 0. \(p_{ic}\) is the probability that the observation sample \(i\) belongs to category \(c\).
### Classifiable Attention-Based Siamese Composite Neural Network
#### 3.2.1 Establishment of Attention-Based Siamese Composite Neural Network
Furthermore, we have integrated the attention-based neural network into a classifiable Siamese neural network.
A classifiable Siamese neural network can be divided into two parts: a front-end contrast-training structure and a back-end classifying-predicting structure. The contrast-training structure accepts inputs of two images and outputs the spatial distance of the two images. The training process is to make the distance of the same category of images in space as small as possible, and the distance of different categories of images in space as large as possible. The back end receives the distance from the front end and outputs the category of an image. Its overall network structure has two main types, training structure and predicting structure (Figures 1-a and 1-b).
As shown in Fig. 1-a, the training structure accepts a sample pair, all from the training set, and the front end and the back end are trained at the same time. In Fig. 1-b, the picture in the predicting structure is a picture to be classified, and the other
picture is from the training set. Finally, the category is the output.
The classifying-predicting structure takes the distance output by the front end as the input, and iteratively updates the probabilities of the four categories. The formula of the probability iteration is the following:
\[\text{base}[i]=\text{base}[i]+(\text{predict}[i]-0.5) \tag{2}\]
Here, \(\text{predict}[i]\) denotes the probability, output by the back-end logistic regression, that the unknown image belongs to the same class as a known class-\(i\) image. When the accumulated probability of a certain category exceeds 1, the sample is determined to belong to this category. If the probability of a certain category drops below 0, the sample does not belong to this category, and the probability iteration of this category is abandoned.
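A hypothetical sketch of this classification-prediction loop is given below; the `front_end` and `back_end` call signatures are assumptions about the interface, and a round cap is added to guarantee termination.

```python
import random

def predict_category(front_end, back_end, x_unknown, reference_sets,
                     base=0.5, max_rounds=100):
    """reference_sets: dict mapping class index -> list of training images.
    front_end(a, b) is assumed to return the embedding distance of a pair,
    back_end(d) the probability that the pair is of the same class."""
    scores = {c: base for c in reference_sets}
    active = set(reference_sets)
    for _ in range(max_rounds):
        if not active:
            break
        for c in list(active):
            x_ref = random.choice(reference_sets[c])
            p = back_end(front_end(x_unknown, x_ref))
            scores[c] += p - 0.5                 # Eq. (2)
            if scores[c] >= 1.0:
                return c                         # accept this category
            if scores[c] <= 0.0:
                active.discard(c)                # rule this category out
    return max(scores, key=scores.get)           # fallback decision
```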
#### 3.2.2 Loss Function of Classifiable Attention-Based Siamese Neural Network
Our objective is to train the network so that the distance between images belonging to the same category is as small as possible, while the distance between images belonging to different categories is as large as possible. Therefore, we choose the contrastive loss function[24]. The contrastive loss function formula is:
\[L=\frac{1}{2N}\sum_{n=1}^{N}\left(yd^{2}+(1-y)\max(\text{margin}-d,0)^{2}\right) \tag{3}\]
where \(d\) represents the distance between two vectors, which is the output of the front end, and \(y\) represents the label of a sample pair; consistent with Eq. (3), \(y=1\) indicates that the two images belong to the same category, and \(y=0\) that they do not. Margin represents the threshold of the distance: when the two samples belong to different categories and their distance exceeds the threshold, the loss is set to 0. In our algorithm, the margin is set to 2. From the above contrastive loss function, when the samples belong to the same category, the loss function is \(\sum_{n=1}^{N}yd^{2}\). Therefore, as the neural network learns, the loss value decreases continuously, and the distance between the samples decreases as well. When the samples do not belong to the same category, the loss function is \(\sum_{n=1}^{N}(1-y)\max(\text{margin}-d,0)^{2}\); the loss value decreases continuously, and the distance \(d\) increases. This is consistent with our expectation of using distance to evaluate whether samples belong to the same class.
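As a sketch, Eq. (3) can be written in PyTorch as follows, with \(y=1\) for same-class pairs as above:

```python
import torch.nn.functional as F

def contrastive_loss(z1, z2, y, margin=2.0):
    """z1, z2: (N, D) front-end embeddings; y: (N,) pair labels, 1 = same class."""
    d = F.pairwise_distance(z1, z2)              # Euclidean distance per pair
    same = y * d.pow(2)                          # pull same-class pairs together
    diff = (1 - y) * F.relu(margin - d).pow(2)   # push different pairs past margin
    return (same + diff).mean() / 2              # the 1/(2N) factor of Eq. (3)
```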
### Model Evaluation Indicators
In the experiment, the task is a four-class classification problem, and the kappa coefficient is used for evaluation [2]. The calculation formula is the following:
\[k=\frac{p_{o}-p_{e}}{1-p_{e}} \tag{4}\]
Among them, \(p_{o}\) is the sum of the correctly classified samples of each class divided by the total number of samples, which is simply our accuracy. The calculation of \(p_{e}\) is given by:
\[p_{e}=\frac{a_{1}\times b_{1}+a_{2}\times b_{2}+\ldots+a_{C}\times b_{C}}{n \times n} \tag{5}\]
where \(a_{1},a_{2},\ldots,a_{C}\) are the numbers of real samples for each class, \(b_{1},b_{2},\ldots,b_{C}\) are the numbers of samples predicted for each class, and \(n\) is the total number of samples. The kappa coefficient lies between \(-1\) and \(1\); the greater the kappa coefficient is, the better the model classification effect is.
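For reference, Eqs. (4) and (5) translate directly into code; `sklearn.metrics.cohen_kappa_score` computes the same quantity.

```python
import numpy as np

def cohen_kappa(y_true, y_pred, n_classes=4):
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                   # confusion matrix: rows true, cols predicted
    n = cm.sum()
    p_o = np.trace(cm) / n              # observed accuracy, Eq. (4)
    p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2  # chance agreement, Eq. (5)
    return (p_o - p_e) / (1 - p_e)
```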
## 4 Experiment
In this section, we will describe the hardware we use, the data set, and the details and results of the experiment. All experiments are completed using PyTorch 1.9.0. In terms of hardware, two NVIDIA Titan Xp GPUs are used for acceleration, and the CUDA version is 11.2.
### Data Sets
In this experiment, we use one dataset, the COVID-19 Radiography Database[25; 26].
COVID-19 Radiography Database: A team of researchers from Qatar University, Doha, Qatar, and the University of Dhaka, Bangladesh, along with their collaborators from Pakistan and Malaysia, in collaboration with medical doctors, have created a database of chest X-ray images for COVID-19 positive cases along with normal and viral pneumonia images. The dataset is still being updated; it initially contained only 219 COVID-19, 1341 normal, and 1345 viral pneumonia chest images. Up to now, the dataset has 3616 COVID-19 lung images, 6012 lung opacity images, and 1345 viral pneumonia images, with corresponding lung masks. The dataset also reflects the problem of sample scarcity in the early outbreak of an infectious disease, which underscores the necessity of few-shot learning.
### Experiment Detail
#### 4.2.1 Attention-Based Neural Network
In order to test the best combination of different attention mechanisms and different networks, we have done the following experiments.
**COVID-19 Radiography Database**: Because we only needed to test the effectiveness of the attention mechanism, we streamlined the dataset, and in order to balance the data, we decided to randomly keep 400 images for each category.
**Division of Data Sets**: To evaluate the model's ability to learn and predict, each category was divided into a training set of 300 images and a testing set of 100 images, so the training sets account for 75% of the total number of samples. The number of samples is balanced in both the training set and the testing set, which improves the reliability of the model assessment.
**Experiment Process and Results**
Next, we complete the fusion of the two networks and four attention mechanisms, and test them on the constructed testing set. The final results report the optimal kappa value on the testing set (Table 1).
As shown in Table 1, after adding the attention mechanism, the classification effect of InceptionV3 is slightly improved. Resnet18 is also improved.
#### 4.2.2 Attention-based Siamese Composite Neural Network
**Data Reduction** Since our research addresses few-shot problems, the training set should be reduced. We use two training sets, one with only 10 pictures in each category and one with only 20 pictures in each category, while the test set remains the one used in the previous part.
**Construction of Dataset** The dataset of the attention-based Siamese neural network is mainly divided into three categories: the training set used during training, the test set used during testing, and the reference training set that is consulted during testing.
Training set during training: since each image can form a paired sample with any other image, the training set of 10 pictures in each category can form up to 39 + 38 + ... + 1 = 780 paired samples, and the training set of 20 pictures in each category can form up to 79 + 78 + ... + 1 = 3160 paired samples. Therefore, the sample size (length) of the training set during training should be optimized, and the number of positive and negative samples should be uniform (Fig. 4).
As shown in Fig. 4, a training set of size 8 is constructed. The sample labels are [0,1,0,0,0,1,1,1], containing 4 positive samples and 4 negative samples. During testing, the testing set and the reference training set are made of single images, and the batch size is fixed to 1. Images are taken one by one from the testing set and randomly paired with each type of image in the reference training set until the test is complete.
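A hypothetical sketch of such a balanced pair construction, with names chosen for illustration (labels follow the convention \(y=1\) for same-class pairs):

```python
import random
from torch.utils.data import Dataset

class BalancedPairDataset(Dataset):
    """Draws a fixed number of image pairs, half same-class, half different."""
    def __init__(self, images, labels, length):
        self.by_class = {}
        for img, lab in zip(images, labels):
            self.by_class.setdefault(lab, []).append(img)
        self.length = length

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        same = idx % 2 == 0                       # alternate positive/negative
        c1 = random.choice(list(self.by_class))
        if same:
            a, b = random.sample(self.by_class[c1], 2)
        else:
            c2 = random.choice([c for c in self.by_class if c != c1])
            a = random.choice(self.by_class[c1])
            b = random.choice(self.by_class[c2])
        return a, b, 1.0 if same else 0.0
```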
**Experiment Process and Results** First, we set the hyperparameters: In this section, the hyperparameters include the length of the training set, the batch size of the training set, the base value in the classification-prediction structure, the learning rate and the epochs. Because the batch size of the two data sets in the test part is set to 1, it is not a hyperparameter (Table 2).
As shown in Table 2, all hyperparameters are consistent except for the length of the training set.
In order to illustrate the advantages of the attention-based Siamese network in solving few-shot problems, we run both the neural network using only the attention mechanism and the attention-based Siamese neural network. We select the two best networks from the previous section as the attention-based neural network: Resnet18-SK and InceptionV3-SGE.
Firstly, the performance on the training set with 20 images per class is tested in the two modes and for the two kinds of neural networks. We still use the kappa coefficient as the evaluation index, reporting the best value on the testing set (Table 3).
As shown in Table 3, with few-shot samples, all kappa values decrease, but the attention-based Siamese composite neural network is clearly superior to the neural network with only the attention mechanism from the previous part.
Next, we continue to reduce the number of training set images, and test the performance on the training set with 10 images per class on the six models (Table 4).
As shown in Table 4, with the further reduction of images per class, the neural network with only the attention mechanism has essentially lost its learning ability: the model cannot make meaningful predictions, and the kappa value approaches 0. In contrast, the classification ability of the attention-based Siamese composite neural network shows almost no change.
In addition to the best kappa coefficient on the testing set, we are also concerned about the training process of the model. We find that in few-shot problems, the training stability and the speed of model convergence differ between the two modes (Fig. 5).
| Training set | length | batch size | base | lr | epochs |
| --- | --- | --- | --- | --- | --- |
| 10 images | 500 | 32 | 0.5 | 0.001 | 30 |
| 20 images | 2000 | 32 | 0.5 | 0.001 | 30 |

Table 2: The hyperparameters of the different training sets
| Attention-based neural network | Kappa of testing set (no Siamese) | Kappa of testing set (Siamese) |
| --- | --- | --- |
| Resnet18-SK | 0.334 | 0.563 |
| InceptionV3-SGE | 0.378 | 0.503 |

Table 3: Test results for a training set of 20 images per class on different networks
Figure 4: Training samples
| Attention mechanism | ResNet18 | InceptionV3 |
| --- | --- | --- |
| None | 0.688 | 0.792 |
| SENet | 0.816 | 0.868 |
| SK | 0.847 | 0.875 |
| SGE | 0.795 | 0.882 |
| ECA | 0.833 | 0.865 |

Table 1: The results (optimal testing-set kappa) of the attention-based neural networks
Figure 5: Classifiable Attention-Based Siamese neural network
Figure 5 shows the variation of kappa with the number of iterations for Resnet + SK with 20 Covid-19 lung samples per class. It can be seen that when the Siamese neural network is not used, the convergence rate is slow, the training fluctuates considerably, and the training kappa peak is only 0.3338. After using the Siamese network, the convergence rate of the model is fast, the model fluctuation is slight, the training is more stable, and the training peak even reaches 0.5633.
## 5 Conclusion
This paper presents a composite model based on an attention mechanism and a Siamese neural network, which can be used to solve few-shot and fine-grained recognition problems. Good performance is demonstrated in the application of Covid-19 lung identification. With a training set of only 10 images in each category, a traditional convolutional neural network is incapable of learning any useful features and cannot classify the unknown categories; its kappa coefficient is essentially 0. On the other hand, with the same data set, the model proposed in this paper improves the kappa coefficient remarkably.
For a traditional convolutional neural network, the effectiveness of the model decreases as the number of categories increases. In contrast, owing to the attention-based Siamese neural network's learned ability to compare, the model retains stronger learning ability when there are more categories of samples. We can expect more in-depth research on the relationship between the number of categories and model learning ability in the future. Moreover, not only can this model be applied to the identification of Covid-19 lungs, but it can also be extended to other few-shot and fine-grained image recognition fields.
## 6 Acknowledgements
We gratefully acknowledge the financial support from the Innovation and Entrepreneurship Training Program of Hunan Province (Grant number: S202110532360), the Natural Science Foundation of Changsha City (Grant number: KQ2202137), and the Changsha City takes the lead in major science and technology projects (No. KQ2102002).
## 7 Statement
This preprint is currently under consideration at Pattern Recognition Letters.
|
2309.02158 | Traffic Light Recognition using Convolutional Neural Networks: A Survey | Real-time traffic light recognition is essential for autonomous driving. Yet,
a cohesive overview of the underlying model architectures for this task is
currently missing. In this work, we conduct a comprehensive survey and analysis
of traffic light recognition methods that use convolutional neural networks
(CNNs). We focus on two essential aspects: datasets and CNN architectures.
Based on an underlying architecture, we cluster methods into three major
groups: (1) modifications of generic object detectors which compensate for
specific task characteristics, (2) multi-stage approaches involving both
rule-based and CNN components, and (3) task-specific single-stage methods. We
describe the most important works in each cluster, discuss the usage of the
datasets, and identify research gaps. | Svetlana Pavlitska, Nico Lambing, Ashok Kumar Bangaru, J. Marius Zöllner | 2023-09-05T11:50:38Z | http://arxiv.org/abs/2309.02158v1 | # Traffic Light Recognition using Convolutional Neural Networks: A Survey
###### Abstract
Real-time traffic light recognition is essential for autonomous driving. Yet, a cohesive overview of the underlying model architectures for this task is currently missing. In this work, we conduct a comprehensive survey and analysis of traffic light recognition methods that use convolutional neural networks (CNNs). We focus on two essential aspects: datasets and CNN architectures. Based on an underlying architecture, we cluster methods into three major groups: (1) modifications of generic object detectors which compensate for specific task characteristics, (2) multi-stage approaches involving both rule-based and CNN components, and (3) task-specific single-stage methods. We describe the most important works in each cluster, discuss the usage of the datasets, and identify research gaps.
## I Introduction
Detection and classification of traffic lights (TL) from camera images, also called _traffic light recognition_ (TLR), plays a pivotal role in enabling automated driving. It helps to maintain efficient and safe traffic flow management, reduce traffic congestion and minimize the risk of accidents. TLR as a task comprises _traffic light detection_, which aims at localizing the traffic lights in the image, as well as _classification of TL states_ (colors) and _pictograms_ (arrows), as shown in Figure 1. The development of convolutional neural networks (CNNs) has dramatically improved the accuracy of traffic light detection due to their ability to learn complex features from images. The effectiveness of CNN-based approaches depends on the choice of architecture and training data.
In this work, we review and group existing CNN-based approaches for traffic light detection and classification. Unlike existing surveys on TLR, we focus on the choice of CNN architectures. Older surveys [1, 2] focused more on classic image processing approaches since neural networks were only sporadically used then. To the best of our knowledge, the only concurrent modern work is that by Gautam et al. [3], which presents an in-depth overview but focuses on the whole pipeline. For the three main steps in the proposed pipeline (segmentation, feature extraction, and classification), Gautam et al. consider both classical computer vision approaches, like histograms of oriented gradients, and those using neural networks. In contrast, we focus on CNN model architectures and particularly on the modifications made to generic object detectors.
## II Datasets for Traffic Light Recognition
Since the appearance of traffic light signalling devices varies over different countries, a number of TLR benchmarks has been released. We provide an overview of publicly available datasets in Table I. We also refer to the journal paper by Jensen et al. [2], which gives a comprehensive overview of datasets published before 2016.
**La Route Automatisée (LaRa) dataset**[6] was one of the first publicly available datasets, published in 2015 by the French joint research unit La Route Automatisée. It contains over 11,000 images and 9,000 annotations recorded as a 25 Hz video during an approximately 9-minute ride through Paris. The images have a relatively low resolution of \(640\times 480\) pixels. All labels were annotated manually as bounding boxes (BBoxes) with object IDs for tracking evaluation. The TL state was labeled as _green, orange, red_, or _ambiguous_. Furthermore, each image was annotated with a sequence ID and timestamp.

Fig. 1: Examples of subtasks within the TLR task.
**LISA Traffic Light Dataset**[2] is a comprehensive dataset that contains over 40,000 images, originating from the Vision for Intelligent Vehicles and Applications (VIVA) challenge, which included the traffic light detection benchmark. Therefore, the dataset itself is sometimes also referred to as the _VIVA dataset_. The data was captured as a 10 Hz video using a stereo camera with an image resolution of \(1280\times 960\) pixels and a horizontal field of view of approximately 43\({}^{\circ}\). Additionally, depth disparity maps for each image are provided. The dataset consists of a training subset and a test subset; the latter is kept private to serve as the basis for benchmarking. All labels were annotated manually and are provided as pixel-level binary masks and BBoxes. TL states are encoded using seven classes: _go, go forward, go left, warning, warning left, stop, stop left_. The LISA dataset covers several US cities (San Francisco, Berkeley, and Chicago) under different lighting and weather conditions.
**Bosch Small Traffic Lights Dataset (BSTLD)**[8] was recorded along the El Camino Real in California's San Francisco Bay Area using a stereo camera. The dataset includes the corresponding disparity maps. All labels were annotated manually utilizing 15 classes to describe the color and pictogram of the TLs. The labels are provided as pixel-wise binary masks and BBoxes and are split into a training set and a test set of nearly equal size. Although the training data is labeled with a full set of 15 classes, the test data includes only four classes (_red, yellow, green, off_).
**DriveU Traffic Light Dataset (DTLD) v1.0**[9] was published in 2018 by the Intelligent User Interfaces (IUI) group at the University of Ulm in Germany. It has a number of images comparable to LISA but exceeds all other datasets in terms of the number of annotations (more than 230,000). Images with a resolution of \(2048\times 1024\) pixels were recorded by a stereo camera with a frame rate of 15 Hz. The dataset includes the corresponding disparity maps.
The DTLD v2.0 dataset was released in 2021 as an extension of the DTLD v1.0. It contains images of the same resolution and frame rate but covers a broader range of traffic scenarios, such as roundabouts and T-junctions. Both datasets were annotated with BBoxes using manual and semi-automatic methods. The manual annotation was performed by human annotators, who labeled the TLs with pixel-level accuracy. The semi-automatic annotation was performed using a deep neural network trained to detect and classify TLs in the images.
Both datasets provide a comprehensive set of labels, arranged into the following groups: direction (_front, back, left, right_), relevance/occlusion, orientation (_horizontal, vertical_), the number of lamps, state (_red, yellow, green, red-yellow, off_), and pictogram (_circle, arrow left, pedestrian_, etc.). Because of the large number of possible combinations of these tags, the resulting number of unique labels exceeds that of any other dataset. DTLD v1.0 and v2.0 were collected in eleven German cities, covering urban and suburban environments, to provide diverse TL scenarios.
Furthermore, a number of **other public datasets** either include labeled TL states or have been extended to include them. However, they lack additional attributes such as orientation, pictogram, and relevance information, which are necessary to utilize the detected TLs for autonomous driving. Examples of the datasets extended with TL states include COCO Traffic [12], where TL states were annotated in the images from the COCO [13] dataset, as well as Cityscapes TL++ dataset [10] containing images with fine annotations from the Cityscapes [14] dataset with additional TL labels for four attributes: type (_car, pedestrian, bicycle, train, unknown_), relevant (_yes, no_), visible (_yes, no_), and state (_red, red-yellow, yellow, green, off, unknown_). Other datasets containing only TL state labels are the Roboflow Self-Driving Car dataset [15], a modified version of the Udacity Self-Driving Car Dataset [16], Waymo Open Dataset [17], WPI [7], BDD100K [18], and ApolloScape [19] datasets.
## III Overview of Architectures for Traffic Light Recognition
Compared to the generic object detection task, specific challenges in TLR include small object size, sparse structure, and high variability of the background. Various works have proposed different methods to approach these issues. We cluster them into three groups: (1) **modifications** of generic object detectors, (2) **multi-stage approaches**, which perform TL localization and TL state/pictogram classification in separate steps, and (3) **task-specific single-stage approaches**, which perform TLR within a single network.
Table II summarizes existing work on CNN-based TLR approaches. In the following, we give an overview of the most important works in each group. Note that we have deliberately omitted approaches involving only TL classification without a preceding detection step (e.g., Gautam and Kumar [20]), as they are unrealistic for deployment.
### _Modifications of Generic Object Detectors_
The first group comprises approaches that use an existing CNN-based model for generic object detection with minor modifications to compensate for smaller object sizes. The corresponding approaches are marked green in Table II. Generic object detectors are especially favorable due to their inference speed.

\begin{table}
\begin{tabular}{|l c c c c c c c c c c c c|} \hline
**Dataset** & **Year** & **Ref.** & **Images** & **Resolution** & **Depth [bit]** & **Frame rate [Hz]** & **Annotations** & **Disparity** & **Pictograms** & **Classes** & **Country** & **License** \\ \hline
LaRa & 2015 & [6] & 11,179 & 640\(\times\)480 & 8 & 25 & 9,168 & & & 4 & France & N/A \\
LISA & 2016 & [2] & 43,007 & 1280\(\times\)960 & 8 & 16 & 119,231 & ✓ & ✓ & 7 & USA & CC BY-NC-SA 4.0 \\
WPI & 2016 & [7] & 3,456 & 1920\(\times\)1080 & N/A & N/A & 6,766 & & ✓ & 21, 2 & USA & N/A \\
BSTLD & 2017 & [8] & 13,427 & 1280\(\times\)720 & 8, 12 & 15 & 24,242 & ✓ & ✓ & 15, 4\({}^{*}\) & USA & MIT \\
DTLD v1.0 & 2018 & [9] & 40,979 & 2048\(\times\)1024 & 8, 16 & 15 & 229,029 & ✓ & ✓ & 423 & Germany & Academic \\
DTLD v2.0 & 2021 & [9] & 40,979 & 2048\(\times\)1024 & 8, 16 & 15 & 292,245 & ✓ & ✓ & 620 & Germany & Academic \\
Cityscapes TL++ & 2022 & [10] & 5,000 & 2048\(\times\)1024 & 16 & 17 & N/A & ✓ & & 6 & Germany & LGPL-2.1 \\
S\({}^{2}\)TLD & 2022 & [11] & 5,786 & 1080\(\times\)1920 / 720\(\times\)1280 & N/A & N/A & 14,130 & & & 5 & China & MIT \\ \hline
\end{tabular}
\end{table} TABLE I: Comparison of TLR datasets (\(*\) – the number of classes in the test subset)

TABLE II: Overview of CNN-based approaches for TLR, color-coded by group (columns: author, year, reference, approach, dataset, inference speed, reported accuracy, whether TL states and pictograms are classified, and availability of source code).
The earliest approach to modify YOLO [67] for the TLR task was presented by Jensen et al. [24]. Here, YOLOv2 [68] was modified by removing the last convolutional layer and adding three \(3\times 3\) convolutional layers with 1024 filters, followed by a \(1\times 1\) convolutional layer with the number of outputs needed for the specific detection. This model, however, only performed detection, not the classification of the TL states. Bali et al. [58] tried to replace the YOLOv2 backbone with different lightweight CNNs, whereas the best results were achieved with SqueezeNet [69].
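To make this kind of head modification concrete, the following PyTorch sketch (our illustration, not the authors' code) appends three \(3\times 3\) convolutions with 1024 filters and a final \(1\times 1\) convolution to a truncated backbone, as described above; the activation function, grid size, and five-values-per-box output layout are assumptions on our part.

```
import torch
import torch.nn as nn

class TLDetectionHead(nn.Module):
    """Illustrative YOLOv2-style detection head: three 3x3 conv layers
    with 1024 filters followed by a 1x1 conv producing the per-cell
    detection outputs (here: 5 boxes x 5 values, detection only)."""
    def __init__(self, in_channels: int = 1024, boxes_per_cell: int = 5):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 1024, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(1024, 1024, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.Conv2d(1024, 1024, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            # 5 values per box: x, y, w, h, objectness -- no class scores,
            # since this variant detects TLs without classifying states
            nn.Conv2d(1024, boxes_per_cell * 5, kernel_size=1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)

# Example: feature map from a truncated backbone (batch 1, 1024 ch, 13x13 grid)
out = TLDetectionHead()(torch.randn(1, 1024, 13, 13))
print(out.shape)  # torch.Size([1, 25, 13, 13])
```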
Muller and Dietmayer [4] presented a modified version of the SSD [70] architecture for TLR with Inception-v3 [71] instead of VGG [72] backbone for a better accuracy-speed trade-off. The authors analyzed the layer and feature map sizes of Inception-v3 and showed that they cannot guarantee the detection of objects with a width of 5 pixels. Therefore, to increase the recall on small objects, they introduced modified priors placed not in the center of each feature cell but arbitrarily using the offset vectors. Furthermore, early and late feature layers were concatenated for the BBox and confidence prediction to use context information from the early layers better. As in SSD, the confidence loss was formulated as a two-class problem (TL vs. background). Also a further layer was added to detect the TL state (_red, yellow, green, off_).
Faster R-CNN [73] was first applied by Pon et al. [25] for TLR within a joint traffic light and traffic sign detection network. Bach et al. [26] suggested further modifications to Faster R-CNN for TLR. In particular, some layers of the feature extractor network (ResNet-50) were modified. Furthermore, anchors were determined not arbitrarily but via k-means clustering of the training set BBoxes. Finally, the loss function was expanded to allow for TL classification. Han et al. [33] used the modified Faster R-CNN with a VGG16 backbone for traffic sign and traffic light detection. To account for small object size, a small region proposal generator was used. For this, the _pool4_ layer of VGG16 was removed. Additionally, the online hard example mining (OHEM) [74] approach was applied to locate small objects more robustly, which helped to increase mAP by 2-3 pp. The best results, however, were achieved with ResNet-50 [75] with dilation.
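The anchor-clustering step described for Bach et al. [26] can be sketched as follows; this is a minimal illustration with scikit-learn, not the authors' code, and it uses plain Euclidean k-means over box widths and heights (YOLO-style variants often use a 1−IoU distance instead).

```
import numpy as np
from sklearn.cluster import KMeans

def anchors_from_boxes(box_whs: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """Cluster training-set bounding-box (width, height) pairs so that the
    anchor priors match the actual size distribution of traffic lights,
    instead of using the detector's default anchors."""
    km = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(box_whs)
    anchors = km.cluster_centers_
    # Sort anchors by area for readability
    return anchors[np.argsort(anchors.prod(axis=1))]

# Toy example: tall, narrow boxes typical of traffic lights (in pixels)
rng = np.random.default_rng(0)
whs = np.stack([rng.uniform(4, 30, 500), rng.uniform(10, 80, 500)], axis=1)
print(anchors_from_boxes(whs, n_anchors=5))
```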
Abraham et al. [52] used a modified YOLOv4 [76] with cross-stage partial connections (CSP). The feature extractor contained a Darknet53 [77] backbone, a path aggregation network, spatial pyramid pooling, and a spatial attention module, while the detector used the YOLOv4 head. A similar approach was followed by Wang et al. [56]. Here, YOLOv4 with CSPDarknet-53 feature extraction network was modified by fusing certain layers and enhancing the shallow features. Furthermore, the BBox uncertainty prediction was also added. Lastly, Zhao et al. [57] showed that ShuffleNet [78] leads to better results when used as a backbone in YOLOv4.
The work by Ennahhal et al. [35] is one of the few that compared several approaches. Their results show that Faster R-CNN outperformed R-FCN [79] and SSD in terms of mAP. Later, Gokul et al. [51] also demonstrated that Faster R-CNN offers the best trade-off between accuracy and speed compared to YOLOv2 and YOLOv3.
Liu and Li [64] proposed to modify the backbone of YOLOv5\({}^{1}\). The custom backbone architecture is inspired by the U2Net [80] and contains a series of residual U-blocks. Additionally, the authors replace the C3 modules in the neck part of YOLOv5 with ConvNextBlocks [81] to improve feature extraction. The resulting model has demonstrated better accuracy compared to the baseline YOLOv5. Models based on YOLOv5s have demonstrated a remarkable inference speed of 48 FPS.
Footnote 1: [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
Finally, a single approach that goes beyond CNN-based object detectors is that by Greer et al. [65]. The authors used the Deformable DETR [82], a generic object detector with a transformer encoder-decoder architecture and features extracted using a CNN backbone (ResNet-50). The authors evaluated the impact of the salience-sensitive focal loss and showed better performance on salient traffic lights.
### _Multi-Stage Approaches_
The second group contains approaches where the TLR task is split into two subtasks, detection and classification, such that a separate model is used for each. The corresponding approaches are marked blue in Table II.
**Generic object detector + CNN for classification:** In the work by Behrendt et al. [8] introducing the BSTLD dataset, YOLO was modified to detect TL objects as small as \(3\times 10\) pixels. For this, the authors took random crops of size \(448\times 448\) from an image. Also, the number of grid cells was increased from \(7\times 7\) to \(11\times 11\). The classification part of the original YOLO loss was removed. Instead, a small classification network consisting of three convolutional and three fully-connected layers was used to detect TL states.
Lu et al. [28] proposed an approach consisting of two parts: the first one proposes attention regions that can contain traffic lights, and the second part performs localization and classifications on the cropped and resized attention regions found by the first model. Both blocks follow the Faster R-CNN architecture.
A similar approach was followed by Wang et al. [31], who used YOLOv3 [77] for the detection of regions of interest (ROI). The classification of the TL status was performed with a lightweight CNN consisting of two convolutional and two max-pooling layers. As in the previous work, the lightweight CNN receives ROIs from YOLOv3 as input and predicts one of four states (_red, green, yellow, unknown_).
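A minimal sketch of such a lightweight state classifier is shown below (our illustration; the channel counts and the \(32\times 32\) crop size are assumptions, as the exact configuration is not given here).

```
import torch
import torch.nn as nn

class TLStateClassifier(nn.Module):
    """Lightweight classifier mapping a cropped, resized TL region of
    interest to one of four states (red, green, yellow, unknown)."""
    def __init__(self, n_states: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_states)

    def forward(self, roi: torch.Tensor) -> torch.Tensor:
        x = self.features(roi)          # (B, 32, 8, 8) for 32x32 input
        return self.classifier(x.flatten(1))

logits = TLStateClassifier()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 4])
```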
Cai et al. [43] proposed a two-stage approach, where the detection part consisted of the SSDLite with MobileNetv2 [83], whereas classification was performed by a small three-layer network.
In the work by Kim et al. [40], the detection stage is performed by a semantic segmentation network, which is then used to calculate BBoxes. This choice is motivated by its better performance on very small objects. In particular, a binary version of ENet [84] is used. For the classification part, a LeNet-5-based [85] model is used. This model was shown to beat the Faster R-CNN from the authors' previous work [27] both in terms of accuracy and speed.
Jayasinghe et al. [60] used a two-stage approach, where detection was performed either with Faster R-CNN with a ResNet-50 backbone or SSD with MobileNetv2 backbone, and the classification was done with ResNet-18.
**Generic object detector + non-deep learning approach for classification:** Kim et al. [30] used an unmodified SSD with a standard VGG16 backbone as a coarse-grained detector. The fine-grained detection is performed via spatiotemporal filtering, which aims to compensate for the poor performance of SSD on small objects. The latter uses a point-based reward system, in which points are awarded for detections that are consistent in the spatial and temporal domains.
Yudin et al. [32] used a U-Net [86]-inspired fully-convolutional network to predict a grayscale map of TL locations, which is further binarized using thresholding. After that, the detected regions are clustered using DBSCAN and filtered, yielding the predicted location. The proposed approach is shown to lead to higher precision and recall compared to the SSD300.
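The thresholding-plus-DBSCAN post-processing described above can be sketched as follows (our illustration with scikit-learn; the threshold and clustering parameters are assumptions).

```
import numpy as np
from sklearn.cluster import DBSCAN

def detections_from_heatmap(heatmap: np.ndarray, thr: float = 0.5,
                            eps: float = 3.0, min_samples: int = 4):
    """Binarize a predicted TL location map, cluster the remaining pixels
    with DBSCAN, and return one (row, col) center per cluster."""
    ys, xs = np.nonzero(heatmap > thr)
    pts = np.stack([ys, xs], axis=1)
    if len(pts) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return [pts[labels == k].mean(axis=0)        # cluster centroid
            for k in set(labels) if k != -1]     # -1 = noise, filtered out

# Toy grayscale map with two bright blobs
hm = np.zeros((64, 64)); hm[10:14, 20:23] = 0.9; hm[40:45, 50:52] = 0.8
print(detections_from_heatmap(hm))
```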
Gupta and Choudhary [36] used Grassmann manifold learning for TL and pictogram classification, while the detection step was performed with a Faster R-CNN. For the TL classification, features extracted from VGG16 were used to create subspaces on a Grassmann manifold for each TL state. After that, discriminant analysis on the manifold was used to distinguish between TLs.
In the work of Tran et al. [46], the detections and classifications made by YOLOv4 are additionally processed by a color-based clustering method to remove irrelevant predictions. Moreover, a rule-based heuristic to identify the most important TL in an input image is applied as the last step. Similarly, Nguyen et al. [47] validate the predictions done by YOLOv3 via hand-crafted features and classification using HSV color space.
**Non-deep learning detector + CNN for classification:** Wang et al. [29] used a high-dynamic-range camera to obtain input images for different channels; this allowed them to detect TL ROIs from input images using a saliency map. Then, a customized AlexNet was used for the TL classification. Kim et al. [27] also used a color-based approach. They proposed transforming an input image to another color space before passing it to a generic object detector. Several models were tested as the detector, with Faster R-CNN with Inception-ResNet-v2 shown to be the most suitable for the task. The HSV color space was used in the work by Gao et al. [49] to generate the ROIs, whereas the classification was performed with AlexNet. Vitas et al. [50] applied adaptive thresholding to generate ROIs at the detection step, whereas the classification was done with a simple three-layer CNN.
**Further approaches:** Possatti et al. [34] incorporated prior maps containing the coordinates of TLs, with YOLOv3 used for TLR. YOLOv3 was not additionally modified; it was trained to distinguish between two classes: _red-yellow_ and _green_ TLs. The TL position is projected to the image plane using the data from the prior maps and the vehicle localization data. Finally, only those BBoxes predicted by YOLOv3 that correspond to the projected map objects are used for final predictions.
Yeh et al. [38, 39] presented a three-stage approach, where YOLOv3 first localizes traffic lights. Next, YOLOv3-tiny detects the TL states. Finally, LeNet is applied to classify the arrows in different directions. HD maps and collected LiDAR data are used to find the TL position.
### _Task-specific Single-stage Approaches_
Finally, the third group comprises those approaches where TLR is performed within a single network deliberately designed for this task. The corresponding methods are marked yellow in Table II. Unlike most methods, which follow the two-step approach involving TL detection and subsequent classification, the DeepTLR by Weber et al. [23] is a pure CNN that directly classifies each fine-grained pixel region over the image, thus creating a probability map for each of three classes: _red, yellow,_ and _green_. For the pixels in probability maps, which surpass a certain threshold, BBox prediction is performed. The feature extraction part of DeepTLR uses the AlexNet architecture [87], whereas the BBox regression follows that of the OverFeat [88].
The HDTLR approach [5] by Weber et al. builds upon DeepTLR, extending and improving the detection part. Unlike DeepTLR, HDTLR can use any CNN for the feature extraction part. Experiments were performed with AlexNet, GoogLeNet, and VGG, while the latter performed the best.
Wang et al. [59] proposed a joint detection and tracking approach, whereas a CNN and integrated channel feature tracking are used to predict both TL coordinates and states.
## IV Conclusion
In this paper, we gave an overview of the existing works on traffic light recognition. Our analysis has revealed that the predominant approach in the literature is the modification of a generic object detector like YOLO, SSD, or Faster R-CNN. In particular, YOLO versions 1-5 were used especially often. A large group of multi-stage approaches uses an existing detector as an attention or region proposal module, which determines the positions of the traffic lights, whereas an additional CNN classifier distinguishes between traffic light states and pictograms. This classification network usually has a very simple architecture. Less popular is the usage of a rule-based ROI detector or of a non-CNN classification method. Finally, a separate cluster of approaches is formed by methods that perform traffic light recognition within a single model so that the task is learned end-to-end without intrinsic separation into detection and classification steps.
Furthermore, our overview has shown that a lot of works reach real-time performance, but perform evaluation on private datasets, which makes a fair comparison of different
methods difficult. We also have determined that, unlike most object detection tasks, open-sourcing the code of the TLR models is still rare. We hope our findings facilitate further research on traffic light recognition.
## Acknowledgment
The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Climate Action within the project "Shuttle2X" (grant 19S22001B).
|
2305.08950 | Causal Analysis for Robust Interpretability of Neural Networks | Interpreting the inner function of neural networks is crucial for the
trustworthy development and deployment of these black-box models. Prior
interpretability methods focus on correlation-based measures to attribute model
decisions to individual examples. However, these measures are susceptible to
noise and spurious correlations encoded in the model during the training phase
(e.g., biased inputs, model overfitting, or misspecification). Moreover, this
process has proven to result in noisy and unstable attributions that prevent
any transparent understanding of the model's behavior. In this paper, we
develop a robust interventional-based method grounded by causal analysis to
capture cause-effect mechanisms in pre-trained neural networks and their
relation to the prediction. Our novel approach relies on path interventions to
infer the causal mechanisms within hidden layers and isolate relevant and
necessary information (to model prediction), avoiding noisy ones. The result is
task-specific causal explanatory graphs that can audit model behavior and
express the actual causes underlying its performance. We apply our method to
vision models trained on classification tasks. On image classification tasks,
we provide extensive quantitative experiments to show that our approach can
capture more stable and faithful explanations than standard attribution-based
methods. Furthermore, the underlying causal graphs reveal the neural
interactions in the model, making it a valuable tool in other applications
(e.g., model repair). | Ola Ahmad, Nicolas Bereux, Loïc Baret, Vahid Hashemi, Freddy Lecue | 2023-05-15T18:37:24Z | http://arxiv.org/abs/2305.08950v2 | # Causal Analysis for Robust Interpretability of Neural Networks
###### Abstract
Interpreting the inner function of neural networks is crucial for the trustworthy development and deployment of these black-box models. Prior interpretability methods focus on correlation-based measures to attribute model decisions to individual examples. However, these measures are susceptible to noise and spurious correlations encoded in the model during the training phase (e.g., biased inputs, model overfitting, or misspecification). Moreover, this process has proven to result in noisy and unstable attributions that prevent any transparent understanding of the model's behavior. In this paper, we develop a robust interventional-based method grounded by causal analysis to capture cause-effect mechanisms in pre-trained neural networks and their relation to the prediction. Our novel approach relies on path interventions to infer the causal mechanisms within hidden layers and isolate relevant and necessary information (to model prediction), avoiding noisy ones. The result is task-specific causal explanatory graphs that can audit model behavior and express the actual causes underlying its performance. We apply our method to vision models trained on classification tasks. On image classification tasks, we provide extensive quantitative experiments to show that our approach can capture more stable and faithful explanations than standard attribution-based methods. Furthermore, the underlying causal graphs reveal the neural interactions in the model, making it a valuable tool in other applications (e.g., model repair).
## 1 Introduction
Explainability and interpretability are crucial for deep neural networks (DNNs), which are deployed in many applications, including vision and natural language processing. Despite their popularity, the opaque nature of these "black-box" models limits their adoption in domains requiring critical decisions, where understanding model behavior is essential. Attempts to provide a transparent understanding of DNN systems have led to the development of many interpretability methods. Most of them focus on interpreting the function of DNNs through correlation-based measures, which attribute the model's decision to individual inputs [31]. The most popular ones are saliency (or feature attribution) methods [30, 28, 34, 32, 2, 26, 17, 8].
Saliency methods aim at helping the user to understand why a DNN made a particular decision by explaining the entire model. However, we observe two considerable limitations of these methods. First, they cannot explain the inner function of the neural system being examined. That means how internal neurons interact with each other to reach a particular prediction. As reported in [6], it is difficult to verify claims about black-box models without explanations of their inner workings. A second limitation, they are susceptible to noise and spurious correlations. Whether due to a property of the DNN system obtained during the training phase (e.g., biased inputs, overfitting, or misspecification) or the method being used to capture saliency [14, 15] as shown in Fig. 1). Alternatively, some methods seek to visualize the behavior of specific neurons [19] but cannot provide clear insights due to their large number and overall complex architectures.
In this paper, we propose a novel method that addresses the above limitations through the angle of causality. We show that a technique grounded in the theory of causal in
ference provides robust and faithful interpretations of model behavior while being able to reveal its neural interactions. Inspired by neuroscience, we analyze individual neurons' effects on model prediction by intervening in their connections (model's weights or filters).
We summarize our contributions as follows. a) We propose a robust interpretability approach to capture meaningful semantics and explain the inner working of DNNs. b) Our methodology relies on path interventions and cause-effect relations, providing stable and consistent explanations. More specifically, we seek to answer questions such as: _would the model's prediction have been higher if we prevented the flow of signals through particular paths?_ or, _what would have been the decision of the model had we attenuated or removed an individual or a set of components at a particular layer?_. Our analysis will lead to locating and isolating relevant and necessary information strongly and causally connected to model prediction up to a test of significance. c) We apply our method to vision models trained to classify MNIST, CIFAR10, and ImageNet data. d) We provide a flexible framework that can be applied to complex architectures and other tasks beyond interpretations.
## 2 Related Work
Interpretability and Attribution Methods.Interpretability for deep neural networks aims to provide insights into black box models' behavior. A broad family of methods has been developed in the past few years. The most common techniques are attribution methods which assign scores to input features indicating the contribution of each one to the model prediction. Gradient-based methods [30, 26, 2, 33, 29, 27] propagate gradients of pre-trained models from output backward until input. Recent studies have pointed out that these methods produce noisy and unstable attributions [14, 1]. Perturbation-based methods [20, 32, 21] are alternatives that focus on correlations between local perturbations of raw inputs and model output. They are black-box methods in the sense that they don't require access to the inner state of the model. Beyond these widely used techniques, various interpretability methods have been proposed [36]. Close to our work is [35]. They suggest disentangling knowledge hidden in the internal structure of DNNs by learning a graphical model. Their work focuses on convolutional neural networks (CNNs), where they fit the activations between neighboring layers. Our approach differs in what it considers explanatory graphs and how it infers them. We rely on causal analyses, which have been recently considered as an effective tool for DNN interpretability and explainability. Our framework does not assume a specific type of neural network, which makes the approach generic and flexible.
Capturing Explanations with Causality.More recently, causal approaches have been considered for interpreting DNNs. The inner structure of DNN has been viewed, for the first time, as a structural causal model (SCM) in [7]. They use SCM to develop an attribution method that computes the causal effect of each input feature on the output of a recurrent neural network. Other causal approaches were specifically developed to explain NLP-based language models, such as causal mediation analyses [31] and causal abstraction [9]. In contrast, [25] have developed a model-agnostic approach (CXPlain) to estimate feature importance for model interpretations. They use a causal objective to train a separate supervised model (U-net) to learn causal explanations for another black-box model. An important limitation of this method is that it has to be trained to learn to explain the target model. Another point is that its causal property is limited to the extrinsic effect of input on causing a marginal change in output. Therefore, it cannot link explanations to the model's internal structure, which remains a black box.
## 3 Causal Graph Inference of Neural Networks
### Notation and Intuition
**Notation.** We denote by \(\mathbf{x}\) an input image (without loss of generality), and \(y\in\mathbb{R}^{n_{y}}\) its corresponding output label, where \(n_{y}\) is the number of classes. We also denote by \(\hat{y}\in\mathbb{R}^{n_{y}}\) its predicted output obtained by a pre-trained neural network \(N(L)\) composed of \(L\) layers. We define the relation \(l\to l+1\) to refer to directed edges or connections between hidden nodes of layers \(l\) and \(l+1\), respectively. At every hidden node \(j\) of the \(l\)-th layer, we define features or activation map \(a^{l}_{j}\). We denote by causal graph \(\mathcal{G}\) an abstraction of \(N(L)\), as shown in Fig. 2 (b) and (d). We use the term explanatory graph to refer to causal graphs whose nodes hold important features.
**Intuition.** The activated signals \(\mathbf{a}^{l}\) flow to the next layer \(l+1\) through weighted edges \(\mathbf{W}^{l\to l+1}\) connecting hidden nodes of layers \(l\) and \(l+1\). These weights control the strength of information flow between two layers in a manner physically analogous to a switch. In physical systems, manipulating the state of a switch (e.g., on-off or via continuous interventions) would change the system's physical state, thereby providing an interpretation of its behavior. This intuition motivates our work. To our knowledge, there is relatively little research on DNN explainability by manipulating weights.
### Problem Formulation
Our goal is to discover causal explanatory graphs of \(N(L)\) via path (equiv. weights) interventions. Formally, we set the problem as follows. Let \(\mathbf{W}^{l\to l+1}\in\mathbb{R}^{n_{c}\times n_{p}}\) be
the weight matrix of the directed edges from layer \(l\) to \(l+1\), where \(n_{p}\) is the number of parent nodes in \(l\) and \(n_{c}\) is the number of child nodes in \(l+1\). These nodes define a subgraph \(\mathcal{G}^{l\to l+1}\). Let \(\mathbf{w}^{l\to l+1}\in\mathbb{R}^{n_{t}\times n_{p}}\) be the paths connecting \(n_{p}\) nodes in \(l\) to \(n_{t}\) target nodes in \(l+1\) (\(n_{t}<n_{c}\)). Our problem is then to estimate how significant the causal or treatment effect \((TE)\) resulting from intervening on the weights \(w_{j}^{l\to l+1}\) at node \(j\) is:
\[\mathbf{P}\{TE(do(w_{j}^{l\to l+1});\hat{\mathbf{Y}},\mathbf{X},\mathbf{W}\setminus w_{j}^{l \to l+1})=0\}<\alpha, \tag{1}\]
where \(do(w_{j}^{l\to l+1})\) is a mathematical operation referring to the action of intervention. \(\mathbf{X}\) is a subset of inputs in the data manifold \(\mathcal{X}\), and \(\hat{\mathbf{Y}}\) are their predictions (pre-softmax layer). \(\alpha>0\) is a probability threshold (equiv. p-value) that measures "significance". The formula in (1) defines a form of hypothesis testing, where the null hypothesis states that interventions on the paths from node \(j\) will not affect or change the original predictions \(\hat{\mathbf{Y}}\) of the model. Rejecting the null hypothesis will lead us to identify the most influential nodes of \(l\) on \(\hat{\mathbf{Y}}\).
### Causal Inference
In this section, we provide the details of our methodology for solving (1). We focus on vision models which encompass a set of convolution and MLP layers. Specifically, we use LeNet [16] with MNIST data for ease of explanations. The experiments section shows applications on common datasets and more complex architectures. Here, we seek to capture the causal explanatory graph of LeNet given inputs of digit \(k\in\mathbb{N}_{9}\).
**Treatment Effects.** The first step of our approach is to compute the effects of path interventions on model outputs. Let us consider the MLP example in Fig. 2 (a) and (c). Interventions on the paths in the last hidden layer \(L-1\) allow measuring the effect on the outputs (\(y_{1}\) and \(y_{2}\)) directly. Meanwhile, for layer \(L-2\), the effects of interventions are mediated by the responses of hidden neurons in descendant layers; in this example, the child layer \(L-1\). The strength of the response to path interventions depends on the structure and complexity of the neural network. Our goal is thus to analyze how significant these effects are. First, we define the treatment effect as a measure of the difference corresponding to path interventions.
**Definition 1**: _(**Treatment Effect**) Let \(\mathbf{X}\) be a set of input features and \(\hat{\mathbf{Y}}\) the corresponding output of a neural network \(N(L)\). Let \(w_{j}^{l\to l+1}\in\mathbb{R}^{n_{t}}\) be the weights vector directed from node \(j\) in layer \(l\) to \(n_{t}\) nodes in layer \(l+1\). By holding all other weights \(\mathbf{W}\setminus(w_{j}^{l\to l+1})\) fixed and intervening on \(w_{j}^{l\to l+1}\) (i.e., \(do(w_{j}^{l\to l+1})\)), we define their effect as follows:_
\[TE(do(w_{j}^{l\to l+1});\hat{\mathbf{Y}},\mathbf{X},\mathbf{W}\setminus w_{j}^{l\to l+1})=\hat{\mathbf{Y}}_{w_{j}^{l\to l+1}=u_{1}}(\mathbf{X})-\hat{\mathbf{Y}}_{w_{j}^{l\to l+1}=u_{0}}(\mathbf{X}) \tag{2}\]
where \(u_{1}\) and \(u_{0}\) are intervention variables defined below. Equation (2) measures the relative change of the outputs
Figure 1: **Feature importance from different explanation methods on a hard example from ImageNet data.** The actual class is “white wolf”. The class predicted by the pre-trained ResNet18 is “Malamute”. On the right, our method shows the top 2 semantics (top of the head and body of the wrong class) from the causal graph that explain the prediction. Other methods either fail or provide noisy features.
Figure 2: **Causal connections within the last three layers of a neural network.** (a) and (c) Colored paths (red/yellow) transport signals between layers to labels \(y_{1}/y_{2}\), respectively. (b) and (d) Two abstract graphs obtained by causal inference. In each graph, neutral neurons (marked by dots) hold variant information which does not influence the model’s behavior for the corresponding label.
distribution over the inputs \(\mathbf{X}\) given the same actions at node \(j\). By considering all the \(n_{p}\) nodes in layer \(l\), we obtain the set of distributions \(\{TE\}_{j=1,\dots,n_{p}}\). Fig. 3 shows samples of the average treatment effect obtained over \(\mathbf{X}\) when the interventions correspond to removing edges in the hidden layers.
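For concreteness, a minimal PyTorch sketch of eq. (2) for a fully connected layer is given below (our illustration, not the authors' implementation); it scales the outgoing weights of a parent node \(j\) by a factor \(\beta\) (\(\beta=0\) removes the edges), evaluates the pre-softmax outputs before and after the intervention, and returns one TE sample per input.

```
import torch

@torch.no_grad()
def treatment_effect(model, X, layer: torch.nn.Linear, j: int, beta: float = 0.0):
    """Effect of scaling the outgoing weights of parent node j, i.e.
    do(w_j = beta * w_j), on the pre-softmax outputs, per eq. (2):
    TE = Y_intervened - Y_original. `layer.weight` has shape
    (n_children, n_parents); column j holds w_j^{l->l+1}."""
    y_ref = model(X)
    saved = layer.weight[:, j].clone()
    layer.weight[:, j] = beta * saved        # path intervention
    y_do = model(X)
    layer.weight[:, j] = saved               # restore original weights
    return y_do - y_ref                      # one TE sample per input in X

# Toy usage on a 2-layer MLP: intervene on hidden node 5
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 3))
te = treatment_effect(net, torch.randn(100, 8), net[2], j=5)
print(te.shape)  # torch.Size([100, 3])
```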
**Test of Significance.** To capture the most influential nodes in the parent layer \(l\), we consider hypothesis testing as formulated in eq. (1). We observe that the null distribution is approximately Gaussian, given the sufficiently large number of samples (in training sets). This makes the z-test an appropriate choice to solve the problem. We set the probability threshold \(\alpha\) to its common value \(0.05\). That means the effect of intervening on the paths coming out of node \(j\) is significant when eq. (1) holds, with a \(5\%\) chance of error.
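A sketch of this significance test on the TE samples of a single node, under the Gaussian approximation stated above, could look as follows (our illustration; we use a two-sided z-test, and the sign of the mean effect can then be inspected to separate necessary from distracting nodes).

```
import numpy as np
from scipy import stats

def is_significant(te_samples: np.ndarray, alpha: float = 0.05) -> bool:
    """One-sample z-test of H0: E[TE] = 0, using the large-sample Gaussian
    approximation. te_samples: TE values for one node, evaluated on the
    class output over all inputs X."""
    z = te_samples.mean() / (te_samples.std(ddof=1) / np.sqrt(len(te_samples)))
    p = 2 * stats.norm.sf(abs(z))          # two-sided p-value
    return p < alpha

rng = np.random.default_rng(0)
print(is_significant(rng.normal(-0.3, 1.0, size=1000)))  # True: clear negative effect
print(is_significant(rng.normal(0.0, 1.0, size=1000)))   # typically False
```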
**Path Interventions.** Following the intuition of our work, we propose the interventions \(w_{j}^{l\to l+1}=u\) such that \(u=\beta w_{j}^{l\to l+1}\), where \(\beta\) can either be discrete (remove connections) or continuous (attenuate a connection's effect). In the discrete case, \(\beta\) is binary, so that \(u_{1}=0\) and \(u_{0}=w_{j}^{l\to l+1}\). In the continuous case, we propose to sample \(\beta\) from a uniform distribution \(\mathcal{U}(b-\epsilon,b+\epsilon)\), where \(\epsilon<b<1.0\) is a predefined parameter and \(\epsilon=0.01\). We use continuous interventions to evaluate the consistency of the causal effects and estimated graphs.
### Path Selection
So far, we explained how to solve (1) using \(w_{j}^{l\to l+1}\) for each parent node \(j\). These weights correspond to a subset of targets that we identify here via path selection. Indeed, manipulating all possible connections for a node \(j\) given parent layer \(l\) is computationally expensive and intractable for complex architectures with many neurons. A more efficient way is to focus on specific paths and nodes via a selection criterion. We propose a top-down approach starting from a specific output (e.g., class). It implies sequential processing starting from the last layer until reaching layer \(l\). Suppose we seek to compute the effects of path interventions in LeNet's layer \(L-2\) for digit \(3\) (as shown in Fig. 4).
We start with the paths directed from all nodes in the parent layer \((L-1)\) to node \(3\) of the output layer \(L\). Computing eq. (1) reveals the most relevant nodes in \((L-1)\), up to a significance test with \(\alpha=5\%\). To identify the impact of these nodes on the model, we must look at the behavior of the causal effects. Negative values explain a drop in class prediction when removing edges or attenuating weights, while positive values explain an improved prediction. Hence, the nodes revealed when the causal effect is significantly below zero are considered necessary for that output (the red ones in this example). In contrast, we discover noisy or distracting nodes when path interventions have a significant positive effect. We thereby select the necessary (red) nodes as targets for the next sub-graph \(\mathcal{G}^{L-2\to L-1}\). We repeat the same process on \(L-2\), but this time we simultaneously intervene on all paths directed from a parent node (green
Figure 4: Path selection.
Figure 3: **Heatmap of the Average Treatment Effect (ATE).** We show the effect of path interventions on the convolution and linear layers of the LeNet architecture for digits \(3\), \(7\) and \(8\), respectively. The y-axis indicates the total number of nodes over all layers. For instance, conv1 has 6 nodes (channels) and the last hidden layer \(fc_{2}\) has 84 nodes. The colorbar indicates the relative ATE values w.r.t. the original outputs.
node) to the targets. With this process, we can efficiently estimate relevant nodes in all intermediate layers while focusing on meaningful interventions. Algorithm 1 shows the implementation steps for discovering the causal explanatory graphs of a classification neural network. We provide some visualizations of LeNet's causal graphs in the supplementary.
```
Input: N(L) pre-trained DNN, W weights, X task-specific examples, Ŷ model outputs, (k) task index
Output: G (dict. of important nodes and their relations), D (dict. of irrelevant nodes)
l ← L−1, β ← {0, 1}
while l > 0 do
    n_p ← dim(l)
    for j = 1 to n_p do
        u ← β · w_j^{l→l+1}
        do(w_j^{l→l+1} = u)
        Compute TE(do(w_j^{l→l+1}), Ŷ, X, W) for all X
        Solve (1) and get nodes (J^l, I^l)
        G^{l→l+1} ← J^l,  D^{l→l+1} ← I^l
    end for
    l ← l − 1
end while
```
**Algorithm 1** Causal Explanatory Graph Inference (\(\mathcal{G}\)) of a DNN
## 4 Explanations from Causal Graphs
The hierarchical structure of the causal graphs enables robust extraction of attributions and high-level semantics. Instead of capturing a single saliency map from all activations, we rely on feature responses along the causal pathways. We empirically show that these features are more stable and consistent compared to traditional attribution methods. As reported in [14], the reason these methods produce noisy and unstable attributions is distracting features in DNNs. Our method can remove the features that negatively affect the model's prediction and isolate important neurons in causal graphs/sub-graphs. Formally, given the sub-graph \(\mathcal{G}^{l\to l+1}\), we extract salient interpretations (\(s_{i}^{l+1}\)) at a node \(i\) in \(l+1\) as follows
\[s_{i}^{l+1}=\frac{1}{J^{l}}\sum_{j=1}^{J^{l}}f(w_{ji}^{l\to l+1},a_{j}^{l}), \tag{3}\]
where \(a_{j}^{l}\) is the \(j\)-th activated signal of layer \(l\), and \(J^{l}\) is the number of parent nodes in \(l\) connected to the child node \(i\) in layer \(l+1\). The response \(f\) depends on the structure of the parent layer. For convolution layers, \(w_{ji}\) is a filter and \(f\) is a convolution function, whereas for MLPs, \(f\) is a linear function. Fig. 5 shows causal sub-graphs, up to the _conv2_ layer (for visualization), and the underlying attributions for a LeNet model that successfully classified its input.
Note that eq. (3) aggregates at every node \(i\) the responses of its parent nodes to the filters/weights. We may also be interested in analyzing and interpreting the role of each filter between the pairs \((i,j)\). Fig. 6 is an example of the response to the top-1 filters (w.r.t. the amplitude of their causal effects) for a set of relevant nodes in the last convolution layer of ResNet18. Causal attributes (of object parts) are refined by extracting the response's local maxima (and minima).
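A minimal sketch of eq. (3) for the convolutional case is given below (our illustration; function and variable names are ours): the interpretation at a child node \(i\) is the mean, over the relevant parent nodes \(J^{l}\), of each parent activation convolved with its filter to \(i\).

```
import torch
import torch.nn.functional as F

def node_interpretation(acts: torch.Tensor, conv_weight: torch.Tensor,
                        parents: list[int], i: int) -> torch.Tensor:
    """Eq. (3) for a conv layer: mean response at child node i over the
    relevant parent channels J^l of the causal sub-graph.
    acts: (1, n_parents, H, W) activations a^l; conv_weight: layer-(l+1)
    filters of shape (n_children, n_parents, kh, kw)."""
    responses = [
        F.conv2d(acts[:, j:j + 1],                 # parent activation a_j^l
                 conv_weight[i:i + 1, j:j + 1],    # filter w_{ji}^{l->l+1}
                 padding="same")
        for j in parents
    ]
    return torch.stack(responses).mean(dim=0).squeeze()  # s_i^{l+1}, (H, W)

# Toy usage: 6 parent channels, child node i=2, relevant parents {0, 3, 5}
a = torch.randn(1, 6, 28, 28)
w = torch.randn(16, 6, 5, 5)
print(node_interpretation(a, w, parents=[0, 3, 5], i=2).shape)  # (28, 28)
```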
## 5 Experiments
The experiments section is split into two parts: 1) we evaluate our algorithm's capacity to estimate stable and consistent causal graphs; 2) we evaluate the explanations captured by causal graphs and compare them to various attribution methods using standard explanation metrics.
**Models and datasets.** We evaluate our method on the LeNet model trained on MNIST data and on the following architectures: ResNet18 [11], ResNet50V2 [12], MobileNetV2 [24], and the recent ConvNeXt architecture [18] (the tiny version). These models were trained on the large-scale ImageNet data (ILSVRC-2012) [23]. We also fine-tuned these architectures on the CIFAR10 dataset after updating their last classification layer. We divide the validation sets into validation and test sets. We use the samples in the validation sets to discover causal explanatory graphs and the test set for evaluating the explanations.
**Comparison methods.** We selected the most popular attribution methods from two categories: model-agnostic (black-box) and gradient-based (white-box) methods. We chose RISE [20] and Occlusion [32] as black-box methods, and the following gradient-based methods: Integrated
Figure 5: **Illustration of LeNet’s causal sub-graphs \(\mathcal{G}^{input\to conv1},\mathcal{G}^{conv1\to conv2}\) for class \(3\). The resulted attributes provide visual interpretations for a sample image correctly classified by the model. They are up-sampled and normalized to reflect pixel-wise probabilities (Dark greens correspond to peaks with highest scores.).**
Gradient (IG) [30], Saliency [28], GradientSHAP [2], Grad\(\times\)Input [26], DeconvNet [33] and Excitation Backprop (MWP) [34].
### Evaluating the Reliability of Causal Graphs
In this experiment, we evaluate the stability and consistency of our estimation of causal graphs. Since the causal effect is based on path interventions, we need to ensure consistency in the statistical test results no matter what intervention values are used (i.e., binary or continuous). We do so by running \(1000\) experiments with an intervention parameter randomly sampled from a uniform distribution \(\mathcal{U}(b-0.01,b+0.01)\). Here, \(b\) changes monotonically every \(10\) runs in the range \((0.01,0.5)\). We report reliability by measuring the frequency of detecting the same important nodes in each layer (in percent). In Fig. 7, we show, for a few samples of \(\beta\), the distribution of the nodes versus their appearance rate. As we can see, the stability of the graph does not rely on the value chosen for the intervention parameter. Regardless of the value of \(\beta\), a considerable proportion (\(98\%\) to \(100\%\)) of the nodes appears in every experiment. The stability of the causal graphs indicates two facts: (1) the importance of the activated signals, which are affected by weight attenuation; (2) our method is not sensitive to the choice of interventions (binary or continuous). Furthermore, the causal effect is significant even when reducing the strength of the signal along the causal path by only a factor of \(1/2\). These results ensure that the properties of single neurons might indeed be representative of the model's behavior.
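The counting protocol can be sketched as follows (our illustration; `discover_relevant_nodes` is a hypothetical stand-in for one run of Algorithm 1 at a given \(\beta\)).

```
import numpy as np
from collections import Counter

def node_stability(discover_relevant_nodes, n_runs: int = 1000, eps: float = 0.01):
    """Appearance frequency (%) of each relevant node when the intervention
    parameter beta is redrawn from U(b - eps, b + eps), with b stepped
    through (0.01, 0.5) every 10 runs. `discover_relevant_nodes(beta)` is a
    stand-in for Algorithm 1 returning the set of significant node ids."""
    rng = np.random.default_rng(0)
    b_values = np.linspace(0.01, 0.5, n_runs // 10)
    counts = Counter()
    for b in b_values:
        for _ in range(10):
            beta = rng.uniform(b - eps, b + eps)
            counts.update(discover_relevant_nodes(beta))
    return {node: 100.0 * c / n_runs for node, c in counts.items()}

# Toy stand-in: nodes {1, 2} always significant, node 3 only for small beta
freqs = node_stability(lambda beta: {1, 2} | ({3} if beta < 0.2 else set()))
print({k: round(v, 1) for k, v in freqs.items()})
```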
### Evaluation of causal explanations
The causal graphs estimated by our method summarize knowledge from all hidden layers in the DNN and enable better interpretability. For example, Fig.5 shows that for classifying digit \(3\), there exist \(8\) relevant nodes in the Conv2 layer, each encoding signal activated at different parts of the object. To compare the explanations obtained by our method with existing attribution methods, we aggregate attributions at the relevant nodes in a specific layer. Then, we evaluate the stability and faithfulness of explanations using standard state-of-the-art metrics. The evaluations are performed using the Quantus library [13]. Details on explanation metrics and attributions visualization are provided in the supplementary.
**Stability:** Stability measures the consistency of explanations against local perturbations of inputs. Here, we adopt the Lipschitz Estimate (LE) [1], which calculates the maximum variance between an input and its \(\epsilon\)-neighbourhood, where \(\epsilon\) refers to the level of perturbations. We generate perturbations by adding white noise to inputs from the test sets. We compute explanations for every input in a specific class and its noisy sample using the graphs estimated from the validation data. The maximum Euclidean distance between explanations is then obtained over multiple runs in which new perturbations are generated. Fig. 8 reports the results for LeNet trained on MNIST and ResNet18 fine-tuned on CIFAR10, and Fig. 9 shows the results for four different architectures trained on ImageNet data.
The results (in Figs. 8 and 9) clearly indicate that the explanations generated from the causal graphs are more stable and consistent than those of other attribution methods. The explanations generated by these methods show higher variance under perturbations, depending on the dataset and model. In contrast, the explanations from causal graphs show consistent stability. Our method has the lowest variance, with a significant margin over the best competing method in each experiment.
**Faithfulness:** Evaluating the relevance of attributions for the decision obtained by the model is essential to ensure the correctness and fidelity of explanations. This is commonly done by measuring the effect on the model's prediction of obscuring or removing features from the input. Different techniques have been proposed to score the relevance of explanations [5, 1, 3, 4, 22]. Here, we used iterative removal of
Figure 6: **Visualizing explanations obtained by the top-1 causal filters.** We show four examples for two object classes (from ImageNet). Important neurons belong to the causal sub-graph connecting the last Conv layers \(l=layer4.1.conv1\) and \(l+1=layer4.1.conv2\) of ResNet18. We can observe consistent attributes for similar inputs. The red point indicates the location of the peaks corresponding to the absolute maximum response.
features (IROF) [22]. An image is partitioned into patches using superpixel segmentation. The patches are sorted by their mean importance w.r.t. the attributions in each patch. At every iteration, an increasing number of the most relevant patches are replaced by their mean value. The IROF computes the mean area above the curve for the class probabilities (perturbed vs. original predictions). We applied this metric to evaluate each explanation method, including ours. Fig. 8 shows that our method outperforms other methods and is comparable to MWP [34] (with a relatively small margin between their medians). For ResNet18 trained on CIFAR10, most attribution methods show higher scores than LeNet on MNIST. Furthermore, the explanations obtained by our method and MWP show less sensitivity to the different data and models, indicating better trustworthiness. Fig. 9 shows IROF results for different architectures trained on ImageNet. On ImageNet, all methods, including ours, agree on the differences in behavior between the four models and that ConvNeXt is more trustworthy than standard ConvNets. For interested readers, we refer to [18] for further details about the core design of the ConvNeXt family of architectures.
Figure 8: **Quantitative evaluations of attribution methods for LeNet on MNIST and ResNet18 on CIFAR10. For each metric, we compare 7 attribution methods to the causal explanations obtained by our method using test images. The bars show mean and variance over samples. Lower Lipschitz Estimates (w.r.t. means) indicate higher stability. Higher IROF values (w.r.t. means) indicate strong relation between explanations and predictions.**
Figure 7: **Reliability assessment of causal graphs. We show results on four complex architectures. On the x-axis, we show the frequency of appearance of a node (%). On the y-axis is the portion of all the nodes appearing at least once during the experiments (%).**
Figure 9: **Quantitative evaluations of explanations for complex architectures on ImageNet. We evaluate 9 methods including ours using 10 representative classes from the test set.**
### Fidelity of class-specific causal neurons
The causal neurons discovered as critical (or relevant) through interventions should accurately describe model behavior. We evaluate this by measuring the model's accuracy on a specific class when masking out the critical neurons connected to this class: high-fidelity neurons should cause a drastic drop in accuracy when discarded. We illustrate this behavior on four models trained on ImageNet in Fig. 10. First, after discovering class-specific causal graphs, we rank the weights (and filters) in each sub-graph according to their highest effects (as described in eq. (2)). Then, we use these ranks to select the top-k critical neurons in each layer. As we observe in Fig. 10, the accuracy of all four models drops drastically after masking a small portion (\(<20\%\)) of the top critical neurons, and the effect is more evident on smaller architectures such as ResNet18 and MobileNetV2. In addition, these results describe another way of evaluating faithfulness, since critical neurons encode the important features for predicting a specific class.
## 6 Applications
**Repairing model accuracy.** In many practical, real-world cases, we seek fast and effective ways to repair a model's behavior without extensive retraining on large datasets. The proposed explanation method can be targeted at this goal. Each causal explanatory graph measures the neurons' contributions to a specific class (or task) by intervening on the weights connecting the neurons to the class; more specifically, by attenuating the strength of activation signals passing through the particular paths that cause a drop in the model's performance or a wrong prediction. It is worth noting that this operation differs from model pruning, since we only block these paths at inference time. Practically, we do this by masking out irrelevant weights (and filters for convolutional layers). The experiments show that our method can improve class prediction and correct wrong predictions. To illustrate these facts, we took the four models trained on ImageNet, considered 10 representative (animal) classes for evaluation, and additionally used the LeNet model trained on MNIST data. For each trained model, we mask out a portion of the irrelevant weights discovered by our method and evaluate how the model performs on these samples. Fig. 11 shows the test accuracy under a varying portion of masked weights in all layers.
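A minimal sketch of this inference-time masking for a fully connected layer is given below (our illustration, not the authors' implementation); the weights are zeroed only for the duration of a forward pass and restored afterwards, which distinguishes the operation from pruning.

```
import torch

class MaskedInference:
    """Context manager that zeroes the columns of a linear layer's weight
    matrix belonging to irrelevant parent nodes for the duration of a
    forward pass, then restores them -- a repair step, not pruning."""
    def __init__(self, layer: torch.nn.Linear, irrelevant: list[int]):
        self.layer, self.irrelevant = layer, irrelevant

    def __enter__(self):
        self.saved = self.layer.weight.data[:, self.irrelevant].clone()
        self.layer.weight.data[:, self.irrelevant] = 0.0
        return self

    def __exit__(self, *exc):
        self.layer.weight.data[:, self.irrelevant] = self.saved

net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 3))
x = torch.randn(4, 8)
with MaskedInference(net[2], irrelevant=[0, 7, 11]):
    y_repaired = net(x)        # prediction with noisy paths blocked
y_original = net(x)            # weights restored afterwards
print(torch.allclose(net(x), y_original))  # True
```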
## 7 Conclusion and discussions
We have presented a novel method for interpreting neural network behavior based on causal inference. It estimates causal explanatory graphs that disentangle the relevant knowledge hidden in the internal structure of DNNs, which is integral to their predictions. Our methodology tests the hypothesis that path interventions on a parent neuron connected with target neurons in the subsequent layer will significantly affect the model's output. As a case study, we applied our method to vision models for object classification. The responses of causal filters are used to compare our approach to attribution methods quantitatively. This work is not aimed at extracting high-level abstractions that are interpretable to humans, which might be considered a limitation of our method. However, we seek to understand the inner workings of the model and therefore provide a valuable tool for model monitoring and repair. We show that our method can be used to improve and fix the model without retraining, which makes it worthwhile and practical for real-world cases where extensive training data are not accessible or retraining is computationally expensive. In future work, we will consider investigating further applications of our method. For instance, class-specific important neurons can be used with regularization methods in continual and few-shot learning. Our method's computational cost is reasonable, as shown in the supplementary material, which facilitates its integration into other processes.
Figure 11: **Repair of model performance. We show the test accuracy after masking out (\(n\%\)) of class-specific noisy filters in the LeNet model for all categories of MNIST data (right), and in the 4 models for 10 representative (animal) categories of ImageNet data (left). Each color in the left figure corresponds to a different category and is fixed for each model.**
Figure 10: **Fidelity of class-specific causal neurons to the model. The left figure shows the test accuracy of four models when masking out the top-k (\(\%\)) portion of causal neurons discovered as critical using our path interventions method. The figure shows the average accuracy over ten representative classes selected from ImageNet. The right figure shows the absolute number of critical neurons at each portion.**
The critical limitations of neuron importance methods are their high computational costs and sensitivity to spurious correlations between neurons [10]. Relying on causal inference and path interventions allows these limitations to be mitigated and provides robust interpretations.
|
2310.15209 | DeepOrientation: convolutional neural network for fringe pattern
orientation map estimation | Fringe pattern based measurement techniques are the state-of-the-art in
full-field optical metrology. They are crucial both in macroscale, e.g., fringe
projection profilometry, and microscale, e.g., label-free quantitative phase
microscopy. Accurate estimation of the local fringe orientation map can
significantly facilitate the measurement process in various ways, e.g., fringe
filtering (denoising), fringe pattern boundary padding, fringe skeletoning
(contouring/following/tracking), local fringe spatial frequency (fringe period)
estimation and fringe pattern phase demodulation. Considering all of that the
accurate, robust and preferably automatic estimation of local fringe
orientation map is of high importance. In this paper we propose a novel numerical
solution for local fringe orientation map estimation based on a convolutional
neural network and deep learning called DeepOrientation. Numerical simulations
and experimental results corroborate the effectiveness of the proposed
DeepOrientation comparing it with the representative of the classical approach
to orientation estimation called combined plane fitting/gradient method. The
example proving the effectiveness of DeepOrientation in fringe pattern
analysis, which we present in this paper is the application of DeepOrientation
for guiding the phase demodulation process in Hilbert spiral transform. In
particular, living HeLa cells quantitative phase imaging outcomes verify the
method as an important asset in label-free microscopy. | Maria Cywinska, Mikolaj Rogalski, Filip Brzeski, Krzysztof Patorski, Maciej Trusiak | 2023-10-23T14:36:03Z | http://arxiv.org/abs/2310.15209v1 | # DeepOrientation: convolutional neural network for fringe pattern orientation map estimation
###### Abstract
Fringe pattern based measurement techniques are the state-of-the-art in full-field optical metrology. They are crucial both in macroscale, e.g., fringe projection profilometry, and microscale, e.g., label-free quantitative phase microscopy. Accurate estimation of the local fringe orientation map can significantly facilitate the measurement process in various ways, e.g., fringe filtering (denoising), fringe pattern boundary padding, fringe skeletoning (contouring/following/tracking), local fringe spatial frequency (fringe period) estimation and fringe pattern phase demodulation. Considering all of that, the accurate, robust and preferably automatic estimation of the local fringe orientation map is of high importance. In this paper we propose a novel numerical solution for local fringe orientation map estimation based on a convolutional neural network and deep learning, called DeepOrientation. Numerical simulations and experimental results corroborate the effectiveness of the proposed DeepOrientation, comparing it with a representative of the classical approach to orientation estimation, the combined plane fitting/gradient method. The example we present in this paper, proving the effectiveness of DeepOrientation in fringe pattern analysis, is its application to guiding the phase demodulation process in the Hilbert spiral transform. In particular, living HeLa cells quantitative phase imaging outcomes verify the method as an important asset in label-free microscopy.
Phase measurements; Fringe orientation map; Fringe direction map; Convolutional neural network; Supervised learning; Full-field optical measurements; Spatially self-similar patterns; Hilbert spiral transform; Phase demodulation
## 1 Introduction
The full-field optical measurement techniques, such as interferometry [1-3], holographic microscopy [4-6], fringe projection [7,8] or the moire technique [9], are considered to be highly accurate, non-invasive and fast. In all the mentioned techniques the measurement result is received in the form of a fringe pattern (interferogram/hologram/moiregram), where the phase function (or, less frequently, the amplitude function) stores information about the studied specimen. For that reason, the whole process of retrieving information from a recorded fringe pattern can be divided into two steps: the opto-electronic measurement leading to the captured fringe pattern and the numerical processing leading to the fringe pattern phase map calculation. In general, a recorded fringe pattern can be described as:
\[I(x,y)=a(x,y)+b(x,y)\cos\big{(}\varphi(x,y)\big{)}+n(x,y), \tag{1}\]
where \(a(x,y)\) describes background intensity, \(n(x,y)\) represents noise, \(b(x,y)\) and \(\varphi(x,y)\) denote amplitude and phase modulation (measurand), respectively.
There are generally two main classes of algorithms enabling phase map demodulation, i.e., multi-frame and single-frame methods. The first class is known to be the most accurate, but is difficult to apply when studying transient events or measuring in an unstable environment, as a large number of frames (3+) is generally needed. For that reason, the development of single-frame algorithms is needed and important. The Fourier transform (FT) method [10] is a well-known representative of such techniques, but it has limitations in terms of the carrier spatial frequency and global spectrum filtering. The localized relatives of the FT, such as the windowed Fourier transform (WFT) [11], the continuous wavelet transform (CWT) [12] and the empirical wavelet transform [13], or other approaches including spatial carrier phase-shifting (SCPS) [14] and regularized phase tracking [15], are generally very capable but require a set of parameters to be fixed. They can be computationally and algorithmically demanding, and exhibit characteristic errors (e.g., the CWT method introduces errors in areas of strong phase gradients, correctable with an especially tailored numerical scheme). Other solutions escaping the so-called off-axis interferogram regime are the Kramers-Kronig relation [16], the Riesz transform approach [17, 18], Hilbert Phase Microscopy [19-21] and the two-frame Hilbert transform approach [22]. The approaches based on the Hilbert spiral transform (HST) [23-25] enable single-frame phase analysis in the widest range of fringe pattern carrier frequencies; however, they need the fringe orientation map to guide the phase demodulation process.
It is to be highlighted that the fringe orientation map is essential in various fringe processing and analysis tasks, where it enables or greatly enhances the calculations. The examples are: fringe filtering (denoising) [26-43], fringe pattern boundary padding [41, 44], fringe skeletoning (contouring/following/tracking) [27, 29, 32, 33, 36, 37, 39, 40, 44, 45, 46], local fringe spatial frequency (fringe period) estimation [30, 34, 47, 48] and fringe pattern phase demodulation [28, 30, 32, 36, 38, 47, 48].
To be precise, we would like to introduce the concept of the local fringe direction (LFD) map and explain the difference between local direction and orientation maps. The LFD map (\(\beta(x,y)\)) stores information about the azimuth of the vector locally normal to the fringes as well as its sense (e.g., up or down for a vertical azimuth); it is therefore a modulo \(2\pi\) indicator. The LFD map cannot be calculated in a straightforward way from the recorded pattern, as carrier fringes with opposite directions look identical. The quantity that can be calculated directly from the fringe pattern is called the fringe orientation (FO) [60] and is a modulo \(\pi\) indicator; it stores information only about the azimuth of the vector locally normal to the fringes. To move from the fringe orientation to the fringe direction one needs to apply an unwrapping procedure (with the use of phase unwrapping algorithms [61]). The difference between phase unwrapping and fringe orientation unwrapping is the need to multiply the modulo \(\pi\) map by 2 before unwrapping, divide the resultant unwrapped map by 2, and bring it down to the range of the LFD map, i.e., modulo \(2\pi\). From the definition, in which \(\beta(x,y)\) is the map of angles between the vector locally normal to the fringes and the x axis, the fringe orientation can be estimated as the arctangent of the ratio of the orthogonal spatial derivatives of the phase function:
\[\tan\left(\beta(x,y)\right)=\frac{\partial\varphi(x,y)}{\partial x}\big{/} \frac{\partial\varphi(x,y)}{\partial y},\quad 0\leq\beta(x,y)<2\pi, \tag{2}\]
\[FO(x,y)=\arctan\left(\frac{\partial\varphi(x,y)}{\partial x}\Big{/}\frac{\partial\varphi(x,y)}{\partial y}\right),\quad 0\leq FO(x,y)<\pi. \tag{3}\]
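As a minimal numerical sketch of Eq. (3), together with the FO-to-LFD unwrapping route described above (double the modulo \(\pi\) map, unwrap, halve, wrap to modulo \(2\pi\)), one could write the following; the axis-wise `np.unwrap` is only a simplistic stand-in for a proper 2D phase unwrapping algorithm such as the one in [61].

```python
import numpy as np

def fringe_orientation(phi):
    """FO map from a known phase `phi` (Eq. 3), wrapped to [0, pi)."""
    dphi_dy, dphi_dx = np.gradient(phi)        # rows ~ y, columns ~ x
    return np.mod(np.arctan2(dphi_dx, dphi_dy), np.pi)

def direction_from_orientation(fo):
    """Unwrap FO to the LFD map: multiply by 2, unwrap, divide by 2,
    bring the result down to modulo 2*pi."""
    lfd2 = np.unwrap(np.unwrap(2.0 * fo, axis=0), axis=1)
    return np.mod(lfd2 / 2.0, 2.0 * np.pi)
```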
At this point it can be clearly seen that local fringe direction map estimation is not an easy task, since (1) it requires a two-step calculation and (2) the phase function needed for precise orientation calculation is encoded in the fringe pattern as the argument of a cosine function and is simply not directly accessible in experimental reality. For that reason the orientation map cannot be calculated from its definition under measurement conditions. Instead of estimating the orthogonal spatial derivatives of the phase function, one can estimate the intensity gradients of the recorded fringe pattern. In the case of a prefiltered fringe pattern (with uniform background, uniform contrast and minimized noise) the intensity gradient vector has the same direction as the phase gradient vector. That way the orientation map can be calculated directly from the orthogonal derivatives of the fringe pattern intensities, which is the working principle of gradient methods [39, 45, 57, 62]. Another solution, called the plane fit method [31], is based on fitting a plane polynomial (within a given window) to the gray levels of the local fringes. The zero-direction derivative of the fitted plane is defined as the local fringe orientation (FO). The combined method uses both the plane-fit algorithm and the gradient method [36]. Firstly, the local phase
gradients are approximated by plane-fitting to the fringes, and then those gradients are used to estimate the FO. Nevertheless, the use of gradient and plane-fit algorithms requires careful adjustment of the calculation window size, which involves a trade-off between noise resistance (gained with a big window size) and higher resolution (achieved for a small window size). In order to determine the local fringe orientation, spin filters [26,28,29,32,33] and binary sign-maps [27,29] may also be used. Since in experimental reality we are always dealing with the presence of noise, some regularized methods [30,41,49,50,51,52] were proposed to smooth the estimated orientation maps. Other exemplary approaches to local fringe orientation map estimation rely on 2D energy operators [58], accumulated differences [34], the Fourier transform [42], the windowed Fourier transform [57], Principal Component Analysis [46,56] and two-frame methods, e.g., optical flow [63].
However, the currently proposed methods do not provide satisfactory robustness of fringe orientation estimation and may struggle when applied to more complex fringes (with higher local orientation variability and intensity noise). The results provided by the classical approaches strongly depend on the choice of specific algorithm parameters. To address these issues, we propose a new, fast and robust method for fringe orientation map estimation based on a convolutional neural network (CNN), called DeepOrientation. Neural networks are highly capable numerical tools for finding the relationship between their input and output signals, even when this relationship is complicated or impossible to define analytically [64]. Additionally, convolution is a basic operation describing the imaging process, so a CNN is a natural choice for the task developed in this paper. CNNs have already been successfully adopted at different stages of fringe pattern analysis, i.e., for fringe pattern filtration [65-68], defining the optimal window for the Fourier transform approach [69-71], phase extraction [72-76], phase unwrapping [77-82] and local fringe density map estimation [83]. Inspired by their success, we decided to apply a CNN to FO map estimation. In the literature there is a neural network-based solution for fringe pattern orientation estimation [84], but it is specialized for electronic speckle pattern interferometry (ESPI) fringe patterns. The construction of the output labels of its training dataset means that the maximum achievable accuracy is that of the gradient method [39,62] with denoising; considering that the CNN itself approximates the output labels with some level of error, the limit defined by the denoised gradient method can not only never be surpassed, but also never be reached. Since in our approach the output is defined using the definition of the FO map from the known simulated phase function, the proposed DeepOrientation is a standalone and versatile solution. Additionally, in our approach the input data size is preserved by the DeepOrientation architecture, so the FO map is estimated in every pixel without reducing the analysis resolution.
The paper is structured as follows. Section 2 introduces the issue of determining fringe orientation using a convolutional neural network. Section 3 contains a numerical evaluation of the proposed neural network-based technique for local fringe pattern orientation estimation using experimental and simulated data, comparing it with the combined plane-fit/gradient method (CPFG) [36]. Section 4 contains the application of DeepOrientation to HST-based fringe pattern phase estimation, comparing the obtained results with reference TPS-based phase maps. Section 5 concludes the paper.
## 2 DeepOrientation-based fringe orientation map estimation
Facing the numerical task of transforming a data input into the sought output, the solution may be found by an analytic definition of the searched relationship. Naturally, this approach is connected with a full understanding of the analyzed data and is mathematically solid. On the other hand, in many cases a straightforward definition of the relationship between data input and sought output may not be easy or even possible. As in the case of FO map estimation, a simple definition of the relationship between the input intensity of the fringe pattern and the output orientation map is not possible, since the fringe orientation by definition is calculated from the orthogonal derivatives of the phase function, and the phase function is hidden in the intensity distribution of the fringe pattern. The deep learning approach opens new possibilities for the development of algorithms solving the numerical problems one can encounter during scientific research. Deep neural networks, during the supervised learning process, can be taught to map the searched relationship without the need for its analytical definition. The relationship itself is defined by the parameters of the neural network layers, and the algorithmic solution obtained that way works as a "black box". We can feed in new data instances, unseen by the network before, and receive the corresponding outputs without the need to manually define any parameter values, which is a meaningful advancement over the majority of classical analytical methods. Nevertheless, because of this "black box" property, neural network-based solutions have raised legitimate concerns in the metrology community about using them to directly define the measurement output. For that reason, in our work, we highlight the use of a neural network not to fully replace mathematically sound phase estimation solutions (e.g., via the HST method) but to support them. The example discussed in this paper is the use of DeepOrientation to support the HST technique. Even if some neural network-based artifacts were introduced within the retrieved FO map, they should not jeopardize the final HST-based phase demodulation result, as shown in our previous studies [85].
### Definition of the training dataset
DeepOrientation network training is performed using an especially tailored, simulated dataset. We decided to simulate the training dataset with uniform background modulation and without any intensity noise. This assumption was made based on the existence of robust fringe pattern filtering (denoising and detrending) algorithms [86, 87, 24, 88, 89]; therefore, in experimental reality, well-filtered fringe patterns can be obtained. In general, the local fringe direction map is more interesting (and informative) for fringe pattern analysis, and for that reason its direct estimation by a neural network may seem like the most attractive solution. Nevertheless, in the case of carrier fringe patterns, fringes with a direction difference equal to \(\pi\) appear visually identical, which would confuse the convolutional neural network during the learning process.
The process of DeepOrientation training dataset preparation is presented in Fig. 1. Using the known simulated phase function, the fringe orientation map matching the simulated input fringe pattern can be calculated by definition from the orthogonal derivatives of the simulated phase function (Eq. 3). An important aspect to mention at this point is that in some applications (e.g., HST phase demodulation) the FO map, in its modulo \(\pi\) form, needs to be further unwrapped to its modulo \(2\pi\) form, the local fringe direction map. To be able to correctly perform the unwrapping procedure, the step value equal to \(\pi\) must be preserved. The CNN, due to the multiple convolution operations performed one after another, will blur out the crucial discontinuity lines in the fringe orientation map. This effect can be slightly minimized but never fully eradicated. For that reason, the FO map cannot be set directly as the DeepOrientation output, because that would make the unwrapping to the local fringe direction map impossible.
Now the first idea that may come to mind is to use the known orthogonal derivatives of the phase function as the DeepOrientation training data output. This approach, although seemingly attractive, is troublesome for the neural network learning process because of the evenness of the cosine function: with a change of sign of the phase function the signs of its orthogonal derivatives also change, while the cosines of both phase functions look the same. For that reason the interpretation of the data would be confusing for the neural network. Instead, another idea was formulated. The orientation angle at any point of the fringe orientation map can be described in complex form using vectorial notation. The troublesome discontinuities of the fringe orientation map can be removed by encoding it in the abovementioned way, in the form of two 2D matrices holding the cosine and sine functions of the orientation angle. Since the local fringe orientation (FO) map is a modulo \(\pi\) indicator, in order to use the full periodicity of the sine and cosine functions the doubled fringe orientation map was encoded in their argument:
\[FO(x,y)=\frac{\arg\big{(}\cos(2FO(x,y))+i\,\sin(2FO(x,y))\big{)}}{2}. \tag{4}\]
Thus, two maps, \(\cos(2FO)\) and \(\sin(2FO)\), define the neural network output. DeepOrientation inputs (\(I(x,y)\), see exemplary fringe patterns in Fig. 1) were generated as in Eq. (5):
\[I(x,y)=\cos(\varphi_{obj}(x,y)+\varphi_{carrier}(x,y))\,, \tag{5}\]
where \(\varphi_{obj}(x,y)\) is the object phase function, simulated as a sum of dozens (up to 50) of 2D Gaussian kernels, each one with random standard deviation and \((x,y)\) location, and \(\varphi_{carrier}(x,y)\) is the factor that generates carrier fringes with random orientation (\(\theta\)) and period (\(T\)):
\[\varphi_{carrier}(x,y)=x\,\frac{\cos(\theta)\,2\pi}{T}+y\,\frac{\sin(\theta)\,2\pi}{T}. \tag{6}\]
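A minimal simulation of one training pair following Eqs. (3)-(6) could look as follows; the Gaussian amplitude and width ranges and the carrier period range are illustrative assumptions, since the exact sampling ranges are not restated here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
y, x = np.mgrid[0:N, 0:N].astype(float)

# Object phase: random sum of up to 50 2D Gaussian kernels (ranges assumed).
phi_obj = np.zeros((N, N))
for _ in range(rng.integers(1, 51)):
    cx, cy = rng.uniform(0, N, size=2)
    s = rng.uniform(20, 120)
    phi_obj += rng.uniform(0.5, 3.0) * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * s**2))

# Carrier with random orientation and period, Eq. (6).
theta, T = rng.uniform(0, np.pi), rng.uniform(8, 40)
phi = phi_obj + (x * np.cos(theta) + y * np.sin(theta)) * 2 * np.pi / T

I = np.cos(phi)                                  # network input, Eq. (5)
dphi_dy, dphi_dx = np.gradient(phi)
fo = np.mod(np.arctan2(dphi_dx, dphi_dy), np.pi) # ground-truth FO, Eq. (3)
target = np.stack([np.cos(2 * fo), np.sin(2 * fo)])  # two-channel label

# Decoding a predicted (cos, sin) pair back to FO, Eq. (4):
fo_rec = np.mod(np.angle(target[0] + 1j * target[1]), 2 * np.pi) / 2
```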
### Proposed network architecture
The DeepOrientation network architecture, schematically presented in Fig. 2, was inspired by the work [72] and by its already successful adaptation to the somewhat similarly challenging task of local fringe density map estimation [84]. The DeepOrientation input is a grayscale image, in other words a one-channel 2D matrix. The network is built from convolutional layers and residual blocks. It is divided into different paths, in which the input image dimensionality is changed by max-pooling layers. At the end of each path the results are upsampled to match the input image height and width, and the results from all paths are then concatenated to define the input of the final convolutional layer. The last convolutional layer defines the DeepOrientation output as two channels with height and width matched to the input image. During further analysis two parameters will be adjusted to optimize the network architecture and adapt it to the specific task of FO map estimation: the number of paths and the number of filters in the convolutional layers (including those building the residual blocks). Increasing those two parameters makes the network architecture more complex. Because in our approach the training dataset is simple and was used to grasp the general relationship between the fringe pattern and the underlying orientation map, it was crucial to prevent the network from overfitting to the training data. To that end, residual blocks with skip connections were chosen.
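To make this description concrete, here is a schematic PyTorch stand-in for the multi-path design; the residual block depth, kernel sizes and bilinear upsampling are illustrative assumptions, not the published configuration.

```python
import torch
from torch import nn

class ResBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(c, c, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))      # skip connection

class OrientationNet(nn.Module):
    def __init__(self, n_paths=2, n_filters=110):
        super().__init__()
        self.stem = nn.Conv2d(1, n_filters, 3, padding=1)
        self.paths = nn.ModuleList()
        for p in range(n_paths):
            layers = [nn.MaxPool2d(2) for _ in range(p)]   # coarser scale per path
            layers += [ResBlock(n_filters), ResBlock(n_filters)]
            if p:                                          # back to input size
                layers.append(nn.Upsample(scale_factor=2 ** p,
                                          mode="bilinear", align_corners=False))
            self.paths.append(nn.Sequential(*layers))
        self.head = nn.Conv2d(n_paths * n_filters, 2, 1)   # cos(2FO), sin(2FO)
    def forward(self, x):
        x = torch.relu(self.stem(x))
        return self.head(torch.cat([path(x) for path in self.paths], dim=1))
```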
The training process was performed on a training dataset containing 2400 512x512 px images. During the training, the mini-batch size was equal to 1 and the initial learning rate was \(10^{-4}\). The learning rate was updated every 5 epochs and reduced by a factor of 5 to help the loss function get out of local minima. The ADAM optimizer was used as the solver for training the network and the mean-squared-error function was used as the loss function. The learning process lasted for 30 epochs, which was enough for the networks to train, since no significant further decrease of the loss function was observed afterwards. Networks were trained on a computer with an AMD Ryzen 9 5900X 12-Core 3.70 GHz processor and an NVIDIA GeForce RTX 3080 graphics card with 12 GB of memory, which allowed a single network to be trained in between 200 and 2000 minutes, depending on the architecture complexity. It is worth highlighting that this time-consuming training process needs to be performed only once for a given architecture. After the training, networks can reconstruct the orientation map of a 512x512 px fringe pattern image in less
Figure 1: Training and working principle of DeepOrientation convolutional neural network.
than a second. Considering the available memory on our GPU, networks with a bigger number of filters and paths could only be trained with a mini-batch size equal to 1. To keep the learning process consistent among all networks, we used the same mini-batch size for all trainings.
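The reported optimization settings translate into a short training skeleton such as the one below; the single convolutional layer and the random tensors are mere placeholders for the real architecture and the 2400-image training set.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(1, 2, 3, padding=1)            # placeholder for DeepOrientation
data = TensorDataset(torch.randn(8, 1, 64, 64),  # placeholder fringe inputs
                     torch.randn(8, 2, 64, 64))  # placeholder sin/cos targets
loader = DataLoader(data, batch_size=1, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Reduce the learning rate by a factor of 5 every 5 epochs, as reported.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.2)
criterion = nn.MSELoss()

for epoch in range(30):
    for fringe, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(fringe), target)
        loss.backward()
        optimizer.step()
    scheduler.step()
```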
### Influence of the neural network architecture complexity on the learning accuracy
In pursuit of the optimal neural network architecture for DeepOrientation, two parameters were considered: the number of paths with different downsampling and the number of filters in the convolutional layers. Increasing each of those parameters increases the complexity of the neural network architecture. In total, 24 different configurations were tested, with the number of paths varying from 2 to 5 and the number of filters (per path) varying from 30 to 130 with a step of 20, as can be seen in Fig. 3. Our study allowed us to understand the general relationships between network complexity, accuracy and calculation time. The performance of the developed neural networks was tested with the use of two datasets with different definitions of data instances. The dataset called the validation set (600 512x512 px images) was used to test the performance of the neural networks during training and is of the same origin as the training dataset. The second dataset, called the test set, is also based on simulations (Eq. 5), but the object phase functions included there were simulated in a completely different manner in order to validate the generalization ability of the proposed DeepOrientation network. The test set consisted of 5 different \(\varphi_{obj}(x,y)\) functions: (1) a 2D function with 3 maxima and 2 minima (simulated using the MATLAB 'peaks' function, obtained by translating and scaling Gaussian distributions), (2) a group of 5 HeLa cells with shapes close to spherical, (3) a group of 2 HeLa cells with oblong shapes, (4) a blurred binary mask of a human hand and (5) a group of 23 grains of rice. For each of those functions, 140 fringe patterns were generated with different carrier fringe periods and orientations, and with different fringe curvatures (introduced by changing the dynamic range of the \(\varphi_{obj}(x,y)\) function). An exemplary test set image may be seen in Fig. 4(a).
Choosing the optimal neural network architecture for the specific task of local fringe pattern orientation map estimation is a complex issue, which needs to be carefully analyzed. The training strategy picked for DeepOrientation was based on the assumption of a simple simulated training dataset (without noise, background or amplitude modulation). The trained network is subsequently supposed to work for a wide range of fringe pattern characteristics, where the phase function may not necessarily be describable in the same way as the phase functions included in the training dataset. For that reason, we need to be especially careful not to introduce overfitting in a wider sense than in standard neural network training. Even if the neural network is not overfitted in the sense of only being able to successfully analyze the data introduced during training, it can still 'overfit' by assuming that all data outside the
Figure 2: Scheme of the developed DeepOrientation convolutional neural network architecture.
training dataset are of the same characteristics and origin (shape of fringes, optical measurement method used and studied object type). In other words, we want to find a solution leading to the estimation of the FO map from the cosine pattern, but without the strong restriction that the phase function must be describable in the way proposed in the training dataset simulation.
In Fig. 3 the results of the performance analysis for different levels of neural network architecture complexity are presented. Looking at the curves in Fig. 3(a) estimated with the use of the validation dataset, one can notice that with the increase of the filter number, adding extra paths does not influence the accuracy of the results. For filter numbers greater than 90, all neural networks achieved similar accuracy regardless of the number of paths. Nevertheless, it needs to be highlighted that with the increase of architecture complexity, the ability of the neural network to fit the training dataset increases. As just discussed, with the chosen training strategy we do not want to fit perfectly only to the training dataset. Observing the Fig. 3(a) curves estimated for the test dataset, the first aspect one can notice is the increase of the RMSE value, which is perfectly understandable since the origin of the test data is different from the training dataset (as it would be in different experimental realities - setups, objects) and some of the data included in the test dataset featured higher phase gradients than the validation dataset. This can be clearly seen in the error maps presented in Fig. 4, in which the highest errors are visible around the edges of the HeLa cells, where the phase gradients are the highest. Nevertheless, the error values are still at a reasonable level, especially considering the main planned application of the DeepOrientation network, which is to support HST-based phase estimation. Apart from the obvious change in the error values, the shapes of the test curves also changed in comparison with the validation curves. The minimum RMSE was achieved for the neural network with two paths and 110 filters; therefore this configuration was chosen for the final DeepOrientation architecture. The two-path architecture limits the complexity of the possible neural network input-output relationship, preventing too strong a fit to the training dataset structure, while 110 filters guarantee that the network architecture is complex enough to capture the general relationship (since for that number of filters there was no noticeable error difference obtained on the validation dataset for different numbers of paths).
The detailed error analysis of the neural networks' outputs generated for an exemplary fringe pattern from the test dataset is presented in Fig. 4. One can notice that, in general, with the increase of neural network complexity, implemented either by increasing the number of filters or of paths, the presented error maps become darker, which indicates that the mean error value is decreasing. On the other hand, the error map estimated for the DeepOrientation architecture (i.e., 110 filters and 2 paths) has lower errors in the regions of high phase gradient (see the circular cell fragment visible at the bottom). The presented error maps are estimated as the absolute value of the difference between the sine of the known, ground truth doubled FO map and the sine output of the neural networks. We present only the results connected with the sine output, because the maps estimated for the cosine output are complementary and do not contribute new information to the discussion.
Figure 3: The performance of neural network architectures with different level of complexity trained to estimate fringe pattern orientation maps: (a) the mean RMSE values calculated on validation and test datasets and (b) calculation time of single data instance.
An additional factor considered while choosing the DeepOrientation network architecture was calculation time. From the perspective of the algorithm's user, one of the most important pieces of information is how long it will take to process the data. For that reason, the time needed for the calculation of a single data instance is presented in Fig. 3(b). The reported calculation times were estimated with the use of a typical computing unit, a personal laptop (Intel Core i7-7700HQ 2.80 GHz processor and NVIDIA GeForce GTX 1060 graphics card). The obtained values confirm that unnecessary augmentation of the neural network architecture complexity is undesirable.
## 3 Numerical evaluation of DeepOrientation
The analysis comparing our proposed DeepOrientation approach with the classical CPFG method [35] using simulated data is presented in Fig. 5, and using experimental data in Figs. 6 and 7. Since local orientation maps consist of angle information, in order to preserve their periodic nature we introduce the orientation error (OE) as:
\[OE=\sqrt{\frac{1}{N_{x}N_{y}-1}\sum_{x=1}^{N_{x}}\sum_{y=1}^{N_{y}}\left[\sin\left(FO(x,y)-FO_{ref}(x,y)\right)-\mu\right]^{2}}, \tag{7}\]
where \(N_{x}\) and \(N_{y}\) define the image size, \(FO_{ref}(x,y)\) is a reference local fringe orientation map and \(\mu\) is the mean of \(\sin\left(FO(x,y)-FO_{ref}(x,y)\right)\). In other words, the orientation error may be considered a modified RMSE, in which the straightforward difference between the retrieved map and its ground truth is replaced by the sine of that difference. The orientation error converges to 0 if the
Figure 4: Error analysis of developed neural networks. (a) Analyzed fringe pattern from test dataset; (b) underlying phase function; ground truth outputs of DeepOrientation neural network: (c) sine and (d) cosine of 2FO; (e) ground truth FO map and (f) its unwrapped version: local fringe direction map; (g) error maps of sin(2FO) output for all analyzed neural network architectures.
\(FO(x,y)-FO_{ref}(x,y)\) is equal to an integer multiple of \(\pi\), which is a desirable feature since the orientation map is of the modulo \(\pi\) form.
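Eq. (7) translates directly into code; the sketch below assumes `fo` and `fo_ref` are 2D arrays in radians.

```python
import numpy as np

def orientation_error(fo, fo_ref):
    """Orientation error of Eq. (7): an RMSE on the sine of the angular
    difference, so differences equal to multiples of pi contribute zero."""
    d = np.sin(fo - fo_ref)
    mu = d.mean()
    return float(np.sqrt(np.sum((d - mu)**2) / (d.size - 1)))
```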
### Comparison of DeepOrientation with classical approach on simulated data
The fringe pattern series used for the analysis in Fig. 5 were simulated according to Eq. (5) and Eq. (6), where T=14, \(\theta\) = 0 and \(\varphi_{obj}(x,y)\) is described by the MATLAB peaks function, with the dynamic range controlled by multiplication by a coefficient \(a\) varying from 0 to 10.
In the case of the CPFG method, the parameter that needs to be set is the size of the window in which the orientation angle is estimated. The smaller the window size, the greater the accuracy of the local orientation estimation. Nevertheless, a small window size is not immune to the presence of noise, and for that reason in many cases it is recommended to set a bigger window size. Since DeepOrientation works on prefiltered data, in order to provide a fair comparison between the two algorithms we use the prefiltered data also for the classical approach throughout the paper. This can be considered a novel modification of CPFG aimed at its automation (no need for tailoring the window size) and increased robustness, via unsupervised variational image decomposition (uVID) fringe prefiltering [87] and HST-based fringe normalization [23]. For that reason the window size can be chosen arbitrarily small, so the value 2 was used in all presented cases. We tested the CPFG accuracy using different window sizes and in the majority of cases (if the denoising was performed correctly) the window
Fig. 5: Comparison of the performance of DeepOrientation approach and classical one (CPFG [35]) using simulated fringe patterns. (a) The orientation errors of both methods calculated for different levels of phase modulation, (b), (c), (d) exemplary fringe patterns with high (a=10), medium (a=5) and low (a=0) phase modulation, respectively, (e), (f), (g) noisy versions of fringe patterns from (b), (c), (d), respectively, (h), (i), (j) the ground truth FO maps for (b), (c), (d), respectively, orientation error maps estimated by (k), (l), (m) DeepOrientation and (n), (o), (p) CPFG method for (b), (c), (d), respectively and orientation error maps estimated by (q), (r), (s) DeepOrientation and (t), (u), (w) CPFG method for (e), (f), (g), respectively.
size equal to 2 provided the best results. It can be seen that for a low level of phase modulation (\(a<1\)) the CPFG method provides higher accuracy of the retrieved local orientation maps. As shown in Figs. 5(d), 5(j), 5(m) and 5(p), the DeepOrientation-based results have a small fringe-like error, while for such simple cases and a perfectly fitted window size the classical CPFG approach provides an error-free result. Nevertheless, with the increase of the phase modulation level (and therefore the complication of the fringe pattern shape itself) the superiority of the DeepOrientation approach is clearly visible. It is also worth mentioning that the orientation error values presented in Fig. 5(a) were calculated after neglecting the border effects, which are obvious in the case of the CPFG method even for a small window size. Additionally, DeepOrientation is more resistant to noise errors than the CPFG method, which can be clearly seen in Fig. 5(a). If noise is present, as in the case of Figs. 5(e)-5(g), where Gaussian noise of std=0.1 was added to the data from Figs. 5(b)-5(d), DeepOrientation provides smoother orientation maps than the CPFG method with the smallest window size. The CPFG error could be minimized by adjusting the window size to match the DeepOrientation accuracy, which shows how troublesome and crucial parameter adjustment can be for a classical method.
### Experimental verification of the accuracy of DeepOrientation-based local fringe orientation map estimation
The performance of the proposed DeepOrientation solution was also tested using experimentally recorded fringe patterns and compared with a classical, well-developed solution, the CPFG method [35]. All analyzed experimentally recorded data were prefiltered with the use of uVID [87] (where the noise part of the decomposition is estimated with the use of BM3D) and normalized to the 0-1 range with the use of the HST approach [23] before calculating the orientation map, either with DeepOrientation or with the CPFG. The first real-life example we have chosen contains complicated, low-frequency fringe patterns recorded during a temporal phase shifting (TPS) study of a glass plate in a Twyman-Green interferometer; the fringe patterns are presented in Figs. 6(a)-6(e). Having the complete TPS series, we were able to precisely calculate the reference phase map, since the TPS algorithm (as a multi-frame fringe pattern analysis algorithm) is the most accurate phase demodulation method, especially in the case of sparse closed fringes. Using this reference phase map and the definition of the FO map (Eq. 3), the reference FO map was calculated; it can be seen in Fig. 6(p). One can notice that the presented FO map is very noisy. This is due to the fact that the 5-frame TPS algorithm is not fully resistant to the presence of noise, and unfiltered intensity noise is transferred to the retrieved phase map. The noise effect is further amplified in the case of FO map estimation because of the required numerical gradient calculations. For that reason, the denoised version of the estimated FO map (obtained using the block-matching 3D denoising (BM3D) algorithm [86] on every analyzed intensity frame) is presented in Fig. 6(r), and that map will be used as the reference for estimating the orientation error values. As can be clearly seen by analyzing the orientation error values shown in Table 1, in all cases (for all single-shot fringe pattern frames) DeepOrientation provided better results than the CPFG method. Additionally, comparing the DeepOrientation results (Figs. 6(f)-6(j)) with the classical approach results (Figs. 6(k)-6(o)), the first ones have better preserved edges (at the modulo \(\pi\) steps), which is especially important as one of the planned uses of DeepOrientation is the support of single-fringe-pattern HST-based phase estimation. The reason is that the FO map unwrapping procedure [61] needs clear, well-preserved step values to provide correct unwrapping.
To evaluate DeepOrientation on biological data, Fig. 7, we collected 10 phase-shifted interferograms of a group of HeLa cells on a Linnik interferometer [90]. As above, we used the TPS method aided with BM3D denoising [86] to reconstruct the cell phase, which was then used to obtain the reference FO map, Fig. 7(b). Next, we prefiltered one of the collected interferograms with the uVID algorithm and obtained orientation maps with the DeepOrientation, Fig. 7(c), and CPFG, Fig. 7(d), algorithms. Both methods returned results that were close to the reference map, with an orientation error equal to 0.1843 for Fig. 7(c), 0.1925 for Fig. 7(d), 0.1191
for Fig. 7(g), 0.1579 for Fig. 7(h), 0.1672 for Fig. 7(k) and 0.1916 for Fig. 7(l). However, as can be observed in the zoomed parts of the reconstructed maps (Figs. 7(f)-7(h) and 7(j)-7(l)), the CPFG reconstruction has some unexpected orientation jumps along the fringe profile, whereas the DeepOrientation reconstruction is much smoother. This indicates that DeepOrientation is more robust against fringe patterns being transferred to the orientation map than the CPFG method.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Frame & 1 & 2 & 3 & 4 & 5 \\ \hline DeepOrientation & 0.1627 & 0.1562 & 0.1722 & 0.1802 & 0.1764 \\ \hline CPFG & 0.1684 & 0.1628 & 0.1764 & 0.1893 & 0.1806 \\ \hline \end{tabular}
\end{table}
Table 1: **Numerical analysis of the accuracy of estimated results from Fig. 6.**
Figure 6: Experimentally recorded TPS series of interferograms with phase shift equal to \(\pi\)/2: (a-e) subsequent interferograms, (f-j) FO maps calculated by DeepOrientation, (k-o) FO maps calculated by CPFG method [35], (p) FO map calculated from TPS estimated phase function, (r) FO map calculated from TPS estimated phase function with BM3D denoising.
Figure 7: One of the recorded fringe pattern images of the HeLa cells (a), reference local orientation map obtained from the TPS retrieved phase (b), reconstructed local orientation maps from the single prefiltered fringe pattern image with the use of DeepOrientation (c) and CPFG (d) methods. Zoomed parts of the (a)-(d) images inside red (e)-(h) and green (i)-(l) boxes.
## 4 The influence of DeepOrientation on the accuracy of HST-based single-shot fringe-pattern phase estimation
One of the possible applications of DeepOrientation is guiding the phase demodulation process in the Hilbert spiral transform [23]. As a result of the HST, the quadrature fringe function is obtained, with a phase shift equal to \(0.5\pi\) introduced between the input \(s(x,y)\) and the output \(s_{H}(x,y)\). An important thing to emphasize is that the HST needs a zero-mean signal as input; therefore successful fringe pattern background removal is of the essence. Additionally, it is recommended to minimize the intensity noise to improve the quality of the retrieved phase map. The HST input signal can therefore be described as:
\[s(x,y)=b(x,y)\cos\big{(}\varphi(x,y)\big{)}, \tag{8}\]
and then output signal follows as:
\[s_{H}(x,y)=-b(x,y)\sin\big{(}\varphi(x,y)\big{)}. \tag{9}\]
Finally, the phase function can be calculated as:
\[\varphi(x,y)=\tan^{-1}\left(\frac{s_{H}(x,y)}{s(x,y)}\right). \tag{10}\]
Using the HST nomenclature [23] the quadrature function can be described as:
\[s_{H}(x,y)=-i\exp[-i\beta(x,y)]\,F^{-1}\big{\{}S(u,v)\,F\big{\{}s(x,y)\big{\}}\big{\}}, \tag{11}\]
where \(F\) denotes the Fourier transform, \(F^{-1}\) the inverse Fourier transform, \(S(u,v)\) is the spiral phase function defined in the spatial frequency \((u,v)\) domain and \(\beta(x,y)\) is the LFD map. The LFD map is instrumental as it guides the phase demodulation process. It is especially important in the case of a very complicated, overlapping fringe pattern spectrum. A correct LFD map helps to avoid sign ambiguity errors in closed (concentric) fringe pattern phase demodulation.
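A compact numerical sketch of Eqs. (8)-(11) is given below; the discrete spiral phase function and the handling of its DC term follow one common convention, and the overall sign of the retrieved phase depends on that convention, so this is an illustration rather than the authors' exact implementation.

```python
import numpy as np

def hst_phase(s, beta):
    """HST phase demodulation sketch: `s` is a prefiltered, zero-mean fringe
    pattern and `beta` an unwrapped local fringe direction map (radians)."""
    N, M = s.shape
    u = np.fft.fftfreq(M)[None, :]
    v = np.fft.fftfreq(N)[:, None]
    r = np.maximum(np.sqrt(u**2 + v**2), 1e-12)
    S = (u + 1j * v) / r                         # spiral phase function S(u,v)
    S[0, 0] = 0.0                                # zero the DC term
    sH = -1j * np.exp(-1j * beta) * np.fft.ifft2(S * np.fft.fft2(s))  # Eq. (11)
    return np.arctan2(np.real(sH), s)            # Eq. (10), up to sign convention
```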
We would like to highlight that DeepOrientation is not employed here to directly determine the phase function, the outcome of the optical measurement. Using a neural network to replace a mathematically rigorous phase estimation algorithm could raise legitimate metrological concerns. For that reason, in our work the HST phase calculations are only supported by the DeepOrientation neural network, which constitutes our novel approach. DeepOrientation allows the estimation of the FO map, which is afterwards unwrapped [61] to the local fringe direction map and used to guide the HST-driven phase estimation process.
To prove that DeepOrientation is a valuable tool in terms of aiding the HST algorithm with phase retrieval, Fig. 8, we collected 3 data series, each consisting of 5 phase-shifted interferograms: of HeLa cells, an exemplary one shown in Fig. 8(a), of LSEC cells, an exemplary one presented in Fig. 8(e), and of a phase test target, an exemplary one depicted in Fig. 8(i).
Figure 8: One of the recorded interferograms of HeLa cells (a), LSEC cells (e) and phase test target (i). Reconstructed reference phase maps from TPS data (b),(f),(j), reconstructed phase maps by HST (c),(g),(k) and reconstructed local fringe direction maps with the use of DeepOrientation algorithm (d),(h),(l). Phase maps are given in range 0-18 (b),(c), 0-6 (f),(g) and 0-8 (j),(k) rad.
Next, from those interferograms we retrieved the reference phase maps with the use of the TPS algorithm aided by the BM3D method, Figs. 8(b), 8(f) and 8(j), respectively. After that, from each data series we filtered a single interferogram with the uVID algorithm [87], which was then provided to DeepOrientation to reconstruct the local fringe pattern orientation map. Those maps were then unwrapped with the use of the phase unwrapping algorithm presented in [61] to obtain the local fringe direction maps, Figs. 8(d), 8(h) and 8(l). In the end, the filtered fringe patterns along with the obtained fringe direction maps were supplied to the HST algorithm to reconstruct the phase maps, Figs. 8(c), 8(g) and 8(k). One can notice that the HST-based results estimated with the use of the single-frame approach compare favorably with the highly accurate multi-frame approach. To be exact, the RMSE for the HST-based results is equal to 0.0132 rad for Fig. 8(c), 0.0132 rad for Fig. 8(g) and 0.0521 rad for Fig. 8(k). This corroborates DeepOrientation-guided HST for quantitative phase imaging of living biosamples and challenging technical objects.
## 5 Conclusions
In this paper, we have proposed an accurate, robust and fast numerical solution for local fringe orientation map estimation, called DeepOrientation, based on neural networks and deep learning. Fringe patterns themselves are an example of ideal data for the neural network training process. Even if the underlying phase function varies drastically between different measurements, fringe patterns generally have a similar structure, as most of them can be described by a spatially self-similar cosine function. That makes the learning process easier, and we have shown that reliable network parameters can be learned based on a relatively small training dataset, not highly diverse in terms of phase function characteristics. DeepOrientation works well even for data where the underlying phase function significantly differs from the ones included in the training dataset, due to the general self-similarity of all fringe patterns. The validity and effectiveness of DeepOrientation were corroborated both on simulated and experimental data and compared favorably with the classical approach. It should be noted that once the DeepOrientation training is finished, the parameters do not need to be further adjusted, as the trained network generalizes sufficiently. We have provided a solution that was tested on a wide range of fringe patterns and can be used on new fringe data instances without additional adjusting or retraining. Additionally, DeepOrientation fills a gap in the search for increasingly accurate fringe pattern analysis tools. As was shown, it can be successfully employed to guide the single-shot phase demodulation process in the Hilbert spiral transform, and there are plenty of other possible applications for it [26-59].
### Funding
This work has been partially funded by the National Science Center Poland (OPUS 2020/37/B/ST7/03629 and PRELUDIUM 2021/41/N/ST7/04057). Studies were funded by FOTECH-1 project granted by Warsaw University of Technology under the program Excellence Initiative: Research University (ID-UB). MC work was supported by the Foundation for Polish Science (FNP) and by the Polish National Agency for Academic Exchange under the Iwanowska programme.
### Disclosures
The authors declare no conflicts of interest.
### Data Availability.
Data may be obtained from the authors upon reasonable request. Trained DeepOrientation model is made freely available in Ref. [91]. |
2303.07735 | Can neural networks do arithmetic? A survey on the elementary numerical
skills of state-of-the-art deep learning models | Creating learning models that can exhibit sophisticated reasoning skills is
one of the greatest challenges in deep learning research, and mathematics is
rapidly becoming one of the target domains for assessing scientific progress in
this direction. In the past few years there has been an explosion of neural
network architectures, data sets, and benchmarks specifically designed to
tackle mathematical problems, reporting notable success in disparate fields
such as automated theorem proving, numerical integration, and discovery of new
conjectures or matrix multiplication algorithms. However, despite these
impressive achievements it is still unclear whether deep learning models
possess an elementary understanding of quantities and symbolic numbers. In this
survey we critically examine the recent literature, concluding that even
state-of-the-art architectures often fall short when probed with relatively
simple tasks designed to test basic numerical and arithmetic knowledge. | Alberto Testolin | 2023-03-14T09:30:52Z | http://arxiv.org/abs/2303.07735v1 | Can neural networks do arithmetic? A survey on the elementary numerical skills of state-of-the-art deep learning models
###### Abstract
Creating learning models that can exhibit sophisticated reasoning skills is one of the greatest challenges in deep learning research, and mathematics is rapidly becoming one of the target domains for assessing scientific progress in this direction. In the past few years there has been an explosion of neural network architectures, tasks, and benchmark data sets specifically designed to tackle mathematical problems, reporting notable success in disparate fields such as automated theorem proving, numerical integration, and discovery of new conjectures or matrix multiplication algorithms. However, despite these impressive achievements it is still unclear whether deep learning models possess an elementary understanding of quantities and symbolic numbers. In this survey we critically examine the recent literature, concluding that even state-of-the-art architectures often fall short when probed with relatively simple tasks designed to test basic numerical and arithmetic knowledge.
**Keywords**: artificial intelligence; neuro-symbolic systems; large language models; number embeddings; numerical cognition; numeracy; mathematical reasoning
## 1 Introduction
Although many animal species exhibit an approximate understanding of numbers and quantities (Dehaene, 2011; Nieder, 2016), formal mathematics is a peculiarity of _Homo Sapiens_ that emerged through thousands of years of cultural evolution (Nunez, 2017; Beller et al., 2018; O'Shaughnessy et al., 2022). Mathematical reasoning requires us to deploy most of our finer-grained cognitive abilities, including sophisticated pattern recognition skills, language understanding, symbolic processing, and abstract thinking, making it one of the highest achievements of human intellect. It is therefore not surprising that the scientific community has always regarded mathematical and logical reasoning as crucial steps in building intelligent machines (Newell and Simon, 1956; Bundy, 1983).
However, although computers excel at crunching numbers, solving mathematical problems remains a formidable challenge for artificial intelligence (Choi, 2021). On the one hand, grounding structured mathematical knowledge into some form of intrinsic meaning is a long-standing problem in symbolic AI (Searle, 1980; Harnad, 1990). On the other hand, neural networks have always lagged in learning math, and this limitation has traditionally been considered an essential feature of their very nature, which is rooted in statistical pattern recognition abilities rather than the use of explicit syntactic rules (Fodor and Pylyshyn, 1988; Marcus, 2018). Mathematical reasoning poses well-known challenges for connectionist models: the symbols used in mathematical formulas appear as arbitrary tokens, which need to be manipulated according to well-defined rules entailing compositionality and systematicity. Furthermore, mathematical knowledge extracted from a set of examples should generalize well beyond the observed distribution, enabling extrapolation through the discovery of 'first principles'.
Despite these challenges, the recent successes of deep learning have rekindled interest in the idea that neural networks might be able to acquire high-level reasoning abilities and thus exhibit symbolic behavior (Santoro et al., 2021). Indeed, although deep networks struggle to grasp even basic concepts such as the meaning of 'integer number' (Trask et al., 2018), in the past few years several models have demonstrated impressive capabilities in solving complicated mathematical tasks. For example, sequence-to-sequence architectures were shown to be able to learn to perform function integration and solve ordinary differential equations, sometimes with greater accuracy than popular math software packages (Lample and Charton, 2019). Deep learning models have also been successfully used for automated theorem proving (Lee et al., 2019; Polu and Sutskever, 2020; Wang and Deng, 2020) or to assist expert mathematicians in formulating conjectures and establishing new fundamental results in pure mathematics (Davies et al., 2021). Last year, deep reinforcement learning discovered a more efficient algorithm to perform matrix multiplication (Fawzi et al., 2022), while fine-tuning a pre-trained language model on computer code allowed university-level math questions to be solved at a human level (Drori et al., 2022).
These stunning achievements partially stem from the introduction of well-curated, large-scale data sets containing mathematical problems annotated with the corresponding solutions, but also from the design of novel (often _ad hoc_) architectures that can more effectively process numerical symbols and mathematical notation. Improvements in many tasks have also been enabled by the creation of large-scale language models, which exhibit surprising 'out of the box' numerical abilities that can be further refined through fine-tuning and prompting strategies. However, as I will highlight in the present survey, these findings do not imply that such models fully understand the semantics of numbers and basic arithmetic. In fact, their performance on relatively simple numerical tasks is often brittle, suggesting that we might need to improve the elementary skills of these models to bootstrap more reliable mathematical capabilities. This hypothesis is also supported by the extended literature on child development and education, which has shown that basic numeracy skills such as counting, quantity comparison, understanding of number ordering, and the base-ten positional numeral system are strong predictors of later mathematics achievement (Jordan et al., 2009; Claessens and Engel, 2013; Nguyen et al., 2016).
The survey is structured as follows: I will initially review the main tasks and data sets that have been proposed to train and test the elementary arithmetic abilities of deep learning models. These include numerical problems encoded using natural language ('math word problems') or simple mathematical formalism (e.g., multi-digit addition and subtraction), as well as other basic numeracy tasks. I will then present the main neural network architectures that have been proposed to solve this kind of problems, which include general-purpose deep learning models, but also _ad hoc_ modules specifically tailored to process mathematical notation. I will finally review the arithmetic abilities of large language models and the main strategies that have been adopted to inject number semantics into word embeddings1.
Footnote 1: This survey focuses on basic numerical and arithmetic skills; for a more general overview of deep learning models for mathematical reasoning the reader could also refer to Lu et al. (2022).
## 2 Elementary numerical tasks and data sets
### Math word problems
Math Word Problems (MWPs) are rapidly becoming a standard benchmark for AI models (for a recent review, see Faldu et al., 2021). They are narrative problems that are used to assess the general understanding of numbers and operations through everyday life situations and are one of the most common types of numerical task encountered by children in primary schools. MWPs are framed in natural language and vary in complexity depending on the type of arithmetic operations they require, and whether such operations involve small or large operands.
Most MWPs data sets used in AI research are either curated from educational or online resources, derived by processing other available data sets, or synthetically created. One of the first large-scale data sets was Dolphin18K (Huang et al., 2016); it contains 18,000 elementary math word problems taken from an online math repository, annotated with ground-truth information using a semi-automatic procedure. A similar data set is Math23K (Wang et al., 2017), which contains 23,161 MWPs in Chinese crawled from online education websites. A step forward in data set creation was implemented in the AQuA data set (Ling et al., 2017), which contains 100,000 problems annotated not only with the correct answer, but also with the sequence of steps ('answer rationale') required to derive the solution. Problems were generated by starting from a set of seed problems, which were then modified through crowdsourcing. This approach for building a large-scale MWPs data set was further refined in MathQA (Amini et al., 2019) by introducing a more precise language to define operation programs representing the intermediate problem solution steps. It contains 37,200 problems taken from the AQuA data set, annotated with the corresponding lists of multiple-choice options and aligned operation programs, again using a crowdsourcing platform. An example of such problems is given in Fig. 1A.
In most of the cases discussed above, the authors implemented some control procedures to limit the possibility that crowdworkers could create duplicate versions of the same problem (e.g., by copying and pasting online problems or by proposing trivial modifications to the problem text). However, the research community recently pointed out that these data sets actually contain many problems with overlapping structure or very similar content (Miao et al., 2020). This makes a large fraction of MWPs solvable through simple heuristics, for example by treating the text as a bag of words or even ignoring the question during the generation of the answer, leading to a significant overestimation of model performance (Patel et al., 2021). These shortcomings called for the design of more controlled data sets. One of them is ASDiv (Miao et al., 2020), which contains only 2,305 problem instances that are, however, demonstrably more heterogeneous than previous corpora thanks to the use of a metric introduced to measure lexicon usage diversity. This design principle was also adopted for the creation of a larger-scale benchmark called GSM8K (Cobbe et al., 2021), which contains 8,500 high-quality math problems at the elementary school level, created by human problem writers. This data set was similarly built with the goal of featuring high linguistic diversity, while relying on relatively simple math concepts: problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations. Another recently proposed benchmark is SVAMP (Patel et al., 2021), a data set containing 1,000 simple (one-unknown) arithmetic word problems of grade level up to 4. It was created by applying a set of variations to existing problems sampled from the ASDiv data set (Miao et al., 2020), carefully designed to probe whether the model's answer actually depends on the question ('question sensitivity'), whether the model has the capacity to detect a change in reasoning arising from subtle changes in the problem text ('reasoning ability'), and whether the answer remains invariant to superficial structural changes that do not alter the problem semantics ('structural invariance').
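To make the shortcut issue concrete, the following is a minimal sketch of the kind of question-agnostic bag-of-words baseline used to expose such artifacts; the data and names are illustrative, not taken from any of the cited papers. If a classifier like this scores well on a benchmark, surface cues are leaking the answer.

```python
# Hypothetical sketch: predict the required operation from the problem body
# alone, with the question sentence removed, using only word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for (problem body without the question, gold operation) pairs.
train_texts = ["John had 5 apples and ate 2 of them",
               "A shop packed 3 boxes with 4 pens each"]
train_ops = ["subtraction", "multiplication"]

baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(train_texts, train_ops)          # bag-of-words only
print(baseline.predict(["Mary had 7 pears and ate 3 of them"]))
```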
### Problems encoded using simple mathematical notation
Other benchmarking approaches directly probe mathematical knowledge using numerical symbols and formal notation. In this case, the challenge is mostly to demonstrate that the model can extrapolate beyond the range of numbers and operations encountered during training, for example by counting a greater number of items or solving arithmetic problems with more (and possibly longer) operands.
In this setting, test problems are normally generated using synthetic procedures. One landmark work (Trask et al., 2018) proposed to evaluate neural networks using simple function learning tasks (i.e., arithmetic operations), in some cases requiring an initial step of perceptual processing, as in the 'MNIST Digit Counting' and 'MNIST Digit Addition' tasks. Model performance is then assessed on held-out values from within the training range (interpolation) or from outside of the training range (extrapolation). A similar procedure was also adopted in more recent studies, with the goal of systematically characterizing extrapolation capabilities across numerical ranges (Madsen and Johansen, 2020; Cognolato and Testolin, 2022; Fujisawa and Kanai, 2022) or of investigating 'length generalization' as the ability to extrapolate from short problem instances to longer ones (Anil et al., 2022). Another class of basic tasks that is particularly challenging in the extrapolation regime involves the translation from number words to quantities, as in the 'language to number translation task' (Trask et al., 2018) or, _vice versa_, from quantities to number words. These tasks probe whether the compositional structure of number words and number symbols is learned in a systematic way. An example of such problems is given in Fig. 1B.
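The interpolation/extrapolation protocol itself is simple to reproduce. Below is a minimal sketch with illustrative ranges and a toy addition task, showing how such splits are typically constructed:

```python
# Train on operands from one numerical range; evaluate on held-out values
# inside that range (interpolation) and outside it (extrapolation).
import numpy as np

rng = np.random.default_rng(0)

def make_addition_split(n, low, high):
    a = rng.integers(low, high, size=n)
    b = rng.integers(low, high, size=n)
    return np.stack([a, b], axis=1).astype(float), (a + b).astype(float)

X_train, y_train = make_addition_split(10_000, 0, 100)       # training range
X_interp, y_interp = make_addition_split(1_000, 0, 100)      # same range
X_extrap, y_extrap = make_addition_split(1_000, 100, 1_000)  # outside range
```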
One of the most popular synthetic benchmarks containing math problems encoded using formal notation is the Mathematics data set (Saxton et al., 2019). It was built with the main goal of providing a controlled set of problems, for example by making sure that test problems contained questions that were never seen during training. The generation method makes it possible to classify problems according to math domain, difficulty, and the need to perform interpolation or extrapolation. The data set contains 2 million free-form question/answer pairs, encoded as sequences of characters, spanning problems in algebra, arithmetic, calculus, number comparison, measurement, and probability. According to the authors, extrapolation abilities can be measured along different axes, for example by introducing problems involving larger numbers, more numbers, and more compositions of basic operations. Examples of such problems are given in Fig. 1C.
### Higher-level math problems
Questions requiring numerical reasoning have also recently been included in general 'reading comprehension' benchmarks (Dua et al., 2019; Lin et al., 2020). Compared to MWPs, this kind of benchmark usually contains much longer language contexts, involves more open-domain questions, and requires deeper paragraph understanding, for example by entailing numerical reasoning over dates or the ordering of events in the text paragraph. An example of such problems is given in Fig. 1D. One of the most challenging benchmarks of this kind is NumGLUE (Mishra et al., 2022b), which contains approximately 100,000 problems covering eight different types of tasks, all of which require simple arithmetic understanding. Some problems are self-contained, while others require additional background knowledge to produce the final solution, such as commonsense reasoning (e.g., _"How many faces do 10 dice have?"_).
Another recently introduced data set is MATH (Hendrycks et al., 2021), which consists of 12,500 problems taken from high-school math competitions that span a variety of math domains, carefully annotated with step-by-step solutions. Given the difficulty of these problems, the authors also released a large-scale pretraining data set called Auxiliary Mathematics Problems and Solutions (AMPS), which contains over 100,000 Khan Academy problems with step-by-step solutions in LaTeX format; these exercises are used to teach human students concepts ranging from elementary math to multivariate calculus.
The authors of NumGLUE also recently proposed LILA (Mishra et al., 2022a), a 'unified' mathematical reasoning benchmark consisting of 23 mathematical reasoning tasks. It has been built by extending 20 existing data sets spanning a wide range of topics in mathematics, matched (in a semi-automatic way) with corresponding Python programs that serve as reasoning chains for each question in the benchmark. The authors also include an additional benchmark data set to measure performance specifically on out-of-distribution examples and to test model robustness to language perturbations, similar to Patel et al. (2021).
Overall, the performance level on these more challenging benchmarks is far from ceiling, and neural network models perform significantly worse than humans (Lewkowycz et al., 2022; Frieder et al., 2023). However, it should be noted that these data sets are also challenging for humans: university students have been estimated to reach around 40% on MATH, while a three-time IMO gold medalist attained 90% (Hendrycks et al., 2021).
## 3 Neural network models for numerical reasoning
### Generic deep learning architectures
Over the years, researchers have attacked numerical reasoning tasks using a variety of neural network approaches. Most of the initial work was based on generic architectures, such as long short-term memory networks (Hochreiter and Schmidhuber, 1997) and sequence-to-sequence models (Sutskever et al., 2014). The rationale for using a generic architecture is that learning to perform numerical reasoning might not be qualitatively different from the acquisition of domain-general 'abstract reasoning' skills and should thus be treated as a general (though particularly challenging) learning problem. However, it has been repeatedly pointed out that neural networks exhibit poor numerical extrapolation capabilities (Trask et al., 2018), and sequence-to-sequence or even advanced transformer-based architectures often fail in tasks requiring several intermediate calculations, where humans can instead easily find a solution (Saxton et al., 2019). The related question of whether and how recurrent networks generalize to sequences longer than those encountered during training has also been of enduring interest, and definitive solutions to such a challenging issue have not yet been found (Anil et al., 2022).
Figure 1: Examples of problems from representative data sets. A) MathQA (Amini et al., 2019). B) Language to number translation task (Trask et al., 2018). C) Mathematics (Saxton et al., 2019). D) NumGLUE (Mishra et al., 2022b).
The research community has been actively exploring novel domain-general neural architectures that might overcome these limitations. One interesting approach to tackle algorithmic learning (a.k.a. 'program synthesis') has been to augment neural networks with external memory modules, which enable systematic abstraction even from few examples and can (at least theoretically) learn to approximate any algorithmic procedure (Graves et al., 2016; Santoro et al., 2016). Interestingly, neural models equipped with external memory have indeed been shown to generalize well beyond their training range on binary addition and multiplication problems (Kaiser and Sutskever, 2015). Subsequent work has further refined these results by improving the format of the external memory, for example by showing that a grid-like memory representation can significantly improve out-of-distribution generalization on decimal addition, both in terms of digit length and number of operands (Kim et al., 2021; Cognolato and Testolin, 2022).
Interestingly, the idea of granting access to an external memory to solve complex problems is reminiscent of the notion of 'material representation' introduced in anthropology, which has been recently elaborated in the context of numerical cognition (Overmann, 2016). According to this view, abstract mathematical concepts would be a relatively recent cultural achievement, which emerged thanks to the spread of numerical manipulation tools (d'Errico et al., 2018). This perspective has recently been explored in computational models based on deep reinforcement learning, which can simulate the active interaction of a learning agent with external numerical representation devices (Sabathiel et al., 2022; Petruzzellis et al., 2023). A related stream of research seeks to improve large-scale models by granting them access to external tools (Parisi et al., 2022) or by constructing modular architectures equipped with discrete knowledge bases and reasoning components (Karpas et al., 2022).
### _Ad hoc_ deep learning architectures
Given the difficulty of building generic deep learning architectures that can achieve algorithmic generalization, many authors have instead focused on creating 'neuro-symbolic' systems that combine neural networks with rule-based numerical solvers, or on designing neural network modules specifically tailored for numerical reasoning.
One of the earliest attempts to solve math word problems using a neuro-symbolic approach consisted of a vanilla sequence-to-sequence model combined with a similarity-based retrieval model, with the goal of creating a hybrid MWPs solver (Wang et al., 2017). Several subsequent approaches exploited graph-based problem representations, which facilitate the solution of MWPs through the use of structured descriptions reflecting a tree-like algorithmic decomposition of the problem (Wang et al., 2018; Xie and Sun, 2019) or explicitly capturing the relationship between quantities and their attributes (Zhang et al., 2020). Another recent _ad hoc_ architecture incorporates numerical reasoning into the system by exploiting a 'numerically-aware' graph neural network (Ran et al., 2019): the model is composed of standard neural encoding and decoding modules, but also includes a 'reasoning module' that represents the MWP quantities using a graph, where the nodes correspond to the numbers appearing in the text and the edges encode magnitude relationships (i.e., 'greater than'). However, since the graph is pre-defined for each problem, such a model cannot deal with tasks entailing the generation of intermediate numerical results and has limited arithmetic reasoning capabilities. Overall, we can argue that models incorporating _ad hoc_ graph-based representations generally improve over standard seq2seq architectures, but their performance is still poor when tested using more controlled (though relatively simple) data sets (Miao et al., 2020; Patel et al., 2021).
A modular design was also adopted by a recent neuro-symbolic model developed to tackle reading comprehension problems involving numerical reasoning (Dua et al., 2019). In this architecture, a transformer-based module first predicts whether the problem answer is a count or an arithmetic expression, and then identifies the specific numbers involved in the expression. Once a proper arithmetic expression has been formed, it is given as input to a symbolic solver to produce the final answer. The model improved over the considered baselines; however, its testing accuracy was still far from the human level on the data set considered by the authors (44% _vs._ 94%).
Inspired by the fact that neural networks fail to learn to represent numbers outside of the range seen during training, others proposed to augment standard artificial neurons with _ad hoc_ modules biased to learn systematic numerical computation. For example, the 'Neural Arithmetic Logic Unit' is augmented with operators that can represent simple functions such as addition, multiplication, and power functions (Trask et al., 2018). The 'Neural Arithmetic Unit' generalizes this idea and achieves higher extrapolation capabilities by introducing a simplification of the parameter matrix, a sparsity regularizer, and a new multiplication unit (Madsen and Johansen, 2020). In general, these models demonstrate some extrapolation capabilities in simple arithmetic tasks, for example by accumulating only moderate test errors on additions involving two 1000-digit numbers when trained only on 10-digit numbers. However, accuracy is still far from ceiling, and these models do not easily generalize to problems involving a higher number of operands.
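As a reference point, here is a compact PyTorch sketch of a NALU-style cell in the spirit of Trask et al. (2018): a gate interpolates between an additive path and a multiplicative path computed in log space, with weights softly constrained toward {-1, 0, 1}. The initialization scale and naming are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class NALUCell(nn.Module):
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.W_hat = nn.Parameter(0.1 * torch.randn(in_dim, out_dim))
        self.M_hat = nn.Parameter(0.1 * torch.randn(in_dim, out_dim))
        self.G = nn.Parameter(0.1 * torch.randn(in_dim, out_dim))
        self.eps = eps

    def forward(self, x):
        # Effective weights are pushed toward {-1, 0, 1}, biasing the unit
        # toward exact addition/subtraction rather than arbitrary mappings.
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        add_path = x @ W
        # Multiplication, division, and powers become additions in log space.
        mul_path = torch.exp(torch.log(torch.abs(x) + self.eps) @ W)
        gate = torch.sigmoid(x @ self.G)
        return gate * add_path + (1 - gate) * mul_path
```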
### Large language models
One of the most exciting discoveries of the past few years has been to realize that large language models (LLMs) trained in an autoregressive way can exhibit a surprising level of competence in a variety of tasks 'out of the box', that is, without having been explicitly trained to solve such problems. This seems to be the case also for numerical reasoning: for example, GPT-3 is able to carry out basic arithmetic tasks such as two-digit addition without any additional fine tuning, with performance becoming progressively stronger moving from the zero-shot to one-shot to few-shot setting (Brown et al., 2020).
However, it turns out that the numerical knowledge of LLMs is often superficial and inaccurate: the calculation accuracy of GPT-3 rapidly decreases as the number of digits increases (Brown et al., 2020), and further increasing the model size does not necessarily improve extrapolation capabilities (Henighan et al., 2020; Rae et al., 2021); the text-to-text T5 model fails in simple numerical tasks, such as number comparison and number sorting, when probed outside the interpolation range (Pal and Baral, 2021); the mathematical abilities of the popular ChatGPT model are significantly below those of an average mathematics graduate student (Frieder et al., 2023). In general, even the largest models falter when required to perform multi-step mathematical reasoning, such as on the problems included in the MATH benchmark. It has been shown that the performance of LLMs on mathematical calculations strongly correlates with term frequency in the training data (Razeghi et al., 2022), suggesting that these gigantic models might obtain seemingly high performance from surface-level memorization, rather than understanding of arithmetic procedures.
One possibility to improve the numerical reasoning of LLMs is to fine-tune them using domain-specific data sets. For example, GPT-style models can solve many of the problems in the Mathematics benchmark once trained on math data sets (Henighan et al., 2020), and fine-tuned BERT-style and T5-style models achieve significant improvements also on more challenging benchmarks (Geva et al., 2020; Yang et al., 2021). Yet, these LLMs still fail even in simple numerical tasks that probe extrapolation capabilities, and sometimes performance is even worse than that of the original architectures (Pal and Baral, 2021).
Performance can also be improved by combining generative LLMs with post-processing verifiers, which are trained to judge the correctness of model-generated solutions: at test time, a fixed number of candidate solutions is sampled and the one ranked highest by the verifier is selected as the final output (Cobbe et al., 2021). In this specific case, the system was also further trained to rely on a calculator for solving arithmetic problems, by injecting calculation annotations into the training set; at test time, the calculator overrides sampling when the model produces the annotations. The idea of producing annotations that can be subsequently processed by a software calculator is also pursued in more sophisticated approaches, which train LLMs to generate computer code to solve complicated algorithmic tasks, rather than directly asking for the solution. In one recent demonstration of this method (Drori et al., 2022), the authors exploited an LLM pre-trained on text and fine-tuned on computer code (Chen et al., 2021) to generate computer programs that solve challenging (university-level) problems from the MATH benchmark. The performance gain was significant, though it should be emphasized that in these cases the task of producing the final solution is partially delegated to an external (e.g., Python) interpreter, which can also take advantage of external libraries to perform mathematical operations such as solving equations or taking limits.
#### Promoting step-by-step numerical reasoning
Another promising approach to improve the numerical abilities of LLMs involves the use of advanced _prompting_ strategies (a.k.a. 'in-context learning'), which make it possible to shape the model's behavior by providing a few input-output exemplars demonstrating the task, without actually tuning any model parameter (Brown et al., 2020). A recent line of work has shown that prompting strategies can be improved using 'scratchpads' (Nye et al., 2021) and 'chain-of-thought reasoning' (Wei et al., 2022), which significantly increase accuracy on multi-step mathematical reasoning problems. The idea is that rather than having to generate a final answer immediately, the model can first generate solutions that may contain intermediate computations (see examples in Fig. 2).
To achieve this, the scratchpad technique introduced by Nye et al. (2021) allows the model to produce an arbitrary sequence of intermediate tokens that are stored in a buffer (the scratchpad) and can be further processed before producing the final answer. The authors considered the task of learning long integer addition, showing that the use of scratchpads improves calculation accuracy in the extrapolation interval. This idea was taken further by Wei et al. (2022): the chain-of-thought is a series of intermediate natural language reasoning steps that are provided as input to the model during prompting, to explicitly demonstrate how to produce the final output. This technique was tested on arithmetic reasoning tasks, achieving striking performance gains compared to baseline models. Interestingly, it turns out that chain-of-thought prompting yields larger performance gains for more complicated problems, and mostly when it is applied to larger-scale models. At the time of publication, it established new state-of-the-art performance on the challenging GSM8K and SVAMP benchmarks.
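The mechanics are easy to illustrate. Below is a minimal, hypothetical sketch of a chain-of-thought prompt in the spirit of Wei et al. (2022); the second question is invented, and `generate` stands in for whatever decoding call a given LLM exposes.

```python
# One worked exemplar followed by the target question; the model is expected
# to imitate the step-by-step format before emitting the final answer.
exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)
question = ("Q: A baker made 24 rolls and sold 3 bags of 4 rolls. "
            "How many rolls are left?\nA:")
prompt = exemplar + question
# completion = generate(prompt)  # hypothetical decoding call of the LLM
```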
Similarly to what happens in recurrent models with adaptive computation time (Graves, 2016; Banino et al., 2021), these advanced prompting techniques allow the neural network to process the input information for as long as needed, depending on the complexity of the current problem. By encouraging the model to produce an explanation along with the answer, we also steer it towards solving problems by breaking them into smaller steps that logically follow from each other. Furthermore, the buffer makes it possible to store intermediate information for an arbitrary number of processing steps, removing the need to memorize all intermediate states in the network's activations. Last, but not least, this approach provides an interpretable window into the behavior of the model, suggesting how it might have arrived at a particular answer and providing opportunities to debug where the reasoning path went wrong. In fact, structured prompting techniques are reminiscent of the educational process adopted by teachers in primary schools, where the elementary steps for solving algorithmic problems (such as carrying out long additions) are explicitly written out to facilitate understanding.
These methods have only scratched the surface of the potential of prompting for eliciting high-level reasoning. One of the most advanced LLMs designed for solving mathematical reasoning problems is currently Minerva (Lewkowycz et al., 2022), which is based on a version of the PaLM language model that was fine-tuned on a high-quality dataset containing scientific and mathematical data, specifically built by crawling arXiv papers and web pages containing math formulas. The model exploits chain-of-thought prompting, and also generates multiple candidate solutions that are selected using a majority voting scheme. Minerva significantly outperforms the original PaLM model and established a new state-of-the-art performance on the GSM8K and SVAMP benchmarks, although its average accuracy remains below the human level.
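The majority-voting scheme over sampled solutions is simple to express; the sketch below assumes a hypothetical `sample_solution` function that draws one chain-of-thought sample from the model and returns the extracted final answer.

```python
from collections import Counter

def majority_vote(prompt, k=16):
    # Sample k candidate solutions and keep the most frequent final answer;
    # sample_solution is a hypothetical wrapper around the model's sampler.
    answers = [sample_solution(prompt) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```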
Another recent improvement has been the introduction of algorithmic prompting methods (Zhou et al., 2022), which involve providing a detailed description of the algorithm execution on running examples and using explicit explanations and natural language instructions to further remove ambiguity. This makes it possible to drastically increase the amount of detail included in the input rationales, while promoting the composition of basic skills to solve complex reasoning tasks. Such a prompting strategy outperforms existing prompting techniques on several algorithmic tasks, including arithmetic, and achieves much higher performance on addition problems, also in the extrapolation range.
#### Injecting numerical semantics into word embeddings
It turns out that the way numbers are represented in their 'surface form' has a strong influence on model performance in numerical reasoning tasks. Although some initial investigations suggested that standard embedding techniques could capture number semantics fairly well (Wallace et al., 2019), subsequent studies pointed out that these representations are in fact inadequate for dealing precisely with numbers (Naik et al., 2019) and that commonly used subword tokenization techniques disrupt magnitude information (Nogueira et al., 2021). For example, in GPT-3 a number like 1598 is tokenized as '15' and '98', while another format like 1,598 is split into three different tokens: '1', ',', and '598'.
Figure 2: Examples of numerical and math word problems solved using advanced prompting strategies. A) The scratchpad method forces the model to explicitly produce intermediate computation steps, which are iteratively refined to generate the final answer (Nye et al., 2021). B) A subset of chain-of-thought prompting exemplars that were given as input to the model to elicit step-by-step reasoning during problem solution (Wei et al., 2022).
Simple tricks to improve model performance are to adopt a character-level encoding of numbers, or to replace numbers with their corresponding scientific notation (Zhang et al., 2020b). Another effective workaround consists of using more meaningful surface representations, for example by explicitly providing the base-10 decomposition of the number (e.g., '832' becomes '8 100 3 10 2') (Kim et al., 2021a; Nogueira et al., 2021). More advanced approaches have tried to augment the standard embeddings of number words by explicitly injecting semantic information representing quantities (for a survey, see Thawani et al., 2021). One of the first attempts at learning better numeral embeddings (Jiang et al., 2020) proposed to first map numbers into a log-space (to compress larger values) and train either a self-organizing map or a Gaussian mixture model with the goal of creating latent vectors encoding 'number prototypes'. A similarity function is then used to project the input number onto the closest prototype. Another method forced the cosine similarity between the word embeddings of two numbers to reflect their actual distance on the number line (Sundararaman et al., 2020). Further refinements combine the prototype-based approach with the scientific notation encoding (Jin et al., 2021). Another interesting technique exploits the Online Encyclopedia of Integer Sequences, which includes notable series of numbers that are of particular mathematical interest, to train more meaningful number embeddings (Ryskina and Knight, 2021). The rationale was to learn number embeddings by looking at number co-occurrence across well-structured, regular number sequences, to see if this would lead to the emergence of encodings representing useful relational properties of the integers. For example, the authors discovered that one specific neuron in the embedding vector was positively activated for even numbers and negatively activated for odd numbers, suggesting the emergence of a localist encoding of an 'evenness' feature.
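As a concrete illustration of these surface-form manipulations, here is a minimal sketch of the explicit base-10 decomposition described above; the helper name is illustrative and not taken from the cited papers.

```python
def decompose_base10(number: str) -> str:
    """Rewrite '832' as '8 100 3 10 2', making place value explicit."""
    digits = number.strip()
    parts = []
    for i, d in enumerate(digits):
        power = len(digits) - 1 - i
        parts.append(d if power == 0 else f"{d} {10 ** power}")
    return " ".join(parts)

assert decompose_base10("832") == "8 100 3 10 2"
```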
Overall, these approaches for augmenting numerical embeddings improve the processing of small and medium-sized numbers, especially in linguistic tasks such as math word problems; however, none of them enables accurate manipulation (e.g., arithmetic) of larger quantities or of numbers that are not contained in the training vocabulary.
## 4 Conclusion
To summarize the main findings from the recent literature, we can generally argue that although neural network models can exhibit impressive mathematical reasoning skills, their basic abilities for representing and manipulating numbers are still unsatisfactory. Even the most advanced deep learning architectures, including large language models, often fail when probed with numerical problems that are posed in a different manner than the training cases or that involve quantities well outside the training distribution.
The state-of-the-art performance on GSM8K is still far from the human level, ranging from 58% for PaLM with 540 billion parameters, chain-of-thought prompting, and access to an external calculator (Chowdhery et al., 2022) to 78% for Minerva (Lewkowycz et al., 2022). The top accuracy on ASDiv and SVAMP is higher, reaching almost 90% for large language models with refined chain-of-thought prompting strategies (Wang et al., 2022). However, achieving high performance on these benchmarks might not be enough to demonstrate numerical understanding, as also pointed out by others (Davis, 2023). Not surprisingly, indeed, the extrapolation capabilities of these gigantic neural networks are still poor: when models are not equipped with external calculators, generalization to unseen (out-of-distribution) numerical ranges requires the discovery of algorithmic procedures, which still constitutes a formidable challenge for deep learning (Welleck et al., 2022) and is indeed considered one of the frontiers in neuro-symbolic research (Hitzler and Sarker, 2021). For example, Minerva achieves over 80% accuracy on 10-digit addition and over 20% accuracy on 18-digit addition, but calculation accuracy is never at ceiling even for easy (e.g., 5-digit) problems, which highlights that even such a sophisticated model still has a very limited understanding of elementary arithmetic. A similar conclusion applies to the extrapolation results presented in Zhou et al. (2022): a finer-grained algorithmic prompting strategy makes it possible to consistently solve addition problems with answers of up to 19 digits in length, even when training examples are limited to 5 digits. However, the accuracy never reaches 100%, and after 19 digits the model runs out of context.
As others advocate, I believe that building deep learning systems that can more reliably master basic numerical concepts and simple arithmetic should be a mandatory first step towards successfully tackling complex mathematical reasoning tasks (Mishra et al., 2022b). I further argue that the perspective offered by decades of research in educational, developmental, and cognitive sciences could provide important insights for the design of such learning systems, especially in terms of the design of training regimens and testing procedures. For example, primary education in mathematics places a strong emphasis on the acquisition of elementary algorithms as the basis for understanding the arithmetic of natural numbers, integers, fractions, and decimals, using a variety of representation formats that exemplify the semantics of the decimal place-value system (Sarama and Clements, 2009). Importantly, such knowledge is developed on top of the acquisition of number words, numeral systems and counting procedures (Rittle-Johnson et al., 2001; Carey and Barner, 2019), which are fostered even during the pre-school period (Anders et al., 2012). Furthermore, although language has a central role in this arduous learning process, it is not the only medium through which numerical knowledge is acquired (Gelman and Butterworth, 2005): a key role is also played by the development of basic visuo-spatial perceptual skills and geometrical intuitions (Dehaene, 2009; Kellman et al., 2010; Piazza, 2010), as well as by the mastery of embodied and material representational systems (Lakoff and Nunez, 2000; Bender and Beller, 2012; Overmann, 2016). Cognitive science also provides well-established operational definitions and testing procedures to assess the many facets of numerical understanding (Delazer et al., 2003; Clements et al., 2008; Purpura and Lonigan, 2015); the adoption of similar evaluation batteries in AI research would make it possible to better characterize the numerical skills of deep learning models and to more systematically identify their weaknesses.
At the same time, implementing computational models that more faithfully simulate the acquisition of basic numerical skills would constitute an important step toward improving the scientific understanding of our own mathematical learning abilities (Testolin, 2020). Deep learning has already provided key insights into the origin of our 'number sense', for example by demonstrating that approximate numerical representations can emerge in a variety of generative architectures (Stoianov and Zorzi, 2012; Zhao et al., 2018; Testolin et al., 2020; Boccato et al., 2021) and that number acuity becomes gradually refined through unsupervised learning (Testolin et al., 2020). Although these modeling efforts have not yet been successfully extended into the realm of symbolic mathematics, the rapid progress in artificial intelligence is opening exciting prospects for bridging this gap.
|
2304.00957 | Properties and Potential Applications of Random Functional-Linked Types
of Neural Networks | Random functional-linked types of neural networks (RFLNNs), e.g., the extreme
learning machine (ELM) and broad learning system (BLS), which avoid suffering
from a time-consuming training process, offer an alternative way of learning in
deep structure. The RFLNNs have achieved excellent performance in various
classification and regression tasks, however, the properties and explanations
of these networks are ignored in previous research. This paper gives some
insights into the properties of RFLNNs from the viewpoints of frequency domain,
and discovers the presence of frequency principle in these networks, that is,
they preferentially capture low-frequencies quickly and then fit the high
frequency components during the training process. These findings are valuable
for understanding the RFLNNs and expanding their applications. Guided by the
frequency principle, we propose a method to generate a BLS network with better
performance, and design an efficient algorithm for solving Poisson's equation in
view of the different frequency principle presenting in the Jacobi iterative
method and BLS network. | Guang-Yong Chen, Yong-Hang Yu, Min Gan, C. L. Philip Chen, Wenzhong Guo | 2023-04-03T13:25:22Z | http://arxiv.org/abs/2304.00957v1 | # Properties and Potential Applications of Random Functional-Linked Types of Neural Networks
###### Abstract
Random functional-linked types of neural networks (RFLNNs), e.g., the extreme learning machine (ELM) and broad learning system (BLS), which avoid suffering from a time-consuming training process, offer an alternative way of learning in deep structure. The RFLNNs have achieved excellent performance in various classification and regression tasks; however, the properties and explanations of these networks were ignored in previous research. This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain, and discovers the presence of the frequency principle in these networks, that is, they preferentially capture low frequencies quickly and then fit the high-frequency components during the training process. These findings are valuable for understanding the RFLNNs and expanding their applications. Guided by the frequency principle, we propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation in view of the different frequency principles present in the Jacobi iterative method and the BLS network.
Random functional-linked types of neural networks (RFLNNs), Broad learning system (BLS), Frequency principle, Fourier domain, Jacobi iterative method.
## I Introduction
### _Background and related work_
Deep structure neural networks have achieved breakthrough success in a wide range of tasks such as image classification [1], segmentation [2], and speech recognition [3]. However, deep neural networks (DNNs) usually suffer from a time-consuming training process, involving the tuning of a huge number of hyperparameters and a complicated structure. Moreover, a complete retraining process is required when the structure is not sufficient to model the dataset. RFLNNs [4, 5, 6] provide an alternative scheme that removes the drawbacks of a long learning and retraining process, and they have achieved excellent performance in various application fields [7, 8, 9]. At an early stage, RFLNNs were developed from flat networks. Chen _et al._[10] proposed a random vector functional-linked neural network (RVFLNN) based on the study of single-layer feedforward neural networks (SLFNNs), and designed a fast learning algorithm to find the optimal weights of the networks. The RVFLNN not only avoids a time-consuming training process but also achieves universal approximation performance in function approximation [11, 4]. To adapt to the explosive growth of data in size and the sharp increase in dimension, Chen & Liu [5, 12] proposed a broad learning system (BLS) based on the basic idea of the RVFLNN, and developed an incremental learning algorithm for fast remodeling in broad expansion without a retraining process. Huang _et al._[6] chose infinitely differentiable functions as the activation functions in the hidden layer of SLFNNs, and proposed the extreme learning machine (ELM), which only tunes the parameters of the output layer (linking the hidden layer to the output layer). The RVFLNN, ELM and BLS are different types of RFLNNs, which have been widely used for various tasks such as data analysis [13, 14], traffic prediction [15, 16], time series analysis [17], and image detection and processing [18, 19, 20].
Recently, various improvements of the BLS and ELM have been presented to adapt to complex learning tasks. Feng & Chen [21] merged the Takagi-Sugeno fuzzy system into the BLS, which achieved state-of-the-art performance compared to nonfuzzy and neuro-fuzzy methods. Jin _et al._[22, 23], Bal _et al._[24], and Gan _et al._[25] introduced different regularizers to obtain robust BLS networks and robust ELM networks for different learning tasks. Owing to the powerful capacity of deep structures, various variants of RFLNNs combined with deep structure have been proposed in different application fields. For example, in [12, 26, 27], BLS variants that use recursive feature nodes and cascades of feature mappings were introduced, resulting in the recurrent-BLS, gated-BLS, and CFEBLS. Yao _et al._[28, 29] proposed a deep-structured ELM, which consists of several networks at different levels, to improve its effectiveness in dealing with noisy data. Liu _et al._[30] developed a stacked BLS by adding "neurons" and "layers" dynamically during the training process for multilayer neural networks.
These RFLNNs (flat or deep) have achieved excellent performance in natural language processing, image classification, time-series forecasting, etc.; however, the underlying principles of these random networks and why they generalize well are still unclear. Exploring these problems is of great significance for understanding the different types of RFLNNs, and offers important guidance for extending them to a wider range of applications.
### _Motivation and Contribution_
Similar to DNNs, RFLNNs are regarded as black-box inference machines. Although researchers have made different structural modifications to random networks to adapt them to different applications, the lack of theoretical analysis remains an important factor limiting their development. This motivates us to give some insights into the widely used RFLNNs from the frequency domain, including the ELM, BLS and stacked BLS with deep structure.
Xu _et al._[31, 32] found that the training process of DNNs initialized with small parameter values usually fits the target functions from low frequencies to high frequencies, a phenomenon named the frequency principle. Based on this valuable principle, researchers gave some explanations for the phenomena of early stopping and the generalization puzzle in deep networks, and designed various efficient techniques for scientific computing problems [33]. Motivated by this principle in deep networks, in this paper we explore the properties of RFLNNs from the perspective of the frequency domain. This is valuable for understanding how different RFLNNs (e.g., the BLS and stacked BLS) work, how they differ, and what their shortcomings are compared to DNNs. Moreover, the underlying principle is of great value for extending RFLNNs to a wider range of applications.
The main contributions of this paper are listed as follows:
1) We explore the properties of the ELM, BLS and stacked BLS from the perspective of the frequency domain, and find that the frequency principle holds in these random neural networks, i.e., they preferentially capture the low frequencies and then gradually fit the high-frequency components. In addition, we find that these random neural networks are more prone to instability in fitting the high frequencies than DNNs. The fitting accuracy of the stacked BLS with deep structure improves when the first two BLS blocks are added, but the fitting of each frequency is not improved by adding further BLS blocks, which may be a key problem of deep BLS networks to be solved in future research.
2) Based on the frequency principle in RFLNNs, we design a method to generate a BLS network with better prediction performance, using the fact that the BLS gradually captures the high-frequency components during the training process.
3) According to the different frequency principles present in the Jacobi iterative method and in random neural networks, we propose a more efficient algorithm (denoted BLS-Jacobi) for solving Poisson's equation.
The rest of this paper proceeds as follows. In Section II, different RFLNNs, including the ELM, BLS and stacked BLS, are introduced, and we explore the properties of these random neural networks from the perspective of the frequency domain in Section III. Based on the frequency principle present in RFLNNs, we propose a method to generate a BLS network with better performance in Section IV. In Section V, an efficient algorithm combining the advantages of the BLS and the iterative Jacobi method is proposed for solving Poisson's equation. Finally, the main conclusions and a further discussion of the RFLNNs are presented.
## II Random functional-linked neural networks
In this section, we briefly introduce some random neural networks, and the corresponding learning algorithms.
### _Random vector functional-linked neural networks_
The RVFLNN [11, 4] was first proposed to overcome the drawbacks of single-layer feedforward neural networks, namely trapping in local minima and long training times, and it achieves universal approximation of continuous functions with a fast learning property [34]. Chen _et al._[10] formulated these random flat networks as linear systems and proposed a stepwise updating algorithm to remodel the high-volume and time-varying data of the modern big-data era.
Fig. 1 shows the basic structure of the RVFLNN. The weights \(\mathbf{W}_{h}\) that link the input nodes to the enhancement nodes and the bias \(\boldsymbol{\beta}_{h}\) are randomly generated and remain unchanged thereafter. The RVFLNN can be formulated as:
\[\mathbf{Y}=\left[\mathbf{X},\ \xi(\mathbf{X}\mathbf{W}_{h}+\boldsymbol{e}_{N} \otimes\boldsymbol{\beta}_{h})\right]\mathbf{W}, \tag{1}\]
where \(\xi(\cdot)\) is a nonlinear activation function which acts element-wise, \(\mathbf{X}\in R^{N\times D}\) is the input matrix, \(\mathbf{Y}\in R^{N\times K}\) is the output matrix, \(\boldsymbol{e}_{N}=(1,\cdots,1)^{\mathrm{T}}\) is an \(N\)-dimensional vector, \(\otimes\) represents the Kronecker product, and the weights \(\mathbf{W}\) from the input nodes and enhancement nodes to the output nodes are the ones to be optimized. Pao _et al._[11] utilized a conjugate gradient search algorithm to find the optimal weights. In [10], Chen _et al._ proposed a fast stepwise updating algorithm, which can easily retrain the network when a new observation or a new neuron is added to the existing network.
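A minimal NumPy sketch of formulation (1) may make the construction concrete; the dimensions and the tanh nonlinearity are illustrative choices, since the paper leaves \(\xi\) generic. The hidden weights are drawn once at random, and only the output weights \(\mathbf{W}\) are fitted, here by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, L, K = 200, 5, 50, 1          # samples, inputs, enhancement nodes, outputs
X = rng.standard_normal((N, D))
Y = np.sin(X.sum(axis=1, keepdims=True))       # toy regression target

W_h = rng.uniform(-1, 1, (D, L))    # random, fixed input-to-enhancement weights
beta_h = rng.uniform(-1, 1, L)      # random, fixed biases
A = np.hstack([X, np.tanh(X @ W_h + beta_h)])  # [X, xi(X W_h + e_N (x) beta_h)]

W, *_ = np.linalg.lstsq(A, Y, rcond=None)      # only the output weights are learned
Y_hat = A @ W
```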
### _Extreme learning machine_
The ELM is a class of random neural networks first proposed by Huang _et al._[6]. The hidden nodes of the ELM network are chosen randomly, and only the output weights need to be determined. An ELM network with \(L\) hidden nodes can be mathematically modelled as
Fig. 1: Basic structure of the random vector functional-linked neural network
\[\mathbf{Y}=\mathbf{H}\mathbf{W} \tag{2}\]
where
\[\mathbf{H}=\begin{bmatrix}\xi(\mathbf{w}_{1}^{\mathsf{T}}\mathbf{x}_{1}+b_{1})&\cdots& \xi(\mathbf{w}_{L}^{\mathsf{T}}\mathbf{x}_{1}+b_{L})\\ \vdots&\cdots&\vdots\\ \xi(\mathbf{w}_{1}^{\mathsf{T}}\mathbf{x}_{N}+b_{1})&\cdots&\xi(\mathbf{w}_{L}^{ \mathsf{T}}\mathbf{x}_{N}+b_{L})\end{bmatrix}\]
is the hidden-layer output matrix of the neural network. For given target observations, the output weights can be obtained by solving a least squares problem. Unlike the RVFLNN, there are no links from the input nodes to the output nodes in the ELM network.
### _Broad learning system_
To adapt to the large-scale, high-dimensional data encountered in complex learning tasks, Chen _et al._[5] proposed the BLS network based on the underlying idea of the RVFLNN. Unlike the RVFLNN, the BLS takes features extracted from the raw data as input, which effectively deals with the high dimensionality of the original data. In addition, the BLS is convenient to remodel under structure expansion without retraining.
Fig. 2 outlines the basic structure of the BLS. First, it maps the input to a set of features to form the feature nodes
\[\mathbf{Z}^{i}=\left[\mathbf{Z}_{1},\cdots,\mathbf{Z}_{i}\right],\]
\[\mathbf{Z}_{i}=\mathbf{\varphi}_{i}(\mathbf{X}\mathbf{w}_{e_{i}}+\mathbf{\beta}_{e_{ i}}).\]
The \(j\)th group of enhancement nodes can be constructed by the random mapping
\[\mathbf{H}_{j}=\mathbf{\xi}_{j}(\mathbf{Z}^{i}\mathbf{w}_{h_{j}}+\mathbf{\beta}_{h_{j} }).\]
The first \(j\) groups of enhancement nodes are collected to form \(\mathbf{H}^{j}\equiv[\mathbf{H}_{1},\cdots,\mathbf{H}_{j}]\). The parameters \(\mathbf{w}_{e_{i}}\), \(\mathbf{\beta}_{e_{i}}\), \(\mathbf{w}_{h_{j}}\) and \(\mathbf{\beta}_{h_{j}}\) are all randomly generated.
The parameters from feature nodes and enhancement nodes to the output nodes, denoted as \(\mathbf{W}^{n,m}\), can be obtained by solving the minimization problem
\[\min_{\mathbf{W}^{n,m}}\left\|\mathbf{A}\mathbf{W}^{n,m}-\mathbf{Y}\right\|_{v }+\lambda\left\|\mathbf{W}^{n,m}\right\|_{u}, \tag{3}\]
where \(\mathbf{A}=[\mathbf{Z}^{n},\mathbf{H}^{m}]\), \(u\) and \(v\) represent some typical kinds of norm. When \(u,v=2\), the optimal values can be obtained by the ridge regression method
\[\hat{\mathbf{W}}^{n,m}=(\lambda\mathbf{I}+\mathbf{A}^{\mathsf{T}}\mathbf{A})^ {-1}\mathbf{A}^{\mathsf{T}}\mathbf{Y}. \tag{4}\]
Chen _et al._ further developed an incremental learning algorithm for newly incoming inputs and for the addition of feature nodes and enhancement nodes. Here we briefly introduce the incremental algorithm for the addition of enhancement nodes. For notational convenience, we assume that \(\mathbf{A}^{m}=[\mathbf{Z}^{n},\ \mathbf{H}^{m}]\) and \(\mathbf{A}^{m+1}=\left[\mathbf{A}^{m},\ \mathbf{\xi}(\mathbf{Z}^{n}\mathbf{w}_{h_{m+1}}+\mathbf{\beta}_{h_{m+1}})\right]\); then the output weights can be updated as follows after adding a group of enhancement nodes:
\[\mathbf{W}^{n,m+1}=\left[\begin{array}{c}\mathbf{W}^{n,m}-\mathbf{D}\mathbf{B}^{\mathsf{T}}\mathbf{Y}\\ \mathbf{B}^{\mathsf{T}}\mathbf{Y}\end{array}\right],\]
where \(\mathbf{D}=(\mathbf{A}^{m})^{\dagger}\xi(\mathbf{Z}^{n}\mathbf{w}_{h_{m+1}}+ \mathbf{\beta}_{h_{m+1}})\), \(\mathbf{C}=\mathbf{\xi}(\mathbf{Z}^{n}\mathbf{w}_{h_{m+1}}+\mathbf{\beta}_{h_{m+1}})- \mathbf{A}^{m}\mathbf{D}\), and
\[\mathbf{B}^{\mathsf{T}}=\begin{cases}\mathbf{C}^{\dagger},&\text{if }\mathbf{C}\not=\mathbf{0}\\ (\mathbf{I}+\mathbf{D}^{\mathsf{T}}\mathbf{D})^{-1}\mathbf{D}^{\mathsf{T}}(\mathbf{A}^{m})^{\dagger},&\text{otherwise}.\end{cases}\]
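A compact NumPy sketch of the ridge solution (4) and the enhancement-node update above may be useful; it assumes the feature/enhancement matrices have already been generated, and the tolerance and \(\lambda\) are illustrative choices.

```python
import numpy as np

def ridge_weights(A, Y, lam=1e-3):
    # W = (lam*I + A^T A)^{-1} A^T Y, Eq. (4).
    return np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ Y)

def add_enhancement_group(A_m, A_m_pinv, W, Y, H_new, tol=1e-10):
    # Incremental update after appending a group of enhancement nodes H_new.
    D = A_m_pinv @ H_new
    C = H_new - A_m @ D
    if np.linalg.norm(C) > tol:
        B_T = np.linalg.pinv(C)
    else:
        B_T = np.linalg.solve(np.eye(D.shape[1]) + D.T @ D, D.T @ A_m_pinv)
    W_new = np.vstack([W - D @ (B_T @ Y), B_T @ Y])
    return np.hstack([A_m, H_new]), W_new
```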
### _Stacked broad learning system_
In [30], Liu _et al._ proposed a variant of the BLS combined with a deep network structure, named the stacked BLS. The stacked BLS can be remodeled by dynamically adding "neurons" and "layers" during the training process, which mainly involves two repeated steps (generating a BLS block and stacking it on top of the stacked BLS). Fig. 3 shows a schematic plot of the stacked BLS. The corresponding incremental algorithm presented in [30] calculates not only the parameters linking the newly stacked block to the output but also the linking parameters of the enhancement nodes within the BLS block.
## III Properties of different types of RFLNNs
In this section, we explore the properties of RFLNNs (including the ELM, BLS and stacked BLS) from the perspective of the frequency domain. The frequency principle was proposed by Xu _et al._[32], who used Fourier analysis to shed light on DNNs. Several valuable tools have been developed based on this principle for scientific research applications. Motivated by these works, we provide some insights into RFLNNs in the following.
### _Fitting sampling function_
In this subsection, we use the ELM, BLS and stacked BLS networks to fit the sampling function
\[f(x)=\frac{\sin(x)}{x},\]
and then observe the fitting performance in the frequency domain. The Fourier transform of the sampling function is shown in the top left of Fig. 4, where the physical frequency (\([0,20\pi]\)) is replaced with the corresponding index (1:1:40). As discussed in [31], since frequency components other than the peaks are susceptible to the artificial periodic boundary condition implicitly applied in the Fourier transform [35], we only focus on the analysis of the convergence performance at the frequency peaks. The fitting results at the peak points obtained by the different methods are shown in Fig. 4. From Fig. 4, we can observe that the frequency principle holds in the ELM, BLS and stacked BLS, i.e., they tend to fit low-frequency components preferentially and then gradually fit the high frequencies. In addition, the fitting results of the BLS and stacked BLS are better than those of the ELM, which may be attributed to the fact that the BLS links both the feature nodes and the enhancement nodes to the output layer.
Fig. 2: Basic structure of the broad learning system
_Remark_: In the simulation experiments, we found that for complex functions these types of RFLNNs have difficulty capturing the high-frequency information, which may be a major defect of this kind of random network.
### _RFLNNs for image classification_
RFLNNs have achieved excellent performance in image classification tasks. Here, we verify the frequency principle present in the ELM, BLS and stacked BLS on several popular datasets, including two sets of handwritten digit images and three face-image datasets:
1. The USPS dataset consists of 9298 handwritten digit images;
2. The MNIST is a classical handwritten image dataset including 70000 digits;
3. The EXYAB dataset contains 2414 pictures of 38 persons, taken with different expressions and illumination;
4. The ORL dataset consists of 400 face pictures of 40 persons taken under different lights, times, and facial expressions;
5. The UMIST dataset contains 575 pictures of 20 persons of different races, sexes, and appearances.
For high-dimensional input, the curse of dimensionality is prone to occur when computing the Fourier transformation. As discussed in [33, 36], we just consider the first principal component of the input images when performing Fourier analysis.
Denote by \(\{\mathbf{X}\in\mathcal{R}^{N\times D},\mathbf{Y}\in\mathcal{R}^{N\times K}\}\) the input images and labels, where \(N\) is the number of images, \(D\) is the number of pixels per picture, and \(K\) is the number of categories. We apply the following preprocessing to \(\mathbf{X}\) before performing the Fourier analysis. First, the input data \(\mathbf{X}\) is transformed to \(\tilde{\mathbf{X}}=\left[\tilde{\mathbf{x}}_{1}^{\intercal},\cdots,\tilde{\mathbf{x}}_{N}^{\intercal}\right]\), where
\[\tilde{\mathbf{x}}_{j}=\mathbf{x}_{j}-\frac{1}{N}\sum_{k=1}^{N}\mathbf{x}_{k},\ \ j=1,\cdots,N.\]
We then compute the principal component of the covariance matrix \(\mathbf{C}=\tilde{\mathbf{X}}^{\intercal}\tilde{\mathbf{X}}\), denoted \(\boldsymbol{p}\). Last, we project each observation onto the \(\boldsymbol{p}\)-direction and rescale the obtained values,
\[\bar{x}_{k}=\tilde{\mathbf{x}}_{k}\boldsymbol{p},\ k=1,2,\cdots,N,\]
\[x_{k}^{\prime}=\frac{\bar{x}_{k}-\min_{j}\bar{x}_{j}}{\max_{i}(\bar{x}_{i}-\min_{j}\bar{x}_{j})}\in[0,1],\ i,j,k=1,\cdots,N.\]
Denote by \(\mathbf{X}^{\prime}=[x_{1}^{\prime},\cdots,x_{N}^{\prime}]\) the projected inputs. As discussed in [37, 33], we consider the first dimension of the labels (denoted \(\boldsymbol{y}\)) and perform the nonuniform fast Fourier transform (NUFFT) on \(\mathbf{X}^{\prime}\), which yields
\[F[y](\alpha_{i})=\frac{1}{N}\sum_{k=1}^{N}y_{k}e^{-\rho ix_{k}^{\prime}\alpha_{i}},\ \ i=1,2,\cdots,z,\]
where \(\rho\) represents the frequency range and \(\alpha_{i}\in\mathbb{Z}\) is the frequency index. After processing the above steps, the original training data of the networks can be formulated as follows:
\[D_{y}=\{(\alpha_{1},F[y](\alpha_{1})),\cdots,(\alpha_{z},F[y](\alpha_{z}))\}.\]
A similar transformation is carried out on the output of the random neural networks (denoted \(\boldsymbol{\Psi}\)), which yields
\[D_{t}=\{(\alpha_{1},F[t](\alpha_{1})),\cdots,(\alpha_{z},F[t](\alpha_{z}))\},\]
Fig. 4: The upper left plot is the frequency distribution of the sampled data, including three peak points; The remaining plots are the fitting results observed at the three peak points during the training process of ELM, BLS and stacked BLS.
Fig. 3: Illustration of the stacked broad learning system
where \(t\) is the first column of \(\boldsymbol{\Psi}\). The relative error defined as follows is used to evaluate the fitting performance of different networks during the training process:
\[\Delta D(\alpha_{i})=\frac{\big{|}|F[t^{k}](\alpha_{i})\big{|}-|F[y](\alpha_{i}) |\big{|}}{|F[y](\alpha_{i})|}, \tag{5}\]
where \(t^{k}\) is obtained at the \(k\)th step.
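A minimal NumPy sketch of this evaluation pipeline is given below; a plain discrete transform at integer frequency indices stands in for the NUFFT, and \(\rho\) and the shapes are illustrative assumptions.

```python
import numpy as np

def project_and_rescale(X):
    # Project onto the first principal direction, then rescale to [0, 1].
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    x_bar = Xc @ Vt[0]
    return (x_bar - x_bar.min()) / (x_bar.max() - x_bar.min())

def fourier_coeffs(xp, y, freqs, rho=40 * np.pi):
    # F[y](alpha_i) = (1/N) sum_k y_k exp(-rho * 1j * x'_k * alpha_i)
    return np.array([np.mean(y * np.exp(-1j * rho * xp * a)) for a in freqs])

def relative_error(F_t, F_y):
    # Eq. (5): relative error of the magnitudes at each frequency point.
    return np.abs(np.abs(F_t) - np.abs(F_y)) / np.abs(F_y)
```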
Figs. 5-9 show the simulation results obtained by the different random neural networks on the different datasets. The frequency distributions shown in Figs. 5-9 suggest that the handwritten datasets USPS and MNIST are dominated mainly by low frequencies, while the other three face datasets contain more high-frequency information.
From the fitting results shown in Figs. 5-9, we can observe that these random types of neural networks usually capture the low frequencies quickly and then gradually fit the high-frequency components. The ELM and BLS networks are more prone to instability when fitting the high frequencies. The fitting results of the stacked BLS improve when adding the first two BLS blocks, while continuing to deepen the network contributes little to the fitting accuracy. To show this phenomenon more intuitively, we use the relative-error curve at each peak frequency to illustrate it, as shown in Figs. 5-9.
The simulation results confirm the existence of the frequency principle in RFLNNs, which provides an important perspective for understanding the properties of these random neural networks and is of great significance for extending their applications to more research fields.
## IV An improved method to generate BLS networks with better prediction performance
The frequency principle present in RFLNNs is of great importance for understanding these random networks and for making improvements to them. In [5, 30], the parameters of the feature nodes and enhancement nodes in the BLS network are always generated from a fixed distribution interval during the training process, which is inconsistent with the frequency principle of the network. In this section, we propose a method to generate a BLS network with better prediction performance, based on the fact that the BLS gradually captures the high-frequency components during the training process.
In the following, we first take the tanh function as an example to analyze the influence of the parameters on the activation function from the perspective of the frequency domain. For convenience, the one-dimensional case is discussed here. Performing Fourier analysis on the function
\[\sigma(wx+b)=\tanh(wx+b)=\frac{e^{wx+b}-e^{-(wx+b)}}{e^{wx+b}+e^{-(wx+b)}},\]
Fig. 5: Fitting results of the UMIST data. The upper left plot is the frequency distribution of the sampled data. The remaining plots are the fitting results observed at each peak point of the distribution. The bottom right plot is the fitting error at each frequency point in the process of increasing the BLS blocks.
Fig. 8: Fitting results of the EXYAB data.
Fig. 6: Fitting results of the USPS data.
Fig. 7: Fitting results of the MNIST data.
we have
\[\hat{\sigma}(wx+b)(\zeta)=\frac{2\pi i}{|w|}\exp(\frac{ib\zeta}{w})\frac{1}{\exp(- \frac{\pi\zeta}{2w})-\exp(\frac{\pi\zeta}{2w})} \tag{6}\]
Assume \(\frac{\pi\zeta}{2w}>0\); here we focus on the high frequencies, i.e., large \(\zeta\), which allows (6) to be approximated by
\[\hat{\sigma}(wx+b)(\zeta)\approx\frac{2\pi i}{|w|}\exp(\frac{ib\zeta}{w})\exp( -\frac{\pi\zeta}{2w}). \tag{7}\]
Similarly, when \(\frac{\pi\zeta}{2w}<0\), we can obtain the approximated equation:
\[\hat{\sigma}(wx+b)(\zeta)\approx\frac{2\pi i}{|w|}\exp(\frac{ib\zeta}{w})\exp( \frac{\pi\zeta}{2w}). \tag{8}\]
Since BLS gradually captures high-frequency components during training, it is natural to hope that the enhancement nodes and feature nodes added during the training process provide more high-frequency information. According to Equations (7) and (8), a large \(w\) is more beneficial for generating high-frequency information, which can also be seen intuitively from Fig. 10. Therefore, in this paper, we use a dynamically expanded interval to generate the parameters \(w\), instead of a fixed interval, during the training process (e.g., the fixed interval [-1, 1] was used in [12, 5]).
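A minimal sketch of this sampling strategy is given below; the linear expansion schedule (`base`, `growth`) is our own assumption, since the text only specifies that the interval expands dynamically across incremental steps.

```python
import numpy as np

def sample_node_weights(n_in, n_nodes, step, base=1.0, growth=0.5, rng=None):
    """Draw weights from a dynamically expanded interval [-s, s] instead of a
    fixed [-1, 1]; s grows with the incremental-learning step so that nodes
    added later can supply more high-frequency content (cf. Eqs. (7)-(8))."""
    rng = rng or np.random.default_rng()
    s = base + growth * step  # expanded half-width; this schedule is an assumption
    return rng.uniform(-s, s, size=(n_in, n_nodes))

# step 0 uses [-1, 1]; step 4 uses [-3, 3], biasing new nodes toward high frequencies
W0 = sample_node_weights(128, 64, step=0)
W4 = sample_node_weights(128, 64, step=4)
```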
Next, we verify the effectiveness of the proposed method for generating BLS networks based on the frequency principle on different datasets (including the handwritten digit image dataset MNIST and the face image datasets EXYAB and ORL). To exclude the influence of random factors, we run the original BLS training algorithm and the proposed method 100 times each. The boxplots of the prediction accuracies obtained by the two methods are shown in Figs. 11-13. From the figures, it is easy to observe that the BLS network generated by the proposed method usually achieves higher prediction accuracy than the original method. The comparison results suggest that the proposed algorithm, which takes the frequency principle of the BLS into account, effectively mitigates the insufficient fitting of high-frequency components and achieves better performance than the original training algorithm.
## V A novel method for solving Poisson's equation
Poisson's equation is an important class of partial differential equations, which appears in a wide range of theoretical and applied fields. The Jacobi iterative algorithm is a conventional and effective method for solving such problems. In [33, 38], Xu _et al._ designed a method (named DNN-Jacobi) for Poisson's equation according to the different frequency principles exhibited by DNNs and conventional methods.
Fig. 11: Comparisons of prediction accuracies of BLS networks generated by two different methods on MNIST dataset
Fig. 12: Comparisons of prediction accuracies of BLS networks generated by two different methods on EXYAB dataset
Fig. 10: The spectrograms for different parameters w
Fig. 9: Fitting results of the ORL data.
From Fig. 14, we can observe that when using the Jacobi iterative method, the high frequencies converge much faster than the low frequencies, which is opposite to the frequency principle present in DNN or BLS. Therefore, it is natural to combine neural networks and the Jacobi method to solve Poisson's equation. Considering that DNNs suffer from a time-consuming training process, in this section we design a method, named BLS-Jacobi, based on the frequency principle present in BLS. The BLS-Jacobi method consists of two parts: first, a BLS with \(M\) incremental steps is used to solve the Poisson's equation; then the output of the BLS is used as the initial value for the Jacobi iterative method.
Here we consider a 1-dimensional Poisson's equation
\[-\Delta u(x)=g(x),\ x\in\Omega=(-1,1), \tag{9}\] \[u(x)=0,\ x=-1,1,\]
and a 2-dimensional Poisson's equation
\[-\Delta u=f(x,y),\ (x,y)\in G=(0,1)\times(0,1),\qquad u|_{\partial G}=\begin{cases}0,&x=0\ \text{or}\ y=0\\ y^{2},&x=1\\ x^{2},&y=1\end{cases} \tag{10}\]
where \(g(x)=\sin(x)+4\sin(4x)-8\sin(8x)+16\sin(24x)\), \(f(x,y)=-2(x^{2}+y^{2})\) and \(\Delta\) is the Laplace operator.
As discussed in [33], the Poisson's equations here are solved by a central differencing scheme. For example, Eq. (9) is discretized into the following form:
\[-\Delta u_{i}=-\frac{u_{i+1}-2u_{i}+u_{i-1}}{(\Delta x)^{2}}=g(x_{i}),\ i=1,2, \cdots,n. \tag{11}\]
To express it more compactly, Eq. (11) can be written in matrix form:
\[\mathbf{A}\mathbf{u}=\mathbf{g}, \tag{12}\]
where
\[\mathbf{A}=\left(\begin{array}{cccc}2&-1&0&0&\cdots&0\\ -1&2&-1&0&\cdots&0\\ 0&-1&2&-1&\cdots&0\\ \vdots&\vdots&\cdots&&\vdots\\ 0&0&\cdots&0&-1&2\end{array}\right)_{n-1\times n-1}\]
\[\mathbf{u}=\left(\begin{array}{c}u_{1}\\ u_{2}\\ \vdots\\ u_{n-2}\\ u_{n-1}\end{array}\right),\ \ \mathbf{g}=(\Delta x)^{2}\left(\begin{array}{c}g_{1}\\ g_{2}\\ \vdots\\ g_{n-2}\\ g_{n-1}\end{array}\right).\]
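The sketch below shows a plain Jacobi solver for the system (12); passing the zero vector as `u0` gives the pure Jacobi method, while passing the BLS output as `u0` gives the warm start used by BLS-Jacobi. The convergence threshold and the helper names are assumptions.

```python
import numpy as np

def jacobi(A, g, u0, tol=1e-6, max_iter=100_000):
    """Jacobi iteration u <- D^{-1}(g - (A - D) u) for A u = g; u0 is the
    initial guess: zeros for pure Jacobi, or the BLS output for BLS-Jacobi."""
    D = np.diag(A)           # diagonal entries of A
    R = A - np.diag(D)       # off-diagonal remainder
    u = u0.copy()
    for _ in range(max_iter):
        u_new = (g - R @ u) / D
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# 1-D Poisson: tridiagonal A of size (n-1) x (n-1), as in Eq. (12)
n = 128
A = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
```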
The Jacobi iterative algorithm, the DNN-Jacobi algorithm and the proposed BLS-Jacobi algorithm are adopted to solve the Poisson's equations (Eq (9) and Eq (10)). Tables I and II list
\begin{table}
\begin{tabular}{l c c c} \hline \hline Preset accuracy & Jacobi & DNN-Jacobi & BLS-Jacobi \\ \hline
1e-1 & 0.4537 & 1.4914 & 0.0479 \\
1e-2 & 6.1470 & 5.5628 & 0.0479 \\
1e-3 & 13.5442 & 9.4029 & 0.0479 \\
1e-4 & 21.0617 & 12.9887 & 0.6758 \\
1e-5 & 30.0273 & 24.8210 & 2.0716 \\
1e-6 & 39.1745 & 33.5102 & 8.2575 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Time-consuming comparison of three different algorithms to achieve the preset accuracy (2-dimension Poisson’s equation, Unit: s)
Fig. 14: Frequency principle presented in DNN and Jacobi iterative method for solving Poisson’s equation redrawn from [33]
Fig. 13: Comparisons of prediction accuracies of BLS networks generated by two different methods on ORL dataset
the time required to achieve a preset accuracy when using the three algorithms to solve the 1-dimensional and 2-dimensional Poisson's equations. As shown in Tables I and II: to achieve the same accuracy, the Jacobi iterative method consumes significantly more time than the proposed BLS-Jacobi method; the DNN-Jacobi method is also more time-consuming than BLS-Jacobi due to the complex training process of the DNN, but it performs better than the pure Jacobi iterative method when solving the 2-dimensional Poisson's equation (in the 2-dimensional case, the matrix size in the discretized equation is \((n-1)^{2}\times(n-1)^{2}\), much larger than in the 1-dimensional case); the BLS-Jacobi method combines the fast training of BLS with the advantage of the Jacobi method in fitting high frequencies, which makes it the best performer among the three algorithms.
The above numerical simulation results confirm the effectiveness of the proposed BLS-Jacobi method, designed according to the frequency principle present in BLS.
## VI Conclusion
Random neural networks, which avoid a time-consuming training process, offer an alternative to DNNs. In this paper, we shed some light on RFLNNs from a frequency-domain perspective and observe the frequency principle in ELM, BLS and stacked BLS: they capture low-frequency components quickly and then gradually fit the high frequencies. The results further show that, for the stacked BLS, the fitting accuracy is not obviously improved when the number of added BLS blocks exceeds 2. This may be a defect of random neural networks with deep structure and is an interesting research topic for the future.
The frequency principle in RFLNNs is of great importance for making improvements and expanding their applications. Based on the frequency principle, in this paper we design a method that generates a BLS network with better prediction performance than the original method; meanwhile, we combine the BLS with the Jacobi iterative method to obtain a more efficient method (BLS-Jacobi) for solving Poisson's equation. The discovered principle of RFLNNs will play an important enlightening role for researchers in developing further applications.
|
2306.14168 | FastBCSD: Fast and Efficient Neural Network for Binary Code Similarity
Detection | Binary code similarity detection (BCSD) has various applications, including
but not limited to vulnerability detection, plagiarism detection, and malware
detection. Previous research efforts mainly focus on transforming binary code
to assembly code strings using reverse compilation and then using pre-trained
deep learning models with large parameters to obtain feature representation
vector of binary code. While these models have proven to be effective in
representing binary code, their large parameter size leads to considerable
computational expenses during both training and inference. In this paper, we
present a lightweight neural network, called FastBCSD, that employs a dynamic
instruction vector encoding method and takes only assembly code as input
feature to achieve comparable accuracy to the pre-training models while
reducing the computational resources and time cost.
On the BinaryCorp dataset, our method achieves a similar average MRR score to
the state-of-the-art pre-training-based method (jTrans), while on the
BinaryCorp 3M dataset, our method even outperforms the latest technology by
0.01. Notably, FastBCSD has a much smaller parameter size (13.4M) compared to
jTrans (87.88M), and its latency time is 1/5 of jTrans on NVIDIA GTX 1080Ti. | Chensen Huang, Guibo Zhu, Guojing Ge, Taihao Li, Jinqiao Wang | 2023-06-25T08:22:10Z | http://arxiv.org/abs/2306.14168v1 | # FastBCSD: Fast and Efficient Neural Network for Binary Code Similarity Detection
###### Abstract.
Binary code similarity detection (BCSD) has various applications, including but not limited to vulnerability detection, plagiarism detection, and malware detection. Previous research efforts mainly focus on transforming binary code to assembly code strings using reverse compilation and then using pre-trained deep learning models with large parameters to obtain feature representation vector of binary code. While these models have proven to be effective in representing binary code, their large parameter size leads to considerable computational expenses during both training and inference. In this paper, we present a lightweight neural network, called FastBCSD, that employs a dynamic instruction vector encoding method and takes only assembly code as input feature to achieve comparable accuracy to the pre-training models while reducing the computational resources and time cost. On the BinaryCorp dataset, our method achieves a similar average MRR score to the state-of-the-art pre-training-based method (JTrans), while on the BinaryCorp 3M dataset, our method even outperforms the latest technology by 0.01. Notably, FastBCSD has a much smaller parameter size (13.4M) compared to jTrans (87.88M), and its latency time is 1/5 of jTrans on NVIDIA GTX 1080Ti.
Binary Code, Similarity Detection, Neural Networks
occurrence frequency. In contrast to normalizing tokens, this approach reduces the number of tokens while preserving more token information. We selected TextCNN (Huang et al., 2017) and LSTM (Huang et al., 2018), which are well-established and widely used models for text classification and other natural language processing tasks, as our basic models. The TextCNN model, which is based on one-dimensional convolution, has the advantages of a small number of parameters and fast inference speed. On the other hand, the RNN-based LSTM model can easily extract the overall features of a token sequence. Recent studies have shown that the MLP-based MLP-Mixer (Wang et al., 2019) model achieves performance comparable to the Transformer on a series of vision tasks (Wang et al., 2019) and language tasks (Chen et al., 2019). Compared with the Transformer, the MLP-Mixer has a simpler architecture that utilizes only multi-layer perceptrons and does not incorporate self-attention. We modify the MLP-Mixer used in vision tasks to make it applicable to the BCSD task.
In summary, our study offers the following contributions:
* FastBCSD is easy to follow compared with other research methods, as it requires only assembly code strings and a lightweight neural network, while other methods necessitate additional features, numerous training techniques, and complex architectures.
* The performance of FastBCSD using TextCNN is comparable to the state-of-the-art (SOTA) model, jTrans (Wang et al., 2019), which is based on pre-trained models, but with a significantly reduced computational time (approximately 1/5 of jTrans) and a smaller number of parameters (approximately 15% of jTrans).
## 2. Related Work
In prior research, researchers commonly employed direct analysis of specific features of binary code. For instance, αDiff (Huang et al., 2017) used a CNN (Huang et al., 2017) model to extract internal features of each binary function, processing raw bytes without additional feature requirements. Subsequent research schemes, however, typically focused on seeking a vector to represent binary code, with the identification of a suitable representation vector serving as the crux of these approaches. The authors of DeepVSA (Huang et al., 2017) employ one-hot encoding of the raw bytes to obtain the representation vector of each instruction, using it to classify malicious software. In contrast, the authors of Gemini (Huang et al., 2018) construct an attributed control flow graph (ACFG) by manually extracting statistical features of the assembly code, such as the number of constants, and train a graph embedding network to generate representation vectors. The authors of (Wang et al., 2019) proposed a technique that represents binary code as a sequence of instructions and applies the word2vec algorithm (Wang et al., 2019) to obtain a vector representation for each instruction; a recurrent neural network based on LSTM (Huang et al., 2018) was then used to identify similar binary code. This method is similar to the approach adopted in SAFE (Huang et al., 2019). Instruction2Vec (Huang et al., 2019) first pre-trained tokens for opcodes and operands using word2vec (Wang et al., 2019) and represented each instruction as a vector combining an opcode vector and eight operand vectors, with a resulting vector size of N x 9 x vector-size, where N is the instruction length; a CNN (Huang et al., 2017) model was then used for training. Asm2Vec (Chen et al., 2019) employs random walks on the CFG (control flow graph) to sample instructions and uses a model similar to the PV-DM model to train function and instruction tokens to obtain representation vectors. Pre-trained models have achieved remarkable results in the field of NLP and have also been applied to the BCSD task in recent years with favorable outcomes. The OrderMatters study (Wang et al., 2019) proposes a semantic-aware neural network to extract semantic information from binary code. By pre-training binary code using BERT (Chen et al., 2019) on token-, block-, and graph-level tasks, the researchers found that the order of control flow graph (CFG) nodes plays a crucial role in detecting graph similarity; to exploit this, they utilized a convolutional neural network (CNN) to extract order information from the adjacency matrix, and the extracted features were integrated to form the final representation vector of the binary code. PalmTree (Huang et al., 2019) is an assembly language model pre-trained with the BERT architecture, using three pre-training tasks, namely Masked Language Model (MLM), Context Window Prediction (CWP), and Def-Use Prediction (DUP), to extract various features of assembly language; the researchers performed self-supervised training on a vast unlabeled binary corpus to generate universal instruction embeddings.
Although PalmTree generates general instruction embeddings through self-supervised training on a large-scale unlabeled binary corpus, its efficiency as a pre-trained language model is lower than that of traditional, non-pre-trained deep learning schemes such as Instruction2Vec (Huang et al., 2019). In the jTrans study (Huang et al., 2019), the researchers utilized a Transformer-based language model, with a basic structure similar to BERT (Chen et al., 2019), to embed the control-flow information of binary code and trained it for the binary code similarity detection task. Moreover, the researchers introduced a new binary dataset called BinaryCorp (Wang et al., 2019), which is currently the most diverse dataset; notably, this paper employs the BinaryCorp dataset for the experiments. The authors of UniASM (Huang et al., 2019) introduced a novel pre-training model for analyzing assembly code, inspired by UniLM (Chen et al., 2019); the model was trained on a self-constructed dataset and achieved promising results.
## 3. Design of FastBCSD
### Text Preprocessing
Each assembly function consists of multiple instructions, each comprising an opcode and operands. While the usage of opcodes in instructions is fixed, operands can vary due to the format characteristics of assembly language. In prior work, UniASM (Huang et al., 2017) treated the entire instruction as a single token, resulting in a large number of token types and the OOV (out-of-vocabulary) problem. Separating the opcode and operands of an instruction into distinct tokens was explored in prior studies as a means of reducing the number of token types in assembly code analysis. To further mitigate the impact of varying instruction content on token types, normalization techniques were employed; for example, special tokens such as [str] were used in the PalmTree (Huang et al., 2019) and jTrans (Wang et al., 2019) studies to represent strings in instructions. In our study, we hypothesize that the OOV problem does not have a substantial impact on the effectiveness of our model. Rather than applying extensive normalization techniques to the operands, we have pursued a strategy of
token mining, expanding the variety of tokens used in our analysis. Delimiters based on symbols commonly used in instructions, such as "+" and other punctuation marks as well as spaces, were utilized to decompose instructions into a multitude of tokens, resulting in hundreds of thousands of distinct tokens. We adopt a frequency-based filtering approach to select a subset of tokens from the large pool obtained by decomposing operands with these delimiters. Specifically, we increase the frequency of a token by one each time it appears in a training sample and filter out tokens with a frequency lower than a pre-defined hyperparameter F (in our final experiments, F is set to 32), resulting in a more manageable set of approximately 40,000 tokens. The proposed approach preserves more semantic information by retaining more token types and more commonly occurring tokens. Figure 1 depicts the details of the preprocessing procedure.
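A minimal sketch of this frequency-based filtering is shown below; the exact delimiter set in the regular expression is an assumption, since only some of the symbols are legible in the source.

```python
import re
from collections import Counter

DELIMS = re.compile(r"[+\-\[\],()\s]+")  # delimiter set is an assumption

def tokenize(instruction):
    # split an instruction such as "mov rax, [rbp-0x8]" into opcode/operand tokens
    return [t for t in DELIMS.split(instruction) if t]

def build_vocab(functions, min_freq=32):
    # keep tokens whose corpus frequency reaches F (= 32 in the final experiments)
    freq = Counter(t for fn in functions for ins in fn for t in tokenize(ins))
    return {t for t, c in freq.items() if c >= min_freq}
```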
### Embedding Vector Construction
In the jTrans (Wang et al., 2017) study, researchers transformed the assembly text into a sequence of tokens by concatenating the opcode and operands in the order of their appearance within each instruction. The length of the token sequence is determined by the number of opcodes and operands in the assembly text. However, this approach presents a notable issue: as instructions in assembly language usually consist of an opcode and multiple operands, the number of parsed tokens can easily exceed 512 when the number of instructions surpasses 256. Since jTrans employs the BERT architecture, it can only process token sequences with a maximum length of 512 tokens; longer token sequences are discarded, resulting in poorer representation performance for lengthier assembly texts.
In this paper, each assembly instruction is treated as a dynamic vector, and the length of the input sequence is determined by the number of instructions in the assembly code. The dynamic vector is obtained by concatenating an opcode token vector, multiple operand token vectors, and a positional vector along the feature dimension. To handle the varying length of instructions, a threshold K is used: instructions with token sequences longer than K are truncated, while special token vectors are used for padding when the number of tokens is less than K. In our experiments, we set the threshold K to 5. This value was chosen because the use of many symbols as separators in preprocessing can cause a single instruction to be parsed into many tokens, so a larger value of K is typically set to preserve more information for most instructions. In this approach, an assembly text composed of S instructions is represented as a two-dimensional array of initialized embedding vectors with dimensions S x H, where H denotes the dimensionality of a single token vector multiplied by K + 1. This vector can be fed directly into TextCNN, LSTM, and MLP-Mixer for training, as these models impose no restriction on the input sequence length, in contrast to the Transformer, which only supports input sequences shorter than 512 tokens. Figure 1 depicts the dimensionality of the instruction vector and the assembled text vector.
Figure 1. In the pre-processing stage of the code text, low-frequency tokens are removed and the number of tokens in each instruction is adjusted, following which the remaining tokens are converted into token embeddings. Subsequently, the token embeddings and a position embedding of each instruction are concatenated into an one-dimensional vector, serving as the instruction representation vector. All instruction vectors in the assembly code are concatenated into a two-dimensional vector, which is used as the input vector of the model.
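The following sketch illustrates the construction of the S x H input array described above; `tok_emb`, `pos_emb`, and `pad_vec` are hypothetical lookup helpers standing in for the learned embedding tables.

```python
import numpy as np

H_TOK = 192  # token embedding size
K = 5        # max tokens kept per instruction

def instruction_matrix(fn_tokens, tok_emb, pos_emb, pad_vec):
    """Map a function (list of per-instruction token lists) to an S x H array,
    with H = H_TOK * (K + 1): K token vectors plus one positional vector,
    concatenated along the feature dimension."""
    rows = []
    for s, toks in enumerate(fn_tokens):
        toks = toks[:K]                                   # truncate long instructions
        vecs = [tok_emb(t) for t in toks]
        vecs += [pad_vec] * (K - len(vecs))               # pad short instructions
        rows.append(np.concatenate(vecs + [pos_emb(s)]))  # append position vector
    return np.stack(rows)                                 # shape (S, 192 * 6)
```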
### Model Training
For training, a Siamese network framework (Beng et al., 2015) is utilized, with TextCNN serving as the model for feature extraction. As a type of text classification model, TextCNN is used in this study to extract the representation vector of assembly code. TextCNN follows a straightforward implementation approach. It receives a two-dimensional text vector (text length x vector dimension) as input and produces a text representation vector by sequentially passing through convolutional layers, activation functions, pooling layers, feature concatenation, and fully connected layers. The running process of the TextCNN model is presented in Figure 2. In addition to TextCNN, for comparison in feature extraction, we employed two other non-Transformer models, namely LSTM (He et al., 2016) and MLP-Mixer (Wang et al., 2017).
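A PyTorch sketch of such an encoder is given below, matching the kernel configuration stated in Section 4.2 (four kernels of size 5, two of size 3, 192 x 6 input channels, 192 output channels each, stride 1); the padding scheme and the final projection layer are our assumptions.

```python
import torch
import torch.nn as nn

class TextCNNEncoder(nn.Module):
    """Sketch of the TextCNN feature extractor: six Conv1d kernels whose
    max-pooled outputs are concatenated and projected to the embedding."""
    def __init__(self, in_ch=192 * 6, out_ch=192, emb=192):
        super().__init__()
        sizes = [5, 5, 5, 5, 3, 3]
        self.convs = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, k, stride=1, padding=k // 2) for k in sizes])
        self.fc = nn.Linear(out_ch * len(sizes), emb)

    def forward(self, x):            # x: (batch, S, 1152)
        x = x.transpose(1, 2)        # Conv1d expects (batch, channels, length)
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))
```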
LSTM (long short-term memory) is a type of recurrent neural network (RNN) that is well suited to handling and predicting important events in time series with long intervals and delays, thanks to its design. The application of LSTM to extracting the representation vector of assembly text has been reported in previous studies; its input is a two-dimensional vector, such as the input vector shown in Figure 1, and its output is a one-dimensional vector containing global information, serving as the representation vector of the assembly text. The calculation at the \(t\)-th time step of the LSTM is as follows:
\[\mathrm{i_{t}}=\sigma\left(\mathrm{W_{i}}\cdot\left[\mathrm{h_{t-1}},\mathrm{ x_{t}}\right]+\mathrm{b_{i}}\right) \tag{1}\]
\[\mathrm{f_{t}}=\sigma\left(\mathrm{W_{f}}\cdot\left[\mathrm{h_{t-1}},\mathrm{ x_{t}}\right]+\mathrm{b_{f}}\right) \tag{2}\]
\[\mathrm{\bar{C}_{t}}=\tanh\left(\mathrm{W_{C}}\cdot\left[\mathrm{h_{t-1}}, \mathrm{x_{t}}\right]+\mathrm{b_{C}}\right) \tag{3}\]
\[\mathrm{o_{t}}=\sigma\left(\mathrm{W_{o}}\cdot\left[\mathrm{h_{t-1}},\mathrm{ x_{t}}\right]+\mathrm{b_{o}}\right) \tag{4}\]
\[\mathrm{C_{t}}=\mathrm{f_{t}}*\mathrm{C_{t-1}}+\mathrm{i_{t}}*\tilde{\mathrm{C}_ {t}} \tag{5}\]
\[\mathrm{h_{t}}=\mathrm{o_{t}}*\tanh\left(\mathrm{C_{t}}\right) \tag{6}\]
The activation function sigmoid is denoted as \(\sigma\), and \(\tanh\) is also an activation function. \(\mathrm{x_{t}}\) represents the current input instruction vector, and \(\mathrm{W}\) and \(\mathrm{b}\) are learnable parameters. The output vector \(\mathrm{h_{t}}\) of the last time step serves as the representation vector of the assembly text.
The original MLP-Mixer consists of per-patch linear embeddings, Mixer layers, and a classifier head. In this paper, we remove the per-patch linear embeddings, since our input is not image data and our assembly text vectors can be fed directly into the Mixer layers. Each Mixer layer consists of one token-mixing MLP and one channel-mixing MLP, each containing fully-connected layers and a GELU nonlinearity. The token-mixing MLP mixes information across instructions, which extracts global information, while the channel-mixing MLP mixes the features within each instruction. Other components include skip-connections, dropout, and layer normalization. The structure of the MLP-Mixer used in this article is shown in Figure 3.
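A PyTorch sketch of one such Mixer layer, with the per-patch embedding removed, is shown below; the hidden width is an arbitrary illustrative choice, and dropout is omitted for brevity.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One Mixer layer: a token-mixing MLP across the S instructions, then a
    channel-mixing MLP across features, each with GELU, skip-connections and
    layer normalization. Requires a fixed sequence length seq_len."""
    def __init__(self, seq_len, dim, hidden=256):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.GELU(), nn.Linear(hidden, seq_len))
        self.chan_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                         # x: (batch, S, dim)
        y = self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + y                                 # mix across instructions
        return x + self.chan_mlp(self.norm2(x))  # mix across channels
```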
During the training process of the siamese network, a large number of positive and negative sample pairs are required. The label of the positive sample pair is 1, and the label of the negative sample pair is -1. The same TextCNN model is used to extract representation vectors for both functions in each sample pair, which are denoted as \(E_{1}\) and \(E_{2}\). After obtaining the representation vectors and labels, the cosine loss function is used to calculate the loss and update the network. The formula for the cosine loss function is as follows:
\[\min_{\theta}\mathcal{L}_{F}(\theta)=\frac{y+1}{2}\left(1-\cos\left(E_{1},E_{2}\right)\right)+\frac{1-y}{2}\max\left(0,\cos\left(E_{1},E_{2}\right)-\mathrm{margin}\right) \tag{7}\]
Figure 2. Extract text features using the TextCNN model with 6 one-dimensional convolutional kernels, a stride of 1, and an output channel of 192.
where \(\theta\) represents the parameters of the model, and margin is a hyper-parameter usually chosen between 0 and 1; we found that a relatively large margin achieved the best results, and in the experiments we set it to 0.9.
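A PyTorch sketch of this loss is given below, assuming (as in the reconstruction of Eq. (7) above) that the pull-together term is gated by y = +1 and the margin term by y = -1; under that convention it coincides with PyTorch's built-in CosineEmbeddingLoss.

```python
import torch
import torch.nn.functional as F

def cosine_pair_loss(e1, e2, y, margin=0.9):
    """Cosine embedding loss, Eq. (7): pull positive pairs (y = +1) toward
    cosine 1, push negative pairs (y = -1) below the margin."""
    cos = F.cosine_similarity(e1, e2, dim=-1)
    pos = (1.0 - cos) * (y + 1) / 2                          # active when y = +1
    neg = torch.clamp(cos - margin, min=0.0) * (1 - y) / 2   # active when y = -1
    return (pos + neg).mean()
```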
## 4. Experimental Setup
### Dataset
This study uses a publicly available large-scale binary dataset, BinaryCorp, which was first introduced in the jTrans (Kumar et al., 2017) paper. BinaryCorp consists of a large number of binary documents, including official ArchLinux software packages and Arch User Repository packages, and 48,130 binary programs compiled with gcc and g++ at different optimization levels, with approximately 26 million functions in total. Due to the large size of the BinaryCorp-26M dataset, jTrans extracted a subset from it called BinaryCorp-3M. Table 1 shows some statistics for these two datasets, with BinaryCorp-3M containing approximately 3.6 million functions. In this study, we used the training portion of BinaryCorp-3M as the training set for the entire model and tested the trained model on the test sets of both BinaryCorp-3M and BinaryCorp-26M. Note that the test sets used in this study are the same as those used in the jTrans article.
### Data Sampling and Parameter Configuration
In the BinaryCorp dataset, each original binary program is compiled at different optimization levels (O0, O1, O2, O3, Os) by the compiler, generating up to 5 functionally equivalent binary programs. We pair binary programs with the same functionality but different optimization levels to generate positive samples (filtering out positive samples with identical assembly text), and randomly sample R functionally different binary programs for each binary program to form negative samples. We found that performance is best when R is around 30. We used this method to sample the training set of BinaryCorp-3M, generating about 47.6 million negative samples and 2.45 million positive samples, a positive-to-negative ratio of approximately 1:19. These 50 million samples are used as the final training samples. The dimension of both the word embedding and the position embedding is 192. In TextCNN, we use four one-dimensional convolutional kernels with a size of 5 and two one-dimensional convolutional kernels with a size of 3. Each one-dimensional convolutional kernel has 192 x 6 input channels and 192 output channels, with a stride of 1. The learning rate is set to 0.001, each batch contains 384 samples, and the model is trained for one epoch.
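A simplified sketch of this pair construction is shown below; the data layout (`variants_by_func`) and the handling of functions with fewer than R negative candidates are illustrative assumptions.

```python
import itertools
import random

def make_pairs(variants_by_func, R=30):
    """variants_by_func: {function_id: [asm_text at O0, O1, ...]}.
    Positives pair optimization levels of the same function (identical texts
    filtered out); each function also receives up to R random negatives."""
    ids = list(variants_by_func)
    pos, neg = [], []
    for fid, variants in variants_by_func.items():
        for a, b in itertools.combinations(variants, 2):
            if a != b:                           # drop identical assembly texts
                pos.append((a, b, 1))
        for other in random.sample(ids, k=min(R, len(ids) - 1)):
            if other != fid:                     # functionally different program
                neg.append((variants[0], random.choice(variants_by_func[other]), -1))
    return pos, neg
```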
### Evaluation Metrics
The application scenario of a BCSD model is to search, among a large number of functions, for the function with the highest similarity to the input function. Researchers usually use the MRR and Recall@k metrics to evaluate the performance of such a model. In our experiments, we set a source function pool F from which the input functions are selected, and a target function pool G of the same size as F. The source function pool F and the target function pool G are defined as follows:
\[F=\{f_{1},f_{2},f_{3},\ldots,f_{i},\ldots,f_{n}\} \tag{8}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline Datasets & \# Projects & \# Binaries & \# Functions \\ \hline BinaryCorp-3M Train & 1,612 & 8,357 & 3,126 \\ \hline BinaryCorp-3M Test & 364 & 1,908 & 444,574 \\ \hline BinaryCorp-26M Train & 7,845 & 38,455 & 21,085,338 \\ \hline BinaryCorp-26M Test & 1,974 & 9,675 & 4,791,673 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics on the number of projects, binaries and functions of the datasets.
Figure 3. MLP-Mixer
\[G=\{g_{1},g_{2},g_{3},\ldots,g_{i},\ldots,g_{n}\} \tag{9}\]
where \(g_{i}\in\mathrm{G}\) and \(f_{i}\in\mathrm{F}\) have the same functionality but different optimization levels, so they form a positive sample pair. When calculating mean reciprocal rank (MRR) and Recall@k, the similarity between \(f_{i}\) and each function in \(\mathrm{G}\) is first calculated, and then the functions in \(\mathrm{G}\) are rearranged in descending order of similarity. \(\mathrm{Rank}_{g_{i}}\) denotes the position of \(g_{i}\) in the reordered list. The formulas for calculating MRR and Recall@k are as follows:
\[MRR=\frac{1}{|F|}\sum_{f_{i}\in F}\frac{1}{\mathrm{Rank}_{g_{i}}} \tag{10}\]
\[Recall@k=\frac{1}{|F|}\sum_{f_{i}\in F}I\left(\mathrm{Rank}_{g_{i}}-k\right) \tag{11}\]
\[I(x)=\left\{\begin{array}{l}0,\ x>0\\ 1,\ x\leq 0\end{array}\right. \tag{12}\]
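Given a similarity matrix whose diagonal holds the ground-truth matches, both metrics can be computed as in the sketch below (vectorized NumPy; the function name is ours).

```python
import numpy as np

def mrr_recall_at_k(sim, k=1):
    """sim[i, j]: similarity between source f_i and target g_j; the ground-truth
    match of f_i is g_i (the diagonal). Returns (MRR, Recall@k), Eqs. (10)-(12)."""
    n = sim.shape[0]
    diag = sim[np.arange(n), np.arange(n)]
    # 1-based rank of g_i among all targets (higher similarity = better rank)
    ranks = (sim > diag[:, None]).sum(axis=1) + 1
    return (1.0 / ranks).mean(), (ranks <= k).mean()
```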
## 5. Evaluation
To establish a standardized evaluation framework, we employed the publicly available code of jTrans to construct the testing datasets from BinaryCorp-3M and BinaryCorp-26M. Furthermore, we applied our pre-processing and tokenization techniques to prepare the functions in the testing datasets. Model training and inference were conducted on a hardware environment consisting of an Intel Xeon 10-core 2.20GHz CPU, 256GB RAM, and one Nvidia 1080Ti GPU.
### Binary Similarity Detection Performance
For the test sets of BinaryCorp-3M and BinaryCorp-26M, we adopted the same function pool sizes as jTrans (Kumar et al., 2017), namely 32 and 10000, respectively. Our experimental results can be seen in Tables 2-5 (except for FastBCSD, the performance data of the other models are taken from jTrans' experimental results). Table 2 reports the recall@1 scores of several BCSD models, including jTrans, on the BinaryCorp-3M and BinaryCorp-26M test sets. We noticed a minor error in the recall@1 score of jTrans reported in the original table, which did not significantly affect the results; we recalculated the value based on the provided numerical information and obtained a revised score of 0.538. Our FastBCSD-TextCNN method achieves performance similar to jTrans and significantly outperforms the other BCSD models. For the test task with a function pool size of 32, which is easier but differs significantly from real-world scenarios, the proposed FastBCSD-TextCNN achieves an MRR value slightly lower than that of jTrans, with a difference of 0.01 to 0.02. However, when the function pool size is 10000, which is closer to real-world scenarios, FastBCSD-TextCNN outperforms the best-performing jTrans model on BinaryCorp-3M by 0.01 in MRR and 0.012 in recall@1. When evaluating on the larger BinaryCorp-26M dataset, FastBCSD-TextCNN achieved an MRR value 0.02 lower and a recall@1 value 0.02 lower than the jTrans model. This performance gap may be attributed to the fact that we only used the training set of BinaryCorp-3M and did not utilize the training set of BinaryCorp-26M due to its large size. Based on the experimental findings, the performance of FastBCSD-TextCNN is similar to that of jTrans, and it shows significant improvement over the other baseline models in terms of MRR and recall values. In particular, when the function pool size is 10000, the MRR value of FastBCSD-TextCNN is 0.31 higher than that of SAFE, a model built with the Siamese network framework and a bidirectional LSTM. From the efficiency results, it can be observed that the parameter size of FastBCSD-TextCNN is only 15% of that of jTrans, and its inference time is only 1/5 of jTrans'. Meanwhile, the inference time of FastBCSD-MLP is even lower, about 1/7 of jTrans'. In contrast, the slowest model is FastBCSD-LSTM, due to its recurrent neural network architecture.
The performance comparison of the different FastBCSD variants is presented in Tables 2-3. The results demonstrate that FastBCSD-TextCNN performs best, while the MLP-Mixer-based FastBCSD-MLP exhibits the worst performance. Nevertheless, even the worst-performing variant, FastBCSD-MLP, is not significantly behind jTrans. Table 3 shows that FastBCSD-MLP achieves an MRR score only 0.033 lower than jTrans, while outperforming the SAFE model by 0.27. The SAFE model is based on a bidirectional LSTM within a Siamese network framework, which is similar to the model used in FastBCSD-LSTM. However, the MRR score of FastBCSD-LSTM is significantly higher than that of SAFE, indicating that our dynamic instruction scheme adapts effectively to smaller models.
### Reflection on experimental results
One important factor contributing to the success of small models in our study is the ability to construct a large amount of supervised data in batches for the BCSD task. This allows a substantial volume of data to be generated with a relatively uniform distribution. The proposed approach, which trains small models on a large amount of supervised data, can achieve performance comparable to the pre-training-and-fine-tuning approach that uses large models. It should be noted that not all small-model approaches can achieve results comparable to pre-trained models when trained on a large amount of supervised data, as they may not take the impact of data volume on performance into account. Many studies rely on small-scale training data, which is susceptible to overfitting; the present study uses the largest open dataset available, which helps to avoid this problem. The popularity of pre-training approaches in NLP is well known, mainly because most NLP tasks are not as well-defined as the BCSD task, and acquiring a sufficient amount of supervised data for NLP tasks is often difficult.
## 6. Discussion
In this article, our research focuses solely on the x86 instruction set, but FastBCSD could be applied to other assembly languages such as ARM and MIPS. However, we have not conducted experiments on cross-platform binary function recognition in this study. This task is more challenging, since the representation of the same function varies significantly across instruction sets, and there are additional issues such as semantic alignment between different assembly languages. In the future, we plan to modify FastBCSD to adapt it to cross-platform binary function recognition.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multicolumn{13}{c}{MRR} & \multicolumn{13}{c}{Recall@1} \\ \hline Models & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,Os & O2,Os & Average & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,O3 & O2,Os & Average \\ \hline Gemini & 0.037 & 0.161 & 0.416 & 0.049 & 0.133 & 0.195 & 0.165 & 0.024 & 0.122 & 0.367 & 0.030 & 0.099 & 0.151 & 0.132 \\ SAFE & 0.127 & 0.345 & 0.643 & 0.147 & 0.321 & 0.377 & 0.320 & 0.068 & 0.247 & 0.575 & 0.079 & 0.221 & 0.283 & 0.246 \\ Asm2Vec & 0.072 & 0.449 & 0.669 & 0.083 & 0.409 & 0.510 & 0.366 & 0.046 & 0.367 & 0.589 & 0.052 & 0.332 & 0.426 & 0.302 \\ GraphEmb & 0.087 & 0.217 & 0.486 & 0.110 & 0.195 & 0.222 & 0.219 & 0.050 & 0.154 & 0.447 & 0.063 & 0.135 & 0.166 & 0.169 \\ OrderMatters & 0.062 & 0.319 & 0.600 & 0.075 & 0.260 & 0.233 & 0.263 & 0.040 & 0.248 & 0.535 & 0.040 & 0.178 & 0.158 & 0.200 \\ Genus & 0.041 & 0.193 & 0.596 & 0.049 & 0.186 & 0.224 & 0.214 & 0.028 & 0.153 & 0.538 & 0.032 & 0.146 & 0.180 & 0.179 \\ JTrans & 0.475 & 0.663 & 0.731 & 0.539 & 0.665 & 0.664 & 0.623 & 0.376 & 0.580 & 0.661 & 0.443 & 0.586 & 0.585 & 0.538 \\ \hline
**FastBCSD-TextCNN** & **0.485** & **0.662** & **0.742** & **0.558** & **0.681** & **0.679** & **0.633** & **0.389** & **0.577** & **0.675** & **0.461** & **0.599** & **0.600** & **0.550** \\ \hline
**FastBCSD-LSTM** & **0.437** & **0.645** & **0.727** & **0.530** & **0.667** & **0.653** & **0.610** & **0.349** & **0.563** & **0.659** & **0.441** & **0.585** & **0.573** & **0.528** \\ \hline
**FastBCSD-MLP** & **0.398** & **0.638** & **0.721** & **0.476** & **0.660** & **0.652** & **0.590** & **0.309** & **0.557** & **0.652** & **0.387** & **0.582** & **0.574** & **0.510** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Results of different binary similarity detection methods on BinaryCorp-26M (Poolsize-32).It should be noted that the FastBCSD model uses the training set from BinaryCorp-3M, while the other models in the table employ BinaryCorp-26M, which is a superset of BinaryCorp-3M.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \multicolumn{13}{c}{MRR} & \multicolumn{13}{c}{Recall@1} \\ \hline Models & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,Os & O2,Os & Average & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,O3 & O2,Os & Average \\ \hline Gemini & 0.388 & 0.580 & 0.750 & 0.455 & 0.546 & 0.614 & 0.556 & 0.238 & 0.457 & 0.669 & 0.302 & 0.414 & 0.450 & 0.422 \\ SAFE & 0.826 & 0.917 & 0.958 & 0.854 & 0.927 & 0.927 & 0.902 & 0.729 & 0.869 & 0.933 & 0.766 & 0.879 & 0.880 & 0.843 \\ Asm2Vec & 0.479 & 0.878 & 0.961 & 0.536 & 0.855 & 0.900 & 0.768 & 0.351 & 0.828 & 0.942 & 0.408 & 0.796 & 0.863 & 0.701 \\ GraphEmb & 0.602 & 0.694 & 0.750 & 0.632 & 0.674 & 0.675 & 0.671 & 0.485 & 0.600 & 0.678 & 0.521 & 0.581 & 0.584 & 0.575 \\ OrderMatters-online & 0.542 & 0.740 & 0.869 & 0.638 & 0.702 & 0.682 & 0.695 & 0.414 & 0.647 & 0.822 & 0.515 & 0.611 & 0.593 & 0.591 \\ OrderMatters & 0.601 & 0.838 & 0.933 & 0.701 & 0.812 & 0.800 & 0.777 & 0.450 & 0.763 & 0.905 & 0.566 & 0.724 & 0.715 & 0.687 \\ Genius & 0.377 & 0.587 & 0.868 & 0.437 & 0.600 & 0.627 & 0.583 & 0.243 & 0.479 & 0.830 & 0.298 & 0.490 & 0.526 & 0.478 \\ JTrans & 0.947 & 0.976 & 0.985 & 0.956 & 0.979 & 0.977 & 0.970 & 0.913 & 0.960 & 0.974 & 0.927 & 0.964 & 0.961 & 0.949 \\ \hline
**FastBCSD-TextCNN** & **0.931** & **0.971** & **0.981** & **0.945** & **0.976** & **0.970** & **0.962** & **0.894** & **0.953** & **0.968** & **0.915** & **0.960** & **0.951** & **0.940** \\ \hline
**FastBCSD-LSTM** & **0.909** & **0.964** & **0.977** & **0.932** & **0.971** & **0.961** & **0.952** & **0.864** & **0.943** & **0.963** & **0.899** & **0.954** & **0.941** & **0.927** \\ \hline
**FastBCSD-MLP** & **0.900** & **0.964** & **0.978** & **0.921** & **0.969** & **0.961** & **0.948** & **0.851** & **0.943** & **0.963** & **0.883** & **0.950** & **0.940** & **0.921** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Results of different binary similarity detection methods on BinaryCorp-3M (Poolsize-32)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multicolumn{13}{c}{MRR} & \multicolumn{13}{c}{Recall@1} \\ \hline Models & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,Os & O2,Os & Average & O0,O3 & O1,O3 & O2,O3 & O0,Oos & O1,O3 & O2,Os & Average \\ \hline Gemini & 0.072 & 0.189 & 0.474 & 0.069 & 0.147 & 0.202 & 0.192 & 0.058 & 0.148 & 0.420 & 0.051 & 0.115 & 0.162 & 0.159 \\ SAFE & 0.198 & 0.415 & 0.696 & 0.197 & 0.377 & 0.431 & 0.386 & 0.135 & 0.314 & 0.634 & 0.127 & 0.279 & 0.343 & 0.305 \\ Asm2Vec & 0.118 & 0.443 &
## 7. Conclusion
In this paper, we propose FastBCSD, a lightweight neural network combined with a novel dynamic instruction vector encoding method that takes only assembly code as input, to address some limitations of current methods for the BCSD task, such as the significant computational resources required to train pre-trained models and the information loss caused by token normalization. Experimental results show that FastBCSD achieves performance similar to the state-of-the-art pre-training model, but with significantly fewer parameters and a faster inference speed.
|
2307.11339 | Chrion: Optimizing Recurrent Neural Network Inference by Collaboratively
Utilizing CPUs and GPUs | Deploying deep learning models in cloud clusters provides efficient and
prompt inference services to accommodate the widespread application of deep
learning. These clusters are usually equipped with host CPUs and accelerators
with distinct responsibilities to handle serving requests, i.e. generalpurpose
CPUs for input preprocessing and domain-specific GPUs for forward computation.
Recurrent neural networks play an essential role in handling temporal inputs
and display distinctive computation characteristics because of their high
inter-operator parallelism. Hence, we propose Chrion to optimize recurrent
neural network inference by collaboratively utilizing CPUs and GPUs. We
formulate the model deployment in the CPU-GPU cluster as an NP-hard scheduling
problem of directed acyclic graphs on heterogeneous devices. Given an input
model in the ONNX format and user-defined SLO requirement, Chrion firstly
preprocesses the model by model parsing and profiling, and then partitions the
graph to select execution devices for each operator. When an online request
arrives, Chrion performs forward computation according to the graph partition
by executing the operators on the CPU and GPU in parallel. Our experimental
results show that the execution time can be reduced by 19.4% at most in the
latency-optimal pattern and GPU memory footprint by 67.5% in the memory-optimal
pattern compared with the execution on the GPU. | Zinuo Cai, Hao Wang, Tao Song, Yang Hua, Ruhui Ma, Haibing Guan | 2023-07-21T04:09:28Z | http://arxiv.org/abs/2307.11339v1 | # _Chrion_: Optimizing Recurrent Neural Network Inference by Collaboratively Utilizing CPUs and GPUs
###### Abstract
Deploying deep learning models in cloud clusters provides efficient and prompt inference services to accommodate the widespread application of deep learning. These clusters are usually equipped with host CPUs and accelerators with distinct responsibilities to handle serving requests, _i.e._ general-purpose CPUs for input preprocessing and domain-specific GPUs for forward computation. Recurrent neural networks play an essential role in handling temporal inputs and display distinctive computation characteristics because of their high inter-operator parallelism. Hence, we propose _Chrion_ to optimize recurrent neural network inference by collaboratively utilizing CPUs and GPUs. We formulate the model deployment in the CPU-GPU cluster as an NP-hard scheduling problem of directed acyclic graphs on heterogeneous devices. Given an input model in the ONNX format and user-defined SLO requirement, _Chrion_ firstly preprocesses the model by model parsing and profiling, and then partitions the graph to select execution devices for each operator. When an online request arrives, _Chrion_ performs forward computation according to the graph partition by executing the operators on the CPU and GPU in parallel. Our experimental results show that the execution time can be reduced by 19.4% at most in the latency-optimal pattern and GPU memory footprint by 67.5% in the memory-optimal pattern compared with the execution on the GPU.
## 1 Introduction
The prosperous development of deep learning (DL) has brought more and more DL models into cloud computing clusters to provide inference services to users. For example, Microsoft's Deep-Learning-Inference-Service (DLIS) [42] serves thousands of machine learning models worldwide and processes three million inference calls per second. Facebook [27] handles trillions of serving requests to provide user-interactive services, like recommendation and advertising. The enormous number of concurrent requests puts forward demanding requirements for commercial companies building the infrastructure of serving systems. Numerous researchers and engineers have proposed novel techniques to build high-performance deep learning serving systems [11, 25, 26, 49].
Most current research on serving systems focuses on designing general algorithms to improve the quality of inference services and meet the users' Service Level Objective (SLO). Batch processing [11, 25, 46] is commonly adopted when handling requests to improve the system's throughput and reduce the average execution time. Model compression [13, 20, 51] is proposed to minimize model size and accelerate inference speed. Besides, recent research considers how to reasonably allocate resources for inference workloads [38, 44] to optimize the system's resource utilization.
However, existing works have ignored two key opportunities to build a high-performance and cost-efficient model serving framework. First, modern inference servers are equipped with general-purpose computing devices like CPUs and domain-specific computing devices like GPUs. CPUs are usually used for input preprocessing, and GPUs for forward computation of the models [52] when handling deep learning inference workloads. The increasing complexity of deep learning models and the large number of inference requests result in higher GPU memory requirements and lower CPU resource utilization. Hence, it is natural to consider collaboratively utilizing CPUs and GPUs for forward computation to improve system throughput and reduce inference latency, instead of leaving CPUs idle and GPUs confined by limited memory capacity.
Second, the increasing complexity of deep learning models brings more potential for inter-operator parallelism during execution. Multiple branches are implemented in each inception block of GoogLeNet [43], one of the most classical convolutional neural networks. For recurrent neural networks (RNNs), there are also opportunities for parallelism after unfolding along the number of layers and the sequence length. Graph neural networks can also be executed in parallel due to their graph structure. We find that RNNs are the most suitable for hybrid execution on CPUs and GPUs because of their computation characteristics. However, current research on model parallelism strategies mainly focuses on homogeneous
devices [21, 30], ignoring the potential of parallelism across general-purpose and domain-specific devices.
Therefore, we design _Chrion_, an inference framework designed for RNN models by collaboratively utilizing CPUs and GPUs for forward computation. The basic idea of _Chrion_ is to exploit hardware capacity in the CPU-GPU environment, and its core component is a graph partition algorithm used to schedule RNNs' operators to heterogeneous platforms for parallel execution. The system's inputs are a pre-trained RNN model in the ONNX format and the SLO requirement defined by the user. _Chrion_ first pre-processes the model, mapping the computation graph into a directed acyclic graph (DAG) and obtaining the execution time of operators both on CPUs and GPUs. The graph partition module then sorts the operators in the directed acyclic graph and selects suitable execution providers for each operator. Finally, when users' inference requests arrive, _Chrion_ will execute on CPUs and GPUs collaboratively according to the partition scheme and respond to the requests.
To confirm the effectiveness of our design, we conduct extensive experiments to evaluate our framework. We use the LSTM model as our baseline, the most widely used RNN model. By changing the parameters of the LSTM model, including the number of layers, input/output dimension, sequence length, and batch size, we generate more LSTM variants. Our experimental results show that, in the end-to-end experiment, the execution time of the model can be reduced by 19.4% at most in the Latency-Optimal pattern compared with the execution on the GPU. By relaxing the latency limit of the model, our framework can reduce the GPU memory requirement of the model by 67.5%. _Chrion_ can reduce model swapping and SLO violation rate in the local cluster evaluation.
Our main contributions are as follows:
* We identify the potential of parallel execution in the CPU-GPU environment for model inference. To our knowledge, we are the first to support inter-operator parallelism across heterogeneous platforms.
* We design _Chrion_, an optimized model serving framework for recurrent neural networks by collaboratively utilizing CPUs and GPUs to resolve GPU memory bottlenecks and improve CPU utilization.
* We design an adaptive graph partition algorithm to select execution platforms for operators. Our algorithm is not designed for specific models, so it can be extended to more complex models with little effort.
* We conduct extensive experiments on LSTM and its variants to confirm _Chrion_'s high performance. Experimental results show that inter-operator parallelism in the CPU-GPU environment benefits RNN inference.
## 2 Background and Motivation
### Serving Deep Learning Models
Since deep learning plays an essential role in all walks of life, Inference-as-a-Service has come forth. To build an efficient model serving system, researchers and engineers pursue optimization from the following two directions. Since different DL models have different structures and characteristics, various techniques are proposed to optimize serving systems for specific model structures, like CNNs [37, 45], RNNs [23, 28, 50], Transformers [22, 47, 54] and GNNs [35, 39]. Others optimize the inference framework from the system perspective. Morphling [44] provides an automatic resource configuration method for cloud-native model serving. Clockwork [25] serves as a distributed serving system built on the observation of predictable model execution latency. MArk [49] and Batch [11] leverage emerging cloud computing techniques to optimize inference efficiency.
### Observations and Opportunities
**Observation I: GPU Memory Bottleneck in Deep Learning Clusters.** GPU memory becomes the bottleneck when serving thousands of inference requests in deep learning clusters. Its consumption includes three parts when serving inference workloads: input/output tensors, weight tensors, and ephemeral tensors [24, 25]. With the number of model parameters spiking from millions [32, 34] to billions [15], the memory requirement to load a whole model increases from MBs to GBs. Ephemeral tensors are generated when invoking CUDA APIs, and their GPU memory requirements also burst with the increasing number of GPU kernels. Therefore, serving inference workloads with GPUs imposes intense pressure on GPU memory capacity, and Gao _et al._ conclude that 8.8% of job failures in a deep learning cluster are caused by "Out Of Memory" [24].
However, GPU memory capacity is limited compared with the increasing requirements of deep learning models. Table 1 compares the memory capacity of mainstream NVIDIA GPUs with the memory consumption of popular deep learning models when serving requests with a batch size equal to four. We observe that only six VGG19 models can be loaded simultaneously in the memory of an RTX 2080 Ti. In contrast, a model serving cluster often serves hundreds to thousands of models [42].
| **GPU** | **Capacity** | **Model** | **Requirement** |
| --- | --- | --- | --- |
| RTX 2080 Ti | 11 GB | VGG19 [41] | 1762 MB |
| RTX 3090 Ti | 24 GB | YOLOv5 [8] | 1444 MB |
| Grid A100 | 48 GB | ResNeXt50 [31] | 1218 MB |

Table 1: GPU Memory Capacity _vs._ Inference Requirements.
Therefore, when an inference request requires a model that is not in GPU memory, we must first load the model from main memory (which is much larger than GPU memory and holds all the models) and then perform forward computation. Due to the limited GPU memory, model loading is triggered frequently, significantly reducing serving throughput and leading to violations of per-request deadlines. GPU memory is becoming even more critical to serving performance as deep learning models get larger [25].
**Takeaway I**
_Limited GPU memory capacity cannot satisfy the increasing demands of inference workloads, degrading the throughput of cluster servers._
**Observation II: Under-utilization of CPU Resources in the Cloud.** The CPU utilization of deep learning clusters is typically low [19, 29]. We further verify this observation by analyzing the CPU utilization of modern cloud clusters in two open-source datasets, Philly Traces [5] and the Azure Public Dataset [2]. The Philly Traces dataset collects information on cluster machines and task execution for Microsoft's machine learning platform over four months in 2017, while the Azure dataset collects task information for the cloud during 2019 and 2020. The two curves in Figure 1 reflect the typical characteristics of CPU utilization in commercial clusters, whether or not the workloads are deep learning. About 80% of CPUs in both Philly Traces and the Azure Public Dataset face a utilization rate below 30%, while about 40% of CPUs in the Azure Public Dataset are below 40%.
However, we observe CPU over-provisioning in commercial inference servers, which may aggravate the challenge of low CPU utilization. Amazon EC2 G4 Instances [1] are claimed to be the industry's most cost-effective GPU instances for machine learning inference. We list resource configuration details of G4 Instances in Table 2 to illustrate the CPU over-provisioning status on commercial cloud platforms. The five configurations in Table 2 are each equipped with a single GPU and multiple vCPUs. When handling inference requests, only one CPU thread is used to launch kernel functions if we only use the GPU for forward computation, leaving most cores idle.
**Takeaway II**
_The waste of CPU resources in cloud clusters is severe, but they can become a supplement to the computing power on heterogeneous platforms._
**Opportunity: Utilizing CPUs to Optimize RNN Inference.** While GPUs are the primary choice for performing forward computation of deep learning models, CPUs are coming to the fore for some specific model structures. Le _et al._ compare the speedups of an NVIDIA K80 GPU against an Intel 20-core CPU for diverse deep learning models and reveal that CPUs and GPUs have similar performance when processing bi-directional LSTM models [33]. We experiment with an 8-layer LSTM model, and Figures 2 and 3 show the overall model execution time and the fine-grained operator execution time, respectively.
At the model level, we observe how execution time changes with the number of CPU cores. In Figure 2, as the number of CPU cores increases, the execution time of the model gradually decreases, approaching its execution time on the GPU. However, the execution time fluctuates once the number of CPU cores exceeds five, due to the limited degree of parallelism in the model structure and contention for shared resources such as memory bandwidth and the last-level cache (LLC). We also analyzed the execution time of individual operators on the CPU and GPU, with results shown in Figure 3. We find that not every operator executes more efficiently on the GPU than on the CPU. Moreover, the multi-core parallelism of the CPU can make up for the low execution efficiency of some operators, as shown in Figure 4.
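The measurement behind Figure 2 is straightforward to reproduce. The following is a minimal sketch assuming PyTorch, with an illustrative 8-layer LSTM whose shapes are placeholders rather than the exact profiled model; it caps intra-operator CPU parallelism with `torch.set_num_threads` and times the forward pass:

```
import time
import torch

# Illustrative 8-layer LSTM; the shapes are placeholders, not the profiled model.
model = torch.nn.LSTM(input_size=64, hidden_size=64, num_layers=8).eval()
x = torch.randn(96, 8, 64)  # (seq_len, batch, input_size)

for cores in [1, 2, 4, 6, 8, 12]:
    torch.set_num_threads(cores)  # cap intra-operator CPU parallelism
    with torch.no_grad():
        model(x)                  # warm-up run
        start = time.perf_counter()
        for _ in range(20):
            model(x)
    print(f"{cores} cores: {(time.perf_counter() - start) / 20 * 1e3:.1f} ms")
```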
**Takeaway III**
_The multi-core parallelism of CPUs and the model structure of RNNs can be well-matched to optimize the inference performance of RNNs._
### Implications
**Takeaway I** and **Takeaway II** imply that CPUs can handle inference workloads in addition to GPUs in a deep learning cluster. Involving the CPU in forward computation, instead of handling all workloads with the GPU, can both alleviate GPU memory pressure and improve CPU utilization. Besides, compared with the limited GPU memory, the main memory in a deep learning cluster is ample, removing the OOM risk for the serving system. Although the multi-core parallelism of the CPU effectively improves its task execution efficiency, there is still a performance gap between the CPU and the domain-specific design of the GPU. Therefore, not all deep learning models are well suited to hybrid parallelism across heterogeneous platforms for optimizing the GPU memory footprint. **Takeaway III** indicates that RNNs are a suitable choice.
We demonstrate how to schedule the operators of a computational graph to heterogeneous platforms for efficient execution in Figure 5. Suppose an inference server has one GPU and a 4-core CPU, with data copies between the CPU and the GPU going through PCIe. We need to schedule graph \(\mathbf{G}\) with seven computing nodes in this heterogeneous environment. The most common approach is to schedule all operators on the GPU. In the GPU sequential version, we assume that the model weights have already been transferred to GPU memory. After the inputs are copied to the GPU, kernel functions are launched from the CPU and executed on the GPU sequentially. Finally, the outputs are transferred back to main memory to answer users' requests within their SLO requirements. Another method is to schedule the graph on the CPU when GPU memory is occupied. Unlike the sequential execution pattern, multi-branch models can utilize inter-operator parallelism to accelerate inference. For instance, we schedule \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\), \(\mathbf{D}\) to different CPU cores in the first stage. Due to the data dependencies of the computational graph, operator \(\mathbf{G}\) cannot be executed until the completion of \(\mathbf{E}\) and \(\mathbf{F}\).
Since neither the CPU nor the GPU is fully utilized in either execution pattern, we propose a hybrid execution pattern for multi-branch models in the heterogeneous environment. We schedule operators \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\), and \(\mathbf{E}\) to be executed on the CPU, and the others on the GPU. Since the four kernels from \(\mathbf{A}\) to \(\mathbf{D}\) are
Figure 4: Folded and unfolded RNNs. All RNN cells are executed one by one when scheduled to a single GPU, but they can exploit the multi-core parallelism of CPUs through inter-operator parallelism of data-independent kernels: \(A_{0}^{2}\) and \(A_{1}^{1}\) can execute in parallel by scheduling them to different cores after the completion of \(A_{0}^{1}\).
Figure 5: Partition scheme for a seven-node computational graph. (a) and (b) illustrate the model structure and partition scheme, respectively. Four kernels are scheduled on GPU while the other three on CPU. (c) shows hybrid parallelism across CPUs and GPUs can minimize GPU memory requirement within SLO.
data-independent, they can run in parallel across heterogeneous platforms. Operator \(\mathbf{E}\) is scheduled on the CPU to minimize data movement latency and save GPU time for operator \(\mathbf{F}\). We conclude that, after computational graph partition and hybrid execution, our method can satisfy the users' SLO requirements while saving GPU memory.
### Challenges
The first challenge is efficiently partitioning a graph at fine granularity. Graph partitioning is common in machine learning scenarios. For instance, different graph partition algorithms have been proposed for specific targets, such as overcoming memory bottlenecks in serverless scenarios [48] or reducing context-switch overhead through pipelining [12]. However, existing partition algorithms are designed either for homogeneous environments or for models with a simple structure of sequentially stacked deep learning operators. Layer grouping simplifies the problem formulation but is unsuitable for much more complicated multi-branch models. To the best of our knowledge, _Chrion_ is the first to provide fine-grained graph partitioning for deep learning models that schedules operators in a heterogeneous environment.
The second challenge is that the search space of the graph partition is vast, due to the considerable number of model operators and the variable hardware performance under different circumstances. The number of feasible scheduling schemes grows exponentially with the number of model operators, making it intractable to enumerate all feasible schemes and pick the best one. Besides, the selection of CPU cores also impacts performance. Figure 6 shows that the average latency of each LSTM cell increases when we use more CPUs for higher parallelism, which results from shared resource contention, including memory bandwidth and the LLC.
## 3 Problem Formulation
**Problem Description.** We map the computational graph of a machine learning model to a directed acyclic graph \(G=(V,E)\). The vertex set \(V=\{v_{0},v_{1},\cdots,v_{n-1}\}\) represents the \(n\) kernel functions in the computational graph. The edge set \(E=\{e_{ij}|0\leq i\neq j<n\}\) represents the execution dependencies between kernel functions, where \(e_{ij}\) indicates that function \(v_{j}\) may be launched only after the execution of function \(v_{i}\). Based on this mapping, we define the predecessor and successor sets of each vertex \(v_{i}\): \(\text{pred}(v_{i})=\{v_{j}|e_{ji}\in E,j\in[0,n)\}\) and \(\text{succ}(v_{i})=\{v_{j}|e_{ij}\in E,j\in[0,n)\}\). A vertex with an empty predecessor set is an entry function of the graph, while a vertex with an empty successor set is an exit function. Note that neither entry nor exit functions need to be unique in the computational graph.
We denote available processors as \(P=\{p_{0},p_{1},\cdots,p_{k}\}\), where \(p_{0}\) represents a GPU, and there are \(k\) available CPU cores in the heterogeneous environment. Although we assume that all the \(k\) CPU cores have the same computation capacity, they have diverse performance when different numbers of CPU cores are in use. We define a performance weight matrix \(W\in\mathbb{R}^{n\times(k+1)}\) to record the execution latency. \(W_{i,0}\) denotes how long to execute function \(v_{i}\) on GPU, while \(W_{i,j}(j\neq 0)\) denotes how long to execute function \(v_{i}\) on CPU when \(j\) CPU cores are in use. Since the communication cost between CPU and GPU is not negligible in the heterogeneous environment, we define \(C\in\mathbb{R}^{n\times n}\) as the communication data size and \(b\) as the bandwidth between CPU and GPU. Note that \(C_{i,j}=0\) if \(e_{ij}\notin E\) and we ignore the communication cost between \(v_{i}\) and \(v_{j}\) if they are scheduled on the same device since the memory bandwidth is much larger than PCIe bandwidth between heterogeneous platforms. \(M\) is a memory consumption matrix whose dimension is \(n\times 4\). \(M_{i,0},M_{i,1},M_{i,2},M_{i,3}\) denote the GPU memory requirements of input tensors, output tensors, ephemeral tensors and model weights, respectively, if \(v_{i}\) is scheduled on GPU.
**Objective & Constraints.** Our problem considers how to generate a partition plan to schedule the computational graph \(G\) in a CPU-GPU heterogeneous environment. The expected output should minimize GPU memory consumption while guaranteeing inference latency. We formulate the objective as Equation 5, where \(\alpha\) is a hyper-parameter to trade off inference delay against memory consumption. The execution latency \(L\) of the computational graph equals the maximum actual finish time over all exit functions, since the graph may have multiple exits:
\[L=\max_{\text{succ}(v_{i})=\varnothing}\{\text{AFT}(v_{i})\} \tag{1}\]
Equation 2 and Equation 3 illustrate how to compute \(\text{EST}(v_{i},p_{j})\) and \(\text{EFT}(v_{i},p_{j})\) if the function \(v_{i}\) is scheduled on \(p_{j}\).
Figure 6: Cell Latency.
\(\mathrm{EFT}(v_{i},p_{j})\) is equal to the sum of \(\mathrm{EST}(v_{i},p_{j})\) and the execution time of \(v_{i}\) on processor \(p_{j}\). If the GPU is selected, the execution time equals \(W_{i,0}\); otherwise it equals \(W_{i,k^{\prime}}\), since we assume the CPU capacity is bottlenecked by the number \(k^{\prime}\) of running CPU cores in the heterogeneous environment.
\[\mathrm{EST}\left(v_{i},p_{j}\right)=\max\left\{\mathrm{avail}[j],\max_{v_{m} \in\mathrm{pred}(v_{i})}\left(\mathrm{AFT}\left(v_{m}\right)+\frac{C_{m,i}}{b }\right)\right\} \tag{2}\]
\[\mathrm{EFT}\left(v_{i},p_{j}\right)=\begin{cases}W_{i,0}+\mathrm{EST}\left(v _{i},p_{j}\right),\;\mathrm{if}\;j=0\\ W_{i,k^{\prime}}+\mathrm{EST}\left(v_{i},p_{j}\right),\;\mathrm{if}\;j\neq 0 \end{cases} \tag{3}\]
We formulate the memory consumption of the computational graph as:
\[M=\sum_{v_{i}\in V}\mathbb{1}_{\{s_{i}=0\}}\cdot\left(M_{i,1}+M_{i,2}+M_{i,3}+\sum_{j\in\mathrm{pred}(i),\,s_{j}\neq 0}M_{j,1}\right) \tag{4}\]
The partition plan consists of three components: an execution order \(O\), a device selection \(S\), and the number of CPU cores \(k^{*}\). Constraints 6 and 7 ensure that the execution order \(O=\{o_{0},o_{1},\cdots,o_{n-1}\}\) is a permutation of the integers from \(0\) to \(n-1\). Constraint 8 requires the execution order to satisfy a topological sorting, since a child node can only be executed after its predecessors. The device selection \(S=\{s_{0},s_{1},\cdots,s_{n-1}\}\) denotes whether to launch each kernel function on the GPU or the CPU; if \(s_{i}\) equals \(0\), the vertex \(v_{i}\) is scheduled on the GPU (Constraint 9). The selected number of CPU cores \(k^{*}\) is an integer between \(0\) and \(k\) (Constraint 10). If \(k^{*}\) is set to \(0\), the computational graph is scheduled only on the GPU, without optimization of GPU memory consumption.
In all, our problem can be formulated as the optimization problem below:
**minimize**: \[L+\alpha M\] (5)
**subject to**: \[o_{i}\in[0,n), \forall i\in[0,n)\] (6) \[o_{i}\neq o_{j}, \forall i,j\in[0,n)\;\mathrm{and}\;i\neq j\] (7) \[o_{i}<o_{j}, v_{j}\in\mathrm{succ}(v_{i})\] (8) \[s_{i}\in\{0,1\}, \forall i\in[0,n)\] (9) \[0\leq k^{*}\leq k\] (10) \[\mathrm{EST}\left(v_{i},p_{j}\right)=0, \mathrm{pred}(v_{i})=\varnothing\] (11)
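As a sanity check, the objective in Equation 5 can be evaluated directly for any candidate plan \((O,S,k^{*})\). The sketch below is a simplified rendering that collapses all selected CPU cores into a single availability clock; it computes \(L\) via Equations 1-3 and \(M\) via Equation 4, with all profiler inputs (\(W\), \(C\), \(M\), \(b\)) assumed given:

```
def evaluate_plan(order, S, k_star, pred, W, C, Mem, bandwidth, alpha):
    """Compute L + alpha * M (Equation 5) for a candidate plan.

    order: topological execution order O (list of vertex indices)
    S:     S[i] = 0 schedules v_i on the GPU, 1 on the CPU
    W:     W[i][0] = GPU latency of v_i, W[i][k] = CPU latency with k cores
    C:     C[m][i] = data size moved along edge (v_m, v_i), 0 if no edge
    Mem:   Mem[i] = (input, output, ephemeral, weights) GPU footprint of v_i
    """
    avail = {0: 0.0, 1: 0.0}  # simplification: one availability clock per side
    aft = {}
    for i in order:
        dev = S[i]
        est = avail[dev]  # Equation 2: device availability vs. predecessor outputs
        for m in pred[i]:
            comm = C[m][i] / bandwidth if S[m] != dev else 0.0  # PCIe only across devices
            est = max(est, aft[m] + comm)
        aft[i] = est + (W[i][0] if dev == 0 else W[i][k_star])  # Equation 3
        avail[dev] = aft[i]
    L = max(aft.values())  # Equation 1: some exit function always finishes last
    M = sum(               # Equation 4: tensors resident in GPU memory
        Mem[i][1] + Mem[i][2] + Mem[i][3]
        + sum(Mem[j][1] for j in pred[i] if S[j] != 0)
        for i in order
        if S[i] == 0
    )
    return L + alpha * M
```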
## 4 System Design
### System Architecture
**Design Goal.** _Chrion_'s design satisfies three requirements of a machine learning serving system. First, _Chrion_ is portable for machine learning developers because its inputs are only pre-trained models and user-defined SLO requirements. It does not require developers to spend extra effort transforming their models into a specific format, since ONNX is supported by the most popular deep learning frameworks, including TensorFlow [7], PyTorch [6] and MXNet [4]. Second, _Chrion_ is lightweight, satisfying the tight latency requirements of inference requests. Existing methods resort to complex algorithms, like meta-learning [44] or LSTMs [49], to decide configurations for serving machine learning inference workloads, which delays the prompt response to users' requests. _Chrion_ instead uses a classical graph traversal algorithm for topological sorting and a greedy algorithm for device selection, both far more lightweight than learning-based methods. Finally, _Chrion_ provides a universal schedule plan for computational graph partitioning that is independent of model structure. _Chrion_ is not designed for one specific model but can be extended to much more complex multi-branch models.
**Workflow.** After training their deep learning models, developers submit them in the ONNX format together with the expected execution latency. _Chrion_ then derives a directed acyclic graph of the model and profiles it to obtain the operator execution times on heterogeneous platforms. The DAG is significant because it not only determines the execution order of the model's operators but also affects the memory copies between host and device memory.
| Notation | Description |
| --- | --- |
| \(G\) | a computational graph |
| \(V\) | a set of kernel functions \(\{v_{0},v_{1},\cdots,v_{n-1}\}\) |
| \(E\) | execution dependencies among vertices |
| \(\text{pred}(v_{i})\) | immediate predecessors of vertex \(v_{i}\) |
| \(\text{succ}(v_{i})\) | immediate successors of vertex \(v_{i}\) |
| \(P\) | a list of available processors |
| \(W\) | an \(n\times(k+1)\) matrix of performance |
| \(C\) | an \(n\times n\) matrix of communication data size |
| \(M\) | an \(n\times 4\) matrix of memory consumption |
| \(n\) | number of vertices in the graph |
| \(k\) | number of available CPU cores |
| \(b\) | PCIe bandwidth between CPU and GPU |
| \(\text{EST}(v_{i},p_{j})\) | earliest start time of \(v_{i}\) if scheduled on \(p_{j}\) |
| \(\text{EFT}(v_{i},p_{j})\) | earliest finish time of \(v_{i}\) if scheduled on \(p_{j}\) |
| \(\text{AFT}(v_{i})\) | actual finish time of \(v_{i}\) |
| \(O\) | execution order |
| \(S\) | device selection |
| \(k^{*}\) | selected number of CPU cores |
| \(L\) | execution latency |
| \(M\) | GPU memory requirement |

Table 3: Notation Table.
Data movement happens when two adjacent operators are scheduled to different devices. Graph partition is the core component of _Chrion_, composed of two algorithms: topological sorting and device selection. The former decides the execution order of the computational graph, and the latter chooses the most suitable execution device for each operator. When a user sends an inference request, the runtime checks whether the model's weights have already been copied to GPU memory. If the model is not in GPU memory and the remaining memory cannot meet the minimum requirements of the computation, the runtime uses a replacement algorithm to offload cached models back to host memory. Finally, _Chrion_ executes the model operators on the CPU and GPU in parallel and returns the execution results within the SLO requirements.
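The check-and-load step of the runtime can be summarised with a small cache abstraction. The sketch below is an illustrative LRU variant; the eviction policy, the size accounting and the `load_fn`/`offload_fn` callbacks are our own placeholders rather than _Chrion_'s actual implementation:

```
from collections import OrderedDict

class GpuModelCache:
    """Keep at most `capacity_mb` of model weights resident in GPU memory,
    offloading the least-recently-used models back to host memory."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.resident = OrderedDict()  # model_id -> weight size in MB

    def ensure_loaded(self, model_id, size_mb, load_fn, offload_fn):
        if model_id in self.resident:           # hit: refresh the LRU order
            self.resident.move_to_end(model_id)
            return
        # Miss: evict LRU models until the requested one fits.
        while self.resident and sum(self.resident.values()) + size_mb > self.capacity_mb:
            victim, _ = self.resident.popitem(last=False)
            offload_fn(victim)                  # copy weights GPU -> host memory
        load_fn(model_id)                       # copy weights host memory -> GPU
        self.resident[model_id] = size_mb
```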
### Topological Sorting
Given a computational graph, it is critical to determine the execution order of its vertices, which influences the inference efficiency on heterogeneous platforms and the SLO satisfaction rate for users. Depth-first search (DFS) and breadth-first search (BFS) are the two most commonly used graph traversal algorithms and are widely used for topological sorting. Both start from the root nodes: DFS explores feasible kernel functions as far as possible before backtracking, while BFS explores vertices in order of their distance from the root nodes. However, Figure 8 demonstrates that simply applying BFS or DFS does not achieve the ideal benefit. Assume there are two available devices and we always assign the next node, in the given order, to the more idle device. Scheduling kernel functions according to DFS results in low parallelism, while BFS causes frequent context switches between consecutive kernels. The ideal is to explore parallelism with BFS while maintaining locality with DFS to reduce context switches.
```
Input:  computational graph G = (V, E)
Output: schedule order O
begin
    initialize an empty order O
    initialize an empty ready queue Q
    initialize a mark list marked ← [false] × n
    for v_i in V do
        if pred(v_i) = ∅ then
            push v_i into Q
        end if
    end for
    while Q is not empty do
        pop the first element v_curr from Q
        ready ← true
        for v_j in pred(v_curr) do
            ready ← ready and marked[v_j]
        end for
        if ready then
            marked[v_curr] ← true
            append v_curr to O
            push the unmarked successors of v_curr into Q
        else
            push v_curr into Q
        end if
    end while
end
```
**Algorithm 1** Topological Sorting
Based on the observations above, we design a topological sorting method that takes advantage of both BFS and DFS, with pseudo-code in Algorithm 1. We use BFS to explore potential parallelism by distributing different branches to different devices for execution, and DFS to maintain locality by avoiding closely-related neighbouring kernel functions being scheduled across devices. In Algorithm 1, we first initialize three variables: the order \(O\) to be returned by the algorithm, a ready queue \(Q\) storing those kernels that have no predecessors or whose predecessors have all been scheduled, and a mark list to record whether a kernel has been scheduled.
Figure 8: Topological Sorting Optimization.
Figure 7: System Architecture.
Different from the classical BFS algorithm, before appending the ready kernel \(v_{curr}\) to \(O\), we check whether its child function can be merged. If the child has only one predecessor and its communication time is larger than its average execution time across devices, maintaining locality achieves more benefit, and this is where DFS plays its role in the algorithm.
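A Python rendering of Algorithm 1 may help make the BFS/DFS combination concrete. The merge test below paraphrases the criterion just described (a sole-predecessor child whose transfer time exceeds its average execution time is kept next to its parent); the variable names are ours:

```
from collections import deque

def hybrid_topo_sort(n, pred, succ, W, C, bandwidth):
    """BFS-style topological order with a DFS-style locality check.

    pred/succ: adjacency lists of predecessor/successor indices.
    W[i]: profiled latencies of v_i; C[i][j] / bandwidth: transfer time of edge (i, j).
    """
    order, marked = [], [False] * n
    queue = deque(i for i in range(n) if not pred[i])  # entry functions
    while queue:
        curr = queue.popleft()
        if marked[curr]:
            continue
        marked[curr] = True
        order.append(curr)
        for child in succ[curr]:
            avg_exec = sum(W[child]) / len(W[child])
            if len(pred[child]) == 1 and C[curr][child] / bandwidth > avg_exec:
                queue.appendleft(child)  # DFS merge: keep the child next to its parent
            elif all(marked[p] for p in pred[child]):
                queue.append(child)      # BFS: ready children go to the back
    return order
```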
### Device Selection
The increasing number of model operators and the variable hardware performance make it impractical to enumerate all device selections to find the optimum. We design an adaptive algorithm that provides a fine-grained device selection for each operator in the computational graph. Its core idea is to use a greedy method to select the appropriate device for each operator's forward computation: comparing the operator's execution time with the availability of the hardware, the greedy algorithm always schedules the operator to the device on which it finishes execution earliest.
We show the details of the device selection algorithm in Algorithm 2. Its inputs are the DAG and the topological order obtained by Algorithm 1, and its outputs are the optimal resource configuration and the device selection for each operator. _Chrion_ computes the cost for each resource configuration. When the number of chosen CPU cores equals \(k^{\prime}\), we first initialize the available time of each device (Line 5). We then iterate over the topological order to find the best device for each operator (Lines 8-14). For each operator \(o_{i}\) in the order \(O\), _Chrion_ calculates the earliest finish time and the memory requirement, and then selects the device achieving the least cost.
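Since the listing of Algorithm 2 is not reproduced in this version, the following sketch reconstructs the greedy step from the description above: for every candidate core count, each operator in the topological order is placed on whichever device minimises its earliest finish time plus the \(\alpha\)-weighted memory penalty. It reuses the cost model of Section 3 and is a paraphrase under our own simplifications, not the verbatim algorithm:

```
def greedy_device_selection(order, pred, W, C, Mem, bandwidth, alpha, k_max):
    """Pick (S, k*) greedily; notation follows Section 3 and evaluate_plan above."""
    best = None
    for k in range(1, k_max + 1):       # enumerate CPU core counts k'
        S, aft = {}, {}
        avail = {"gpu": 0.0, "cpu": 0.0}
        for i in order:
            finish, mem = {}, {}
            for dev in ("gpu", "cpu"):
                est = avail[dev]
                for m in pred[i]:       # predecessor outputs, plus PCIe if crossing
                    cross = (S[m] == 0) != (dev == "gpu")
                    est = max(est, aft[m] + (C[m][i] / bandwidth if cross else 0.0))
                finish[dev] = est + (W[i][0] if dev == "gpu" else W[i][k])
                mem[dev] = sum(Mem[i][1:4]) if dev == "gpu" else 0.0
            dev = min(finish, key=lambda d: finish[d] + alpha * mem[d])
            S[i] = 0 if dev == "gpu" else 1
            aft[i] = finish[dev]
            avail[dev] = aft[i]
        L = max(aft.values())
        M = sum(sum(Mem[i][1:4]) for i in order if S[i] == 0)
        if best is None or L + alpha * M < best[0]:
            best = (L + alpha * M, S, k)
    return best  # (cost, device selection S, core count k*)
```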
## 5 Implementation
_Chrion_ is implemented on top of ONNXRuntime [3], a cross-platform inference and training accelerator for machine learning workloads. Figure 9 shows the high-level design of ONNXRuntime. It first converts the user's input graph into an in-memory graph representation, then performs provider-independent graph optimizations such as operator fusion and constant folding. The execution providers are the intersection of the user-defined and system-provided execution providers. The graph partitioner splits the graph into sub-graphs and schedules them on different devices. ONNXRuntime implements a simple partition technique that assigns the graph according to the order of the user-defined providers. Finally, the parallel and distributed graph runners execute the sub-graphs on the underlying devices, like CPU and CUDA, once users' input data arrive.
Our modifications to ONNXRuntime focus on the following. First, on top of the default in-memory graph, we implement the graph parsing and profiling components; these are kept independent of ONNXRuntime for flexibility. Second, we re-implement the graph partitioning algorithm: different from the default partition following the user-defined provider order, _Chrion_ provides a dynamic graph partition algorithm that allows parallel execution between CPUs and GPUs. Finally, we re-design the execution mechanism of the providers. Since the default execution providers make no distinction between CPU and GPU operators, we add two additional operator types, a GPU operator and a memory-copy operator, and assign these operators to different thread pools for execution.
Figure 9: ONNXRuntime Architecture.
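For contrast, the stock behaviour that _Chrion_ replaces is visible through the public ONNXRuntime Python API: operators are assigned to the first provider, in the user-given list, that supports them, with no CPU/GPU co-execution. A minimal example (the model path is a placeholder):

```
import numpy as np
import onnxruntime as ort

# Default partitioning: each supported operator goes to the first provider
# in this list, so a CUDA-capable build runs (almost) the whole graph on GPU.
session = ort.InferenceSession(
    "lstm.onnx",  # placeholder path to an exported RNN model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
x = np.random.randn(96, 8, 64).astype(np.float32)  # illustrative input shape
outputs = session.run(None, {session.get_inputs()[0].name: x})
```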
## 6 Evaluation
### Experiment Setup
We evaluate _Chrion_ to assess its performance, in terms of latency and memory requirement, when serving recurrent neural networks. Table 4 and Table 5 show the hardware and software configurations of our experiments. In the rest of the section, our experiments are designed to answer the following questions.
* **How can _Chrion_ optimize response time or memory requirement for inference workloads?** The end-to-end experiments in §6.2 show that _Chrion_ can reduce inference latency by 13.3% in the latency-optimal scenario and reduce GPU memory requirement by 60.0% in the memory-optimal scenario.
* **How does the hyperparameter affect graph partition?** The component analysis of \(\alpha\) in §6.3 shows that we can adjust \(\alpha\) to trade off GPU memory requirement against inference latency.
* **How does _Chrion_ perform on real workloads in local clusters?** We test _Chrion_ with real-world workloads in §6.4, where it reduces the model swapping rate from 44% to 9%.
### End-to-End Experiments
We first examine how _Chrion_ performs under different execution patterns. The structure of the baseline model is \(\texttt{num\_of\_layers}=12,\texttt{batch\_size}=8,\texttt{io\_size}=64,\texttt{seq\_len}=96\). We experiment with four patterns: GPU, CPU, Latency-Optimal and Memory-Optimal, and compare their metrics in Table 6. We set \(\alpha\) to zero in the Latency-Optimal pattern, and then increase it to minimize the GPU memory requirement while guaranteeing inference latency in the Memory-Optimal pattern. We also report the reduction rates of GPU memory footprint and execution latency for the Latency-Optimal and Memory-Optimal patterns. Results show that in the Latency-Optimal pattern, _Chrion_ not only reduces inference latency by 12.3%, from 59.3 ms to 52.0 ms, but also decreases the GPU memory footprint from 1643.0 MB to 948.1 MB, by 42.3%. As we relax the latency restriction, without exceeding the latency of the GPU pattern, we achieve a maximum memory reduction of 61.7%.
We further explore whether _Chrion_ adapts to different RNN variants. We introduce a wider range of configurations by varying the structural parameters, including the number of layers, batch size, input/output dimension, and sequence length. Table 6 shows twelve such variants, whose parameters are consistent with the baseline structure except for the specified one. Note that a negative latency reduction rate in the Latency-Optimal pattern means hybrid execution on CPUs and GPUs is not suitable for that model structure, let alone the Memory-Optimal pattern. The number of layers and the sequence length play a critical role: they decide how well the LSTM can collaboratively utilize CPUs and GPUs for inter-operator parallelism. Table 6 shows that although a smaller number of layers can still achieve memory optimization, it leads to poor latency, in both the Latency-Optimal and Memory-Optimal patterns, _e.g._ when the number of layers is four. The same reasoning explains why a larger sequence length brings little latency benefit. The batch size and the input/output dimension determine the computation efficiency of each RNN cell on the CPU and GPU. Since GPUs can exploit a larger batch size or input/output dimension through their parallelism, _Chrion_ is not suitable when the batch size equals 16 or the input/output dimension is 128 in Table 6.
### Effectiveness of \(\alpha\)
We evaluate the effectiveness of \(\alpha\) in trading off inference latency against GPU memory requirement. We experiment with the baseline LSTM structure and three variants; Figure 10 shows the results. In addition to the GPU memory and execution latency, which vary as we increase \(\alpha\) from 0.0 to 1.0 with a step size of 0.1, we also plot the execution latency of the vanilla execution patterns. Figure 10(a) shows the baseline structure, while Figures 10(b), 10(c) and 10(d) are selected from Table 6 with a different batch size, input/output dimension, and sequence length, respectively. As expected, the choice of the hyperparameter determines the preference for latency or memory optimization. All sub-figures show a similar tendency: as \(\alpha\) increases, the GPU memory requirement decreases but the latency keeps increasing. According to Equation 5, a larger \(\alpha\) means we move more kernels to the CPU to explore inter-operator parallelism on multi-core CPUs, but this may result in SLO violations because of memory movement or long CPU execution.
| **Component** | **Specification** | **Component** | **Specification** |
| --- | --- | --- | --- |
| CPU Device | Intel Xeon E5-2685 v3 | GPU Device | NVIDIA RTX 1080Ti |
| Memory Capacity | 128 GB | GPU Memory | 12 GB |
| Number of Cores | 24 (12 physical cores) | GPU SM Cores | 4352 |
| Shared LLC Size | 30 MB | Operating System | Ubuntu 18.04 |

Table 4: Hardware Properties for Experiments.
| **Component** | **Specification** | **Component** | **Specification** |
| --- | --- | --- | --- |
| ONNXRuntime | v1.10.0 | onnx | v1.12.0 |
| CUDA | 11.0 | cuDNN | 8 |
| Python | 3.8 | | |

Table 5: Software Properties for Experiments.
When \(\alpha\) equals \(0.3\) for the baseline structure, _Chrion_ reaches its critical point of memory optimization without violating the SLO. When \(\alpha\) is larger than \(0.3\), however, the inference latency exceeds that of the GPU pattern, meaning _Chrion_ cannot optimize memory any further.
### Local Cluster Evaluation
To evaluate _Chrion_'s performance on real-world workloads, we test it in a local cluster. To show _Chrion_'s impact on SLO, we first select three models, including the baseline, from Table 6. For each model, we duplicate it 20 times and serve the copies in three patterns: GPU, Latency-Optimal and Memory-Optimal. Figure 11 shows the histogram of execution times for each model in the different patterns. Taking the baseline model as an example, the mode of the execution time in the GPU pattern lies between 57.5 and 60.0 ms, while that of the Latency-Optimal pattern lies around 50.0 ms. Since the Memory-Optimal pattern relaxes the execution time restriction, its mode is slightly larger, in exchange for a reduction in memory footprint. However, we also observe that the Memory-Optimal pattern may produce long tail latencies, especially in Figures 11(a) and 11(b).
\[\text{slo\_violation}=\frac{\#\text{ of violations}}{\#\text{ of invocations}} \tag{12}\]
\[\text{swapping\_rate}=\frac{\#\text{ of swappings}}{\#\text{ of invocations}} \tag{13}\]
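Both metrics are plain counters over the request log; a minimal sketch:

```
def serving_metrics(log):
    """log: iterable of (latency_ms, slo_ms, swapped) tuples, one per invocation."""
    n = violations = swaps = 0
    for latency, slo, swapped in log:
        n += 1
        violations += latency > slo  # numerator of Equation 12
        swaps += swapped             # numerator of Equation 13
    return violations / n, swaps / n
```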
We further use the nine feasible models listed in Table 6 and simulate a workload in which each model's requests arrive uniformly. Our evaluation metrics are the SLO violation rate and the model swapping rate, defined in Equation 12 and Equation 13; Table 7 shows the results of the local cluster evaluation. We first compare the SLO violation rate between the GPU, Latency-Optimal and Memory-Optimal patterns.
| **Pattern** | **Metric** | **Baseline** | **Layers 4** | **Layers 8** | **Layers 16** | **Batch 1** | **Batch 2** | **Batch 4** | **Batch 16** | **I/O 32** | **I/O 128** | **Seq 32** | **Seq 64** | **Seq 128** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU | GPU Memory (MB) | 1643.0 | 777.0 | 1209.0 | 2075.0 | 1643.0 | 1643.0 | 1643.0 | 1643.0 | 1533.0 | 2461.0 | 777.0 | 1209.0 | 2075.0 |
| GPU | Latency (ms) | 59.3 | 20.5 | 39.0 | 78.1 | 58.3 | 59.1 | 57.2 | 58.2 | 59.2 | 58.6 | 19.7 | 39.2 | 78.5 |
| CPU | Latency (ms) | 70.6 | 23.9 | 47.1 | 94.1 | 65.9 | 63.3 | 67.0 | 80.4 | 61.5 | 96.8 | 23.7 | 47.1 | 95.4 |
| Latency-Optimal | GPU Memory (MB) | 948.1 | 525.0 | 833.0 | 1189.0 | 959.0 | 935.0 | 1053.0 | 623.0 | 927.0 | 875.0 | 609.0 | 757.0 | 707.0 |
| Latency-Optimal | Memory Reduction (%) | 42.3 | 32.4 | 31.1 | 42.7 | 41.6 | 43.1 | 35.9 | 62.1 | 39.5 | 64.4 | 21.6 | 37.4 | 65.9 |
| Latency-Optimal | Latency (ms) | 52.0 | 21.2 | 36.8 | 71.1 | 48.2 | 47.6 | 55.7 | 67.5 | 52.4 | 72.0 | 18.8 | 34.2 | 82.1 |
| Latency-Optimal | Latency Reduction (%) | 12.3 | -3.2 | 5.6 | 8.9 | 17.4 | **19.4** | 2.7 | -16.0 | 11.6 | -22.9 | 4.2 | 12.7 | -4.5 |
| Memory-Optimal | GPU Memory (MB) | 629.0 | — | 833.0 | 675.0 | 607.0 | 591.0 | 1053.0 | — | 581.0 | — | 609.0 | 533.0 | — |
| Memory-Optimal | Memory Reduction (%) | 61.7 | — | 31.1 | **67.5** | 63.1 | 64.0 | 35.9 | — | 62.1 | — | 21.6 | 55.9 | — |
| Memory-Optimal | Latency (ms) | 58.4 | — | 36.8 | 77.3 | 57.2 | 55.8 | 55.7 | — | 55.8 | — | 18.8 | 39.1 | — |
| Memory-Optimal | Latency Reduction (%) | 1.6 | — | 5.6 | 1.0 | 1.9 | 5.7 | 2.7 | — | 5.7 | — | 4.2 | 0.4 | — |

Table 6: End-to-End Experiments. The structure of the baseline model is \(\text{num\_of\_layers}=12,\text{batch\_size}=8,\text{io\_size}=64,\text{seq\_len}=96\). We adjust only one structural parameter for the other models, and compare their performance in four patterns: GPU, CPU, Latency-Optimal, and Memory-Optimal. "—" means _Chrion_ is not able to optimize such a model structure.
Figure 10: \(\alpha\)’s Effect on Balancing Execution Latency and Memory Requirement. The sub-caption of each sub-figure is a quadruple of \(\langle\text{num\_of\_layers},\text{batch\_size},\text{io\_size},\text{seq\_len}\rangle\). The orange and blue lines describe the memory and latency as the hyper-parameter \(\alpha\) varies, and two red horizontal lines represent the latency of vanilla execution patterns.
The Latency-Optimal pattern not only decreases the memory footprint but also reduces the execution time. The Memory-Optimal pattern, however, increases the \(\mathtt{slo\_violation}\) rate because of its relaxed latency restriction. We also compare the model swapping rates when serving multiple deep learning models on one inference server. Our experiments confirm that model swapping between CPUs and GPUs is common because of the limited GPU memory capacity. Compared with executing the full models on the GPU, the Latency-Optimal pattern reduces the model swapping rate from 44% to 9%. If we further relax the latency restriction within the SLO guarantee, the model swapping rate drops to 0 because the server can keep all nine models in GPU memory.
## 7 Related Work
**Heterogeneous Computing.** Modern computing clusters are equipped with heterogeneous platforms, i.e., CPUs and GPUs, for deep learning workloads. Many existing works focus on resource management and scheduling to exploit their computation potential. CHARM [52] provides collaborative resource management between CPUs and GPUs for latency-critical tasks; it trades off resource consumption against SLO satisfaction by dynamically adjusting resource quotas among platforms. Allox [33] builds on the performance gap between heterogeneous platforms for different model structures. Dopia [18] improves the performance of data-intensive workloads by addressing the limited memory bandwidth of integrated architectures.
**LSTM Application and Optimization.** LSTMs have extensive applications in various fields thanks to their capability of capturing correlations in time-series data. In finance, since markets evolve as time series, LSTMs can capture their characteristics and make predictions; for example, they are utilized for stock market price prediction [16, 36] and financial fraud detection [10] to improve business security. In biology, LSTMs are integrated with CNNs for protein structure prediction [17], diabetes detection [40], and protein-protein interaction prediction [9]. LSTMs are also applied in transportation, for example to predict traffic flow over continuous periods of time [14, 53].
Because of this widespread application, optimizing LSTM computation efficiency on heterogeneous platforms is necessary. GRNN [28] is an RNN inference library that improves data reuse and alleviates synchronization overhead when serving RNNs on GPUs. Zhang _et al._ identify poor data reuse as the root cause of high execution latency and design DeepCPU [50], a CPU-based RNN serving library claimed to speed up RNN inference by ten times. Since batching is a common technique to improve system efficiency, BatchMaker [23] proposes cellular batching to improve RNN inference throughput. _Chrion_ reduces the GPU memory footprint when serving RNNs and enlists the CPU for collaborative forward computation, which is orthogonal to these related works.
## 8 Conclusion
Developing high-performance and cost-efficient deep learning serving systems is critical to machine learning practitioners. However, existing serving systems face severe challenges, including the GPU memory bottleneck and the under-utilization of CPUs. We observe the opportunity of using idle CPUs for inter-operator parallelism across heterogeneous platforms when handling inference workloads, and design _Chrion_, an optimized serving system for RNNs that collaboratively utilizes CPUs and GPUs for forward computation. By scheduling part of the operators on CPUs, _Chrion_ reduces the GPU memory footprint needed to serve RNNs within the SLO guarantee and improves CPU utilization. Our end-to-end experiments show that _Chrion_ can reduce inference latency by at most 19.4% and GPU memory requirement by at most 67.5% for multi-layer RNNs.
Figure 11: Histogram of Execution Time in Cluster Evaluation. The sub-caption of each sub-figure is a quadruple of \(\langle\mathtt{num\_of\_layers,batch\_size,io\_size,seq\_len}\rangle\). Note that the x-axis range of each sub-graph is different, and the y-axis is processed with logarithm for better visual effect.
|
2305.01035 | Random neural networks for rough volatility | We construct a deep learning-based numerical algorithm to solve
path-dependent partial differential equations arising in the context of rough
volatility. Our approach is based on interpreting the PDE as a solution to an
SPDE, building upon recent insights by Bayer, Qiu and Yao, and on constructing
a neural network of reservoir type as originally developed by Gonon,
Grigoryeva, Ortega. The reservoir approach allows us to formulate the
optimisation problem as a simple least-square regression for which we prove
theoretical convergence properties. | Antoine Jacquier, Zan Zuric | 2023-05-01T18:49:15Z | http://arxiv.org/abs/2305.01035v1 | # Random neural networks for rough volatility
###### Abstract.
We construct a deep learning-based numerical algorithm to solve path-dependent partial differential equations arising in the context of rough volatility. Our approach is based on interpreting the PDE as a solution to an SPDE, building upon recent insights by Bayer, Qiu and Yao, and on constructing a neural network of reservoir type as originally developed by Gonon, Grigoryeva, Ortega. The reservoir approach allows us to formulate the optimisation problem as a simple least-square regression for which we prove theoretical convergence properties.
Key words and phrases: Rough volatility, SPDEs, neural networks, reservoir computing. 2020 Mathematics Subject Classification: 60G22, 35K10, 65C20, 68T07, 91G60. AJ acknowledges financial support from the EPSRC/T032146 grant. ZZ is supported by the EPSRC/S023925 CDT in Mathematics of Random Systems: Analysis, Modelling and Simulation. We would like to thank Lukas Gonon, Christian Bayer and Jinniao Qiu for helpful discussions. The Python code is available at ZuricZ/RWIN_PDE_solver.
## 1. Introduction
In recent years, a fundamental shift from classical modelling towards so-called rough stochastic volatility models has taken place. These "rough" models were first proposed by Gatheral, Jusselin, Rosenbaum [28] and by Bayer, Gatheral, Friz [4], and have since sparked a great deal of research, because of their ability to capture stylised facts of volatility time series and of option prices more accurately, while remaining parsimonious. In essence, they are a class of continuous-path stochastic volatility models, where the instantaneous volatility is driven by a stochastic process with paths rougher than those of Brownian motion, typically modelled by a fractional Brownian motion [51] with Hurst parameter \(H\in(0,1)\). The reason for this drastic paradigm shift can be found not only under the historical measure, where the roughness of the time series of daily log-realised variance estimates suggests Hölder regularity of \(H\approx 0.1\), but also under the pricing measure, where rough volatility models are able to reproduce the power-law behaviour of the ATM volatility skew. Since then, a slew of papers have appeared, providing closed-form expressions for the characteristic functions of rough Heston models [22], machine learning techniques for calibration [40], microstructural foundations [21], and solvers for option pricing partial differential equations (PDEs) [46, 5], among others.
Dating back to the Black-Scholes equation [11], PDEs have been used to model the evolution of the prices of European-style options. However, rough volatility models give rise to a non-Markovian framework, where the value function for a European option is not deterministic anymore, but is instead random and satisfies a backward stochastic partial differential equation (BSPDE) as was shown in [5]. Moreover, even in classical diffusive models, the so-called curse of dimensionality poses a challenge when solving PDEs in high dimension; until recently, only the backward stochastic differential equation (BSDE) approach by [54] was available to tackle this, which is not really feasible in dimension beyond six.
On a positive note, machine learning methods have spread inside quantitative finance in recent years, and neural networks in particular have become a powerful tool to overcome problems in high-dimensional situations, because of their superior computational performance across a wide range of applications [15, 31, 59]; more precisely in the context of PDEs, examples of applications thereof can be found in [19, 36, 61, 46, 60, 6]. For a more thorough literature review on the use of neural networks in finance and finance-related PDEs, we refer the reader to the surveys in [7, 30].
In this paper, we focus on the works by Hure, Pham and Warin [42], and by Bayer, Qiu and Yao [5], where the classical backward resolution technique is combined with neural networks to estimate both the value function and its gradient. Not only does this approach successfully reduce the curse of dimensionality, but also appears more effective in both accuracy and computational efficiency than existing Euler-based approaches.
Besides research on numerical aspects, a lot of progress has been made on the theoretical foundations for neural network-based methods, in particular showing that they are able to approximate solutions of certain types of PDEs [20, 43, 57, 34]. These results are significant as they show that deep neural networks can be used to solve complex problems that were previously thought intractable. However, in practice, optimal parameters of any given neural network minimising a loss function ultimately have to be calculated approximately. This is usually done through some kind of stochastic gradient descent (SGD) algorithm, which inadvertently introduces an optimisation error. Because of the
non-convexity of the network's loss surface and the stochastic nature of the SGD, this optimisation error is notoriously hard to treat rigorously. One attempt, by Gonon [32], instead involves neural networks in which only certain weights are trainable while the remaining ones are randomly fixed; his results suggest that these random-weight neural networks are, in fact, capable of learning non-degenerate Black-Scholes-type PDEs without succumbing to the curse of dimensionality. Following this, we combine the classical BSDE approach [54, 14] with random-weight neural networks (RWNNs) [41, 55, 56].
Our final algorithm then reduces to a least-square Monte-Carlo, as introduced by Longstaff and Schwartz [49] (see also [1] for related applications), where the usually arbitrary choice of basis is 'outsourced' to the reservoir of the corresponding RWNN. The basis is computationally efficient and ultimately allows us to express the approximation error in terms of the number of stochastic nodes in the network. Moreover, vectorisation of the Randomised Least Square along the sampling direction allows us to evaluate the sum of outer tensor products using the einsum function (available in NumPy and PyTorch) and achieve an even greater speed-up.
To summarise, in contrast with Bayer-Qiu-Yao [5], our numerical scheme employs RWNNs rather than conventional feed-forward neural networks, resulting in significantly faster training times without sacrificing the accuracy of the scheme. Moreover, this structure allows us to provide error bounds in terms of the number of hidden nodes, granting additional insight into the network's performance. Given the comparable performance of RWNNs and conventional feed-forward neural networks, we argue that this paper illuminates an essential lesson, namely that the additional complexity of deep neural networks can sometimes be redundant, at the cost of precise error bounds. We note in passing that RWNNs have already been used in finance to price American options [37], for financial data forecasting [48], and for PIDEs [34], and we refer the interested reader to [16] for a general overview of their applications in data science.
The paper is structured as follows: Section 2 provides a brief overview of Random-weight Neural Networks (RWNNs), including their key features and characteristics. In Section 3, we outline the scheme for the Markovian case and discuss the non-Markovian case in Section 4. The convergence analysis is presented in Section 5. Additionally, Section 6 presents numerical results, which highlight the practical relevance of the scheme and its performance for different models. Some of the technical proofs are postponed to Appendix C to ease the flow of the paper.
**Notations:** \(\mathbb{R}^{+}=[0,\infty)\) represents the non-negative real numbers; \(\aleph\) refers to a random neural network, defined in Section 2; for an open subset \(E\subset\mathbb{R}^{d}\), \(1\leq p\leq\infty\) and \(s\in\mathbb{N}\) we define the Sobolev space
\[\mathcal{W}^{s,p}(E,\mathbb{R}^{m})\coloneqq\Big\{f\in L^{p}(E,\mathbb{R}^{m}):\;\partial^{\boldsymbol{\alpha}}f\in L^{p}(E,\mathbb{R}^{m}),\text{ for all }|\boldsymbol{\alpha}|\leq s\Big\},\]
where \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{d})\), \(|\boldsymbol{\alpha}|=\alpha_{1}+\ldots+\alpha_{d}\), and the derivatives \(\partial^{\boldsymbol{\alpha}}f=\partial_{x_{1}}^{\alpha_{1}}\cdots\partial_{x_{d}}^{\alpha_{d}}f\) are taken in a weak sense.
## 2. Random-weight neural network (Rwnn)
Neural networks with random weights first appeared in the seminal works by Barron [2, 3], but a more modern version was proposed by Huang [41] under the name _Extreme learning machine_. Today these networks are known under different names: reservoir networks, random feature or random-weight networks; we choose to follow the latter as it sounds more explicit to us.
**Definition 2.1** (Neural network).: Let \(L,N_{0},\ldots,N_{L}\in\mathbb{N}\), \(\varrho:\mathbb{R}\to\mathbb{R}\) and for \(l=1,\ldots,L\) let \(w_{l}:\mathbb{R}^{N_{l-1}}\to\mathbb{R}^{N_{l}}\) be an affine function. A function \(F:\mathbb{R}^{N_{0}}\to\mathbb{R}^{N_{L}}\) defined as
\[F=w_{L}\circ F_{L-1}\circ\cdots\circ F_{1},\quad\text{with }F_{l}=\varrho\circ w _{l}\quad\text{ for }l=1,\ldots,L-1,\]
is called a _neural network_, with the activation function \(\varrho\) applied component-wise. \(L\) denotes the total number of layers, \(N_{1},\ldots,N_{L-1}\) denote the dimensions of the hidden layers and \(N_{0}\) and \(N_{L}\) those of the input and output layers respectively. For each \(l\in\{1,\ldots,L\}\) the affine function \(w_{l}:\mathbb{R}^{N_{l-1}}\to\mathbb{R}^{N_{l}}\) is given as \(w_{l}(\mathbf{x})=\mathrm{A}^{(l)}\mathbf{x}+\mathrm{b}^{(l)}\), for \(\mathbf{x}\in\mathbb{R}^{N_{l-1}}\), with \(\mathrm{A}^{(l)}\in\mathbb{R}^{N_{l}\times N_{l-1}}\) and \(\mathrm{b}^{(l)}\in\mathbb{R}^{N_{l}}\). For any \(i\in\{1,\ldots,N_{l}\}\) and \(j\in\{1,\ldots,N_{l-1}\}\), the number \(A^{(l)}_{ij}\) is interpreted as the weight of the edge connecting node \(j\) of layer \(l-1\) to node \(i\) of layer \(l\).
A _random-weight neural network_ (RWNN) is a neural network where the hidden layers are randomly sampled from a given distribution and then fixed; consequently, only the last layer is trained: out of all the parameters \((\mathrm{A}^{(l)},\mathrm{b}^{(l)})_{l=1,\ldots,L}\) of the \(L\)-layered neural network, the parameters \((\mathrm{A}^{(1)},\mathrm{b}^{(1)},\ldots,\mathrm{A}^{(L-1)},\mathrm{b}^{(L-1)})\) are randomly sampled and frozen, and only \((\mathrm{A}^{(L)},\mathrm{b}^{(L)})\) from the last layer are trained.
The training of such an RWNN then reduces to a convex optimisation problem, which makes it easier to handle both practically and theoretically. However, since only part of the parameters are trained, the overall capacity and expressivity may be reduced. While it remains unclear to what extent random neural networks retain the powerful approximation properties of general deep neural networks, these questions have been addressed to some extent in e.g. [33, 53], where learning error bounds for RWNNs were proved.
Denote by \(\aleph_{\infty}^{\varrho}(d_{0},d_{1})\) the set of random neural networks from \(\mathbb{R}^{d_{0}}\) to \(\mathbb{R}^{d_{1}}\) with activation function \(\varrho\); we shall drop the explicit reference to input and output dimensions in the notation whenever they are clear from the context. Moreover, for any \(L,K\in\mathbb{N}\), \(\aleph_{L,K}^{\varrho}\) represents a random neural network with a fixed number \(L\) of hidden layers and fixed input and output dimension \(K\) for each hidden layer. We now give a precise definition of a single layer \(\aleph_{K}^{\varrho}\coloneqq\aleph_{1,K}^{\varrho}\), which we will use for our approximation.
**Definition 2.2** (Single layer RWNN).: Let \((\widetilde{\Omega},\widetilde{\mathcal{F}},\widetilde{\mathbb{P}})\) be a probability space on which the iid random variables \(\mathrm{A}_{k}:\widetilde{\Omega}\to\mathbb{R}^{d}\) and \(b_{k}:\widetilde{\Omega}\to\mathbb{R}\), respectively corresponding to weights and biases, are defined. Let \(\boldsymbol{\phi}=\left\{\phi_{k}\right\}_{k\geq 1}\) denote a sequence of random basis functions, where each \(\phi_{k}:\mathbb{R}^{d}\to\mathbb{R}\) is of the form
\[\phi_{k}(\mathbf{x})\coloneqq\varrho\left(\mathrm{A}_{k}^{\top}\mathbf{x}+b_{k}\right),\qquad\mathbf{x}\in\mathbb{R}^{d},\]
with \(\varrho:\mathbb{R}\to\mathbb{R}\) a Lipschitz continuous activation function. For an output dimension \(m\) and \(K\) hidden units, we define the _reservoir_ or _random basis_ as \(\Phi_{K}\coloneqq\phi_{1:K}=(\phi_{1},\ldots,\phi_{K})\) and the random network \(\aleph_{K}^{\varrho}\) with parameter \(\Theta=\left(\theta_{1},\ldots,\theta_{m}\right)^{\top}\in\mathbb{R}^{m\times K}\) as the map
\[\aleph_{K}^{\varrho}:\mathbf{x}\mapsto\Psi_{K}(\mathbf{x};\Theta)\coloneqq \Theta\Phi_{K}(\mathbf{x}).\]
Thus, for each output dimension \(j\in\{1,\ldots,m\}\), \(\aleph_{K}^{\varrho}\) produces a linear combination of the first \(K\) random basis functions \(\theta_{j}^{\top}\phi_{1:K}\coloneqq\sum_{k=1}^{K}\theta_{j,k}\phi_{k}\).
**Remark 2.3**.: In this paper, we will make use of the more compact vector notation
\[\Phi_{K}:\mathbb{R}^{d}\ni\mathbf{x}\mapsto\boldsymbol{\varrho}(\mathrm{A} \mathbf{x}+\mathrm{b})\in\mathbb{R}^{K},\]
where \(\boldsymbol{\varrho}:\mathbb{R}^{K}\to\mathbb{R}^{K}\) acts component-wise \(\boldsymbol{\varrho}(\mathbf{y})\coloneqq(\varrho(y_{1}),\ldots\varrho(y_{K}))\) and \(\mathrm{A}:\widetilde{\Omega}\to\mathbb{R}^{K\times d}\) and \(\mathrm{b}:\widetilde{\Omega}\to\mathbb{R}^{K}\) are the random matrix and bias respectively.
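To fix ideas, the following minimal NumPy sketch builds such a frozen random basis and the associated network \(\Psi_{K}(\cdot\,;\Theta)=\Theta\Phi_{K}(\cdot)\); the helper name `make_reservoir`, the ReLu activation and the uniform \(\mathcal{U}_{[-R,R]}\) sampling (the distribution used in our algorithms below) are illustrative assumptions rather than part of the definition.

```python
import numpy as np

def make_reservoir(d, K, R=1.0, seed=0):
    """Sample and freeze a random basis Phi_K : x -> varrho(A x + b).

    A in R^{K x d} and b in R^K are drawn once from U_[-R, R] and never
    trained; only the read-out Theta is fitted afterwards.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(-R, R, size=(K, d))
    b = rng.uniform(-R, R, size=K)

    def phi(x):
        # x: (n, d) batch of inputs; returns the (n, K) hidden features
        return np.maximum(x @ A.T + b, 0.0)   # ReLu applied component-wise

    return phi, A, b

# Psi_K(x; Theta) = Theta @ Phi_K(x) is linear in the trainable read-out
# Theta in R^{m x K}, so fitting it is a convex least-squares problem.
phi, A, b = make_reservoir(d=2, K=64)
Theta = np.zeros((1, 64))
values = phi(np.random.randn(5, 2)) @ Theta.T  # shape (5, 1)
```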
### Derivatives of ReLu-RWNN
In recent years, ReLu neural networks have been predominantly used in deep learning because of their simplicity, efficiency and ability to address the so-called vanishing gradient problem [47]. In many ways, ReLu networks also give a more tractable structure to the optimisation problem compared to their smooth counterparts such as \(\tanh\) and sigmoid. Gonon, Grigoryeva and Ortega [33] derived error bounds for the convergence of a single-layer RWNN with ReLu activations. While \(\varsigma(y)\coloneqq\max\{y,0\}\) performs well numerically, it is not differentiable at zero (see [10] for a short exposition on the chain rule in ReLu networks). As ReLu-RWNNs will be used in our approach to approximate solutions of partial differential equations, a discussion of its derivatives is in order. To that end we let \(\boldsymbol{\varsigma}(\mathbf{y})\coloneqq(\varsigma(y_{1}),\ldots,\varsigma(y_{K}))\) and \(\boldsymbol{H}(\mathbf{y})\coloneqq\mathds{1}_{(0,\infty)}(\mathbf{y})\in\mathbb{R}^{K}\) for \(\mathbf{y}\in\mathbb{R}^{K}\), where the indicator function is again applied component-wise.
**Lemma 2.4**.: _For any affine function \(\ell(\mathbf{x})=\mathrm{A}\mathbf{x}+\mathrm{b}\), with \(\mathrm{A}\in\mathbb{R}^{K\times d}\) and \(\mathrm{b}\in\mathbb{R}^{K}\), we have_
\[\nabla_{x}(\boldsymbol{\varsigma}\circ\ell)(\mathbf{x})=\mathrm{diag}( \boldsymbol{H}(\mathrm{A}\mathbf{x}+\mathrm{b}))\mathrm{A},\qquad\text{for a.e. }\mathbf{x}\in\mathbb{R}^{d}.\]
Proof.: Let \(\mathcal{A}\coloneqq\big{\{}\mathbf{x}\in\mathbb{R}^{d}:(\boldsymbol{\varsigma }\circ\ell)(\mathbf{x})=0\big{\}}=\big{\{}\mathbf{x}\in\mathbb{R}^{d}:\ell( \mathbf{x})\leq 0\big{\}}\). Then \((\boldsymbol{\varsigma}\circ\ell)(\mathbf{x})=\ell(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{R}^{d}\backslash\mathcal{A}\). Since \(\ell\) is Lipschitz, differentiability on level sets [23, Section 3.1.2, Corollary I] implies that \(\nabla_{\mathbf{x}}\left(\boldsymbol{\varsigma}\circ\ell\right)(\mathbf{x})= \mathbf{0}\in\mathbb{R}^{d}\) for almost every \(\mathbf{x}\in\mathcal{A}\), and hence
\[\nabla_{\mathbf{x}}(\boldsymbol{\varsigma}\circ\ell)(\mathbf{x})=\mathrm{diag }\left(\mathds{1}_{\{\ell(\mathbf{x})\in\mathbb{R}^{d}\backslash\mathcal{A} \}}\right)\nabla_{\mathbf{x}}\ell(\mathbf{x})=\mathrm{diag}\left(\mathds{1}_ {(0,\infty)}(\ell(\mathbf{x}))\right)\nabla_{\mathbf{x}}\ell(\mathbf{x})= \mathrm{diag}(\boldsymbol{H}(\mathrm{A}\mathbf{x}+\mathrm{b}))\mathrm{A}.\]
Thus by Lemma 2.4, the first derivative of \(\Psi_{K}(\cdot;\Theta)\in\aleph_{K}^{\varsigma}\) is equal to
\[\nabla_{\mathbf{x}}\Psi_{K}(\mathbf{x};\Theta)=\Theta\,\mathrm{diag}( \boldsymbol{H}(\mathrm{A}\mathbf{x}+\mathrm{b}))\mathrm{A}\qquad\text{for a.e. }\mathbf{x}\in\mathbb{R}^{d}. \tag{2.1}\]
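As a sanity check, (2.1) can be evaluated directly in code; the sketch below (the helper name is ours) assumes a single input \(\mathbf{x}\in\mathbb{R}^{d}\) and the notation of Remark 2.3.

```python
import numpy as np

def rwnn_gradient(x, A, b, Theta):
    """Almost-everywhere gradient of Psi_K(x; Theta) = Theta @ relu(A x + b).

    Implements (2.1): grad = Theta @ diag(H(A x + b)) @ A, with H the
    indicator of (0, infinity) applied component-wise.
    """
    pre = A @ x + b                       # (K,) pre-activations
    H = (pre > 0.0).astype(float)         # Heaviside, component-wise
    return Theta @ (H[:, None] * A)       # (m, d); row-wise diag(H) @ A
```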
The statements above hold only almost everywhere; it is thus appropriate to introduce a notion of _approximate differentiability_.
**Definition 2.5** (Approximate limit, [23, Section 1.7.2]).: Consider a Lebesgue-measurable set \(E\subset\mathbb{R}^{d}\), a measurable function \(f:E\to\mathbb{R}^{m}\) and a point \(\mathbf{x}_{0}\in E\). We say \(l\in\mathbb{R}^{m}\) is the approximate limit of \(f\) at \(\mathbf{x}_{0}\), and write \(\mathrm{ap}\lim_{\mathbf{x}\to\mathbf{x}_{0}}f(\mathbf{x})=l\), if for each \(\varepsilon>0\),
\[\lim_{r\downarrow 0}\frac{\lambda\left(\mathcal{B}_{r}(\mathbf{x}_{0})\cap\{ \mathbf{x}\in E:\;|f(\mathbf{x})-l|\geq\varepsilon\}\right)}{\lambda(\mathcal{B }_{r}(\mathbf{x}_{0}))}=0,\]
with \(\lambda\) the Lebesgue measure and \(\mathcal{B}_{r}(\mathbf{x}_{0})\) the closed ball with radius \(r>0\) and center \(\mathbf{x}_{0}\).
**Definition 2.6** (Approximate differentiability, [23, Section 6.1.3]).: Consider a measurable set \(E\subset\mathbb{R}^{d}\), a measurable map \(f:E\to\mathbb{R}^{m}\) and a point \(\mathbf{x}_{0}\in E\). The map \(f\) is approximately differentiable at \(\mathbf{x}_{0}\) if there exists a linear map \(\mathrm{D}_{\mathbf{x}}:\mathbb{R}^{d}\to\mathbb{R}^{m}\) such that
\[\mathrm{ap}\lim_{\mathbf{x}\to\mathbf{x}_{0}}\frac{f(\mathbf{x})-f(\mathbf{x}_{0})-\mathrm{D}_{\mathbf{x}}(\mathbf{x}-\mathbf{x}_{0})}{|\mathbf{x}-\mathbf{x}_{0}|}=0.\]
Then \(\mathrm{D}_{\mathbf{x}}\) is called the approximate differential of \(f\) at \(\mathbf{x}_{0}\).
**Remark 2.7**.: The usual rules from classical derivatives, such as the uniqueness of the differential, and differentiability of sums, products and quotients, apply to approximately differentiable functions. Moreover, the chain rule applies to compositions \(\varphi\circ f\) when \(f\) is approximately differentiable at \(\mathbf{x}_{0}\) and \(\varphi\) is classically differentiable at \(f(\mathbf{x}_{0})\).
**Remark 2.8** ([23, Theorem 4, Section 6.1.3]).: For \(f\in\mathcal{W}^{1,p}_{\mathrm{loc}}\left(\mathbb{R}^{d}\right)\) and \(1\leq p\leq\infty\), \(f\) is approximately differentiable almost everywhere and its approximate derivative equals its weak derivative almost everywhere. We will thus use the operator \(\mathrm{D}_{\mathbf{x}}\) to denote the weak derivative and approximate derivative interchangeably, to distinguish them from the classical derivative denoted by \(\nabla\).
**Lemma 2.9**.: _Let \(E\subset\mathbb{R}^{d}\) be a measurable set with finite measure, \(X:\Omega\to E\) a continuous random variable on some probability space \((\Omega,\mathcal{F},\mathbb{P})\), \(\varphi\in\mathcal{C}^{1}(\mathbb{R}^{m})\), \(\Phi_{\mathrm{ap}}:E\to\mathbb{R}^{m}\) an approximately differentiable function, and \(\Phi\in\mathcal{C}^{1}(\mathbb{R}^{d};\mathbb{R}^{m})\) its continuously differentiable extension to \(\mathbb{R}^{d}\). Then \(\mathbb{E}[\varphi(\mathrm{D}_{x}\Phi_{\mathrm{ap}}(X))]=\mathbb{E}[\varphi( \nabla_{x}\Phi(X))]\)._
Proof.: By [24, Theorem 3.1.6] a function \(\Phi_{\mathrm{ap}}:E\to\mathbb{R}^{m}\) is approximately differentiable almost everywhere if for every \(\varepsilon>0\) there is a compact set \(F\subset E\) such that the Lebesgue measure \(\lambda(E\backslash F)<\varepsilon\) and \(\left.\Phi_{\mathrm{ap}}\right|_{F}\) is \(\mathcal{C}^{1}\). Since \(\varphi\) is everywhere differentiable, it maps null-sets to null-sets [58, Lemma 7.25]. The claim follows since \(\mathbb{P}\) is absolutely continuous with respect to the Lebesgue measure \(\lambda\), \(X\) being a continuous random variable.
**Corollary 2.10**.: _Let \(E\subset\mathbb{R}^{d}\) be a measurable set with finite measure, \(X:\Omega\to E\) a continuous random variable on some probability space \((\Omega,\mathcal{F},\mathbb{P})\), \(\varphi\in\mathcal{C}^{1}(\mathbb{R}^{m})\), \(\Phi:E\to\mathbb{R}^{m}\) an approximately differentiable function, and \(\Psi\in\mathcal{W}^{1,p}(E,\mathbb{R}^{m})\) for \(p\geq 1\) such that \(\Phi=\Psi\) almost everywhere. Then \(\mathbb{E}[\varphi(\mathrm{D}_{\mathbf{x}}\Phi(X))]=\mathbb{E}[\varphi( \mathrm{D}_{\mathbf{x}}\Psi(X))]\)._
Proof.: This is a direct consequence of Lemma 2.9, after noting that the two notions of derivatives are the same on \(\mathcal{W}^{1,p}(E,\mathbb{R}^{m})\) (see Remark 2.8).
From a practical perspective, the second-order derivative of the network with respect to the input will be zero for all intents and purposes. However, as will become apparent in Lemma 5.6, we need to investigate it further, in particular on the measure-zero set of points where the ReLu-RWNN is not differentiable. Rewriting the diagonal operator in terms of the natural basis \(\{e_{j}\}\) and evaluating the function \(\boldsymbol{H}\) component-wise yields
\[\nabla_{\mathbf{x}}\Psi_{K}(\mathbf{x};\Theta)=\Theta\left(\sum_{j=1}^{K}e_{j }e_{j}^{\top}H\left(e_{j}^{\top}\mathrm{A}\mathbf{x}+b_{j}\right)\right) \mathrm{A}.\]
The \(i\)-th component of the second derivative is thus
\[\left[\nabla_{\mathbf{x}}^{2}\Psi_{K}(\mathbf{x};\Theta)\right]_{i}=\Theta \left(\sum_{j=1}^{K}e_{j}e_{j}^{\top}a_{ji}H^{\prime}\left(e_{j}^{\top} \mathrm{A}\mathbf{x}+b_{j}\right)\right)\mathrm{A}=\Theta\operatorname{diag} \left(a_{i}\right)\operatorname{diag}\left(\boldsymbol{H}^{\prime}(\mathrm{A} \mathbf{x}+\mathrm{b})\right)\mathrm{A},\]
where \(a_{i}\) denotes the \(i\)-th column of the matrix \(\mathrm{A}\). Next, we let
\[\delta_{0}^{\varepsilon}(x)\coloneqq\frac{H(x)-H(x-\varepsilon)}{\varepsilon}\]
for \(x\in\mathbb{R}\) and define the left derivative of \(H\) as \(H^{\prime}=\lim_{\varepsilon\downarrow 0}\delta_{0}^{\varepsilon}=\delta_{0}\) in the distributional sense. This finally gives the second derivative of the network:
\[\left[\nabla_{\mathbf{x}}^{2}\Psi_{K}(\mathbf{x};\Theta)\right]_{i}=\Theta \operatorname{diag}\left(a_{i}\right)\operatorname{diag}\left(\boldsymbol{ \delta}_{0}(\mathrm{A}\mathbf{x}+\mathrm{b})\right)\mathrm{A},\]
where \(\boldsymbol{\delta}_{0}\) denotes the vector function applying \(\delta_{0}\) component-wise.
### Randomised least squares (RLS)
Let \(Y\in\mathbb{R}^{d}\) and \(X\in\mathbb{R}^{k}\) be random variables defined on some probability space \((\Omega,\mathcal{F},\mathbb{P})\) and let \(\boldsymbol{\beta}\in\mathbb{R}^{d\times k}\) be a deterministic coefficient matrix. With the mean square error (MSE) as loss function, we can derive the estimator for the randomised least squares:
\[\nabla_{\boldsymbol{\beta}}\mathbb{E}\left[\|Y-\boldsymbol{\beta}X \|^{2}\right] =\nabla_{\boldsymbol{\beta}}\mathbb{E}[(Y-\boldsymbol{\beta}X)^{ \top}(Y-\boldsymbol{\beta}X)]\] \[=\mathbb{E}\left[\nabla_{\boldsymbol{\beta}}(Y^{\top}Y-Y^{\top} \boldsymbol{\beta}X-X^{\top}\boldsymbol{\beta}^{\top}Y+X^{\top}\boldsymbol{ \beta}^{\top}\boldsymbol{\beta}X)\right]\] \[=\mathbb{E}\left[2\boldsymbol{\beta}XX^{\top}-2YX^{\top}\right],\]
which gives the minimiser1\(\boldsymbol{\beta}=\mathbb{E}\left[YX^{\top}\right]\mathbb{E}\left[XX^{\top} \right]^{-1}\), and its estimator
Footnote 1: The matrix \(\mathbb{E}[XX^{\top}]\) may not be invertible, but its generalised Moore-Penrose inverse always exists.
\[\widehat{\boldsymbol{\beta}}\coloneqq\left(\sum_{j=1}^{n}Y_{j}X_{j}^{\top} \right)\left(\sum_{j=1}^{n}X_{j}X_{j}^{\top}\right)^{-1}.\]
Depending on the realisation of the reservoir of the RWNN, the covariates in \(X\) may be collinear, so that the empirical Gram matrix is close to rank-deficient. A standard remedy is to use the Ridge-regularised version [38] of the estimator
\[\widehat{\boldsymbol{\beta}}_{R}=\left(\sum_{j=1}^{n}Y_{j}X_{j}^{\top}\right) \left(\sum_{j=1}^{n}X_{j}X_{j}^{\top}+\lambda I\right)^{-1},\quad\text{for }\lambda>0,\]
which results in a superior, more robust performance in our experiments.
**Remark 2.11**.: The above derivation holds true for the approximate derivative \(\mathrm{D}_{x}\) as well because all operations above hold for approximately differentiable functions (Remark 2.7).
**Remark 2.12**.: One of the advantages of RLS estimators is the ability to vectorise them over the sampling dimension (i.e. over \(n\) samples) in order to use tensor functionalities of packages such as NumPy and PyTorch to efficiently evaluate the sum of outer products using the einsum function. Details are provided in the code available at ZuricZ/RWNN_PDE_solver.
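For illustration, a minimal NumPy version of this vectorised Ridge-regularised estimator might read as follows; the function name and the default regularisation level are our own choices, and the repository linked above remains the reference implementation.

```python
import numpy as np

def ridge_rls(Y, X, lam=1e-8):
    """Ridge-regularised RLS estimator over n samples, vectorised via einsum.

    Y: (n, m) targets, X: (n, K) features; returns beta_hat in R^{m x K},
    the sample version of E[Y X^T] (E[X X^T] + lam I)^{-1} from Section 2.2.
    """
    YX = np.einsum('nm,nk->mk', Y, X)      # sum of outer products Y_j X_j^T
    XX = np.einsum('nk,nl->kl', X, X)      # sum of outer products X_j X_j^T
    G = XX + lam * np.eye(X.shape[1])
    return np.linalg.solve(G, YX.T).T      # = YX @ G^{-1}, G being symmetric
```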
## 3. The Markovian case
Let the process \(\mathbf{X}\) of the traded and non-traded components of the underlying under the risk-neutral measure \(\mathbb{Q}\) be given by the following \(d\)-dimensional SDE:
\[\mathbf{X}_{s}^{t,\mathbf{x}}=\mathbf{x}+\int_{t}^{s}\mu(r,\mathbf{X}_{r}^{t, \mathbf{x}})\mathrm{d}r+\int_{t}^{s}\Sigma\left(r,\mathbf{X}_{r}^{t,\mathbf{x }}\right)\mathrm{d}W_{r}, \tag{3.1}\]
where \(\mu:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d}\) and \(\Sigma:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\) adhere to Assumption A.1, and \(W\) is a standard \(d\)-dimensional Brownian motion on the probability space \((\Omega,\mathcal{F},\mathbb{Q})\) equipped with the natural filtration \(\mathbb{F}=\{\mathcal{F}_{t}\}_{0\leq t\leq T}\) of \(W\). By the Feynman-Kac formula, option prices that admit the representation
\[u(t,\mathbf{x})=\mathbb{E}\left[\int_{t}^{T}\mathrm{e}^{-r(s-t)}f\left(s, \mathbf{X}_{s}^{t,\mathbf{x}}\right)\mathrm{d}s+\mathrm{e}^{-r(T-t)}g\left( \mathbf{X}_{T}^{t,\mathbf{x}}\right)\right]\quad\text{ for all }(t,\mathbf{x})\in[0,T]\times \mathcal{A},\]
for \(\mathcal{A}\subset\mathbb{R}^{d}\), with interest rate \(r\geq 0\) and continuous functions \(f:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) and \(g:\mathbb{R}^{d}\to\mathbb{R}\), can be viewed as solutions to the Cauchy linear parabolic PDE
\[\left\{\begin{array}{rl}\partial_{t}u+\mathcal{L}u+f-ru=0,&\text{ on }[0,T)\times\mathcal{A},\\ u(T,\cdot)=g,&\text{ on }\mathcal{A},\end{array}\right.\]
where
\[\mathcal{L}u\coloneqq\frac{1}{2}\operatorname{Tr}\left(\Sigma\Sigma^{\top} \nabla_{\mathbf{x}}^{2}u\right)+(\nabla_{\mathbf{x}}u)\mu,\qquad\text{on }[0,T)\times\mathcal{A}, \tag{3.2}\]
is the infinitesimal generator associated with diffusion (3.1). In this Markovian setting, we thus adopt a set-up similar to [42] and consider a slightly more general PDE
\[\left\{\begin{array}{rl}\partial_{t}u(t,\mathbf{x})+\mathcal{L}u(t,\mathbf{ x})+f\Big{(}t,\mathbf{x},u(t,\mathbf{x}),\nabla_{\mathbf{x}}u(t,\mathbf{x}) \cdot\Sigma(t,\mathbf{x})\Big{)}&=0,\qquad\text{on }[0,T)\times\mathcal{A},\\ u(T,\mathbf{x})&=g(\mathbf{x}),&\text{on }\mathcal{A},\end{array}\right. \tag{3.3}\]
with \(f:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R}\) such that Assumption A.1 is satisfied, which guarantees the existence and uniqueness of the solution to the corresponding backward stochastic differential equation (BSDE) [54, Section 4]. The corresponding second-order generator is again given by (3.2).
The following assumption is only required to cast the problem into a regression. Otherwise, the optimisation (below) can still be solved using other methods, such as stochastic gradient descent. Another solution would be to use the so-called splitting method to linearise the PDE (as in [6] and the references therein for example).
**Assumption 3.1**.: The function \(f:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R }^{d}\to\mathbb{R}\) has an affine structure in \(\mathbf{y}\in\mathbb{R}^{m}\) and in \(\mathbf{z}^{1},\mathbf{z}^{2}\in\mathbb{R}^{d}\):
\[f\left(t,\mathbf{x},\mathbf{y},\mathbf{z}^{1},\mathbf{z}^{2}\right)=a(t, \mathbf{x})\mathbf{y}+b(t,\mathbf{x})\mathbf{z}^{1}+c(t,\mathbf{x})\mathbf{z} ^{2}+\widetilde{f}(t,\mathbf{x}),\]
for some real-valued functions \(a,b,c,\widetilde{f}\) on \([0,T]\times\mathbb{R}^{d}\) that map to conformable dimensions. For instance, the pure pricing driver \(f(t,\mathbf{x},\mathbf{y},\mathbf{z}^{1},\mathbf{z}^{2})=-r\mathbf{y}\) satisfies this assumption with \(a\equiv-r\) and \(b\equiv c\equiv\widetilde{f}\equiv 0\).
### Random weighted neural network scheme
The first step in so-called deep BSDE schemes [19, 36, 42] is to establish the BSDE associated with the PDE (3.3) and the process (3.1) through the nonlinear Feynman-Kac formula. By [54] there exist \(\mathbb{F}\)-adapted processes \((Y,Z)\), which are unique solutions to the BSDE
\[Y_{t}=g\left(\mathbf{X}_{T}\right)+\int_{t}^{T}f\left(s,\mathbf{X}_{s},Y_{s},Z _{s}\right)\mathrm{d}s-\int_{t}^{T}Z_{s}\mathrm{d}W_{s},\qquad\text{for any }t\in[0,T], \tag{3.4}\]
and which are connected to the PDE (3.3) via
\[Y_{t}=u(t,\mathbf{X}_{t})\qquad\text{and}\qquad Z_{t}=\nabla_{x}u(t,\mathbf{ X}_{t})\cdot\Sigma(t,\mathbf{X}_{t}).\]
with terminal condition \(u(T,\cdot)=g\). Next, the BSDE (3.4) is rewritten in forward form
\[Y_{t}=Y_{0}-\int_{0}^{t}f\left(s,\mathbf{X}_{s},Y_{s},Z_{s}\right)\mathrm{d}s +\int_{0}^{t}Z_{s}\mathrm{d}W_{s},\qquad\text{for any }t\in[0,T],\]
and both processes are discretised according to the Euler-Maruyama scheme. To this end let \(\pi\coloneqq\{0=t_{0}<t_{1}<\ldots<t_{N}=T\}\) be a partition of the time interval \([0,T]\) with modulus \(|\pi|=\max_{i\in\{0,1,\ldots,N-1\}}\delta_{i}\) and \(\delta_{i}\coloneqq t_{i+1}-t_{i}\). Then the scheme is given by
\[\left\{\begin{array}{rl}\mathbf{X}_{t_{i+1}}&=\mathbf{X}_{t_{i}}+\mu(t_{i}, \mathbf{X}_{t_{i}})\delta_{i}+\Sigma(t_{i},\mathbf{X}_{t_{i}})\Delta_{i}^{W}, \\ Y_{t_{i+1}}&=Y_{t_{i}}-f\left(t_{i},\mathbf{X}_{t_{i}},Y_{t_{i}},Z_{t_{i}} \right)\delta_{i}+Z_{t_{i}}\Delta_{i}^{W},\end{array}\right. \tag{3.5}\]
where naturally \(\Delta_{i}^{W}\coloneqq W_{t_{i+1}}-W_{t_{i}}\). Then for all \(i\in\{N-1,\ldots,0\}\) we approximate \(u(t_{i},\cdot)\) with \(\mathfrak{U}_{i}(\cdot;\Theta^{i})\in\aleph_{K}^{\varsigma}\) and \(Z_{t_{i}}\) as
\[\begin{array}{llll}u(t_{i},\mathbf{X}_{t_{i}})=Y_{t_{i}}&\approx\mathfrak{U} _{i}(\mathbf{X}_{t_{i}};\Theta^{i})&\coloneqq\Theta^{i}\Phi_{K}^{i}(\mathbf{X} _{t_{i}}),\\ Z_{t_{i}}&\approx\mathfrak{J}_{i}(\mathbf{X}_{t_{i}})&\coloneqq\mathrm{D}_{ \mathbf{x}}\mathfrak{U}_{i}(\mathbf{X}_{t_{i}};\Theta^{i})\cdot\Sigma(t_{i}, \mathbf{X}_{t_{i}})=\Theta^{i}\mathrm{D}_{\mathbf{x}}\Phi_{K}^{i}(\mathbf{X}_ {t_{i}})\cdot\Sigma(t_{i},\mathbf{X}_{t_{i}}).\end{array}\]
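Both approximations are computed along simulated paths of (3.5); a possible NumPy sketch of this path generation is given below, where the batched callables `mu` and `Sigma` are our own interface convention.

```python
import numpy as np

def euler_maruyama(x0, mu, Sigma, T, N, n, seed=0):
    """Generate n Euler-Maruyama paths of the d-dimensional SDE (3.1)
    on the uniform grid t_i = i T / N.

    mu(t, x): (n, d) drift; Sigma(t, x): (n, d, d) diffusion matrices.
    """
    rng = np.random.default_rng(seed)
    dt = T / N
    d = x0.shape[-1]
    X = np.empty((N + 1, n, d))
    X[0] = x0                                       # broadcast initial state
    for i in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n, d))
        X[i + 1] = (X[i] + mu(i * dt, X[i]) * dt
                    + np.einsum('njk,nk->nj', Sigma(i * dt, X[i]), dW))
    return X
```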
Recall that the derivative \(\mathfrak{J}_{i}(\mathbf{X}_{t_{i}})\) is the approximate derivative from Definition 2.6. The following formulation of the loss function \(\ell\) using the approximate derivative is sensible by Lemma 2.9: notice that for the optimal parameter \(\Theta^{i+1,*}\) in step \((i+1)\), the optimal approximation \(\widehat{\mathfrak{U}}_{i+1}(\mathbf{X}_{t_{i+1}})\coloneqq\mathfrak{U}_{i+1} (\mathbf{X}_{t_{i+1}};\Theta^{i+1,*})\) does not depend on \(\Theta^{i}\), hence under Assumption 3.1 with \(c=0\) the loss function at the \(i\)-th discretisation step reads
\[\begin{array}{l}\ell(\Theta^{i})\coloneqq\mathbb{E}^{\Phi}\left[\left\| \widehat{\mathfrak{U}}_{i+1}(\mathbf{X}_{t_{i+1}})-\left[\mathfrak{U}_{i}( \mathbf{X}_{t_{i}};\Theta^{i})-f\Big{(}t_{i},\mathbf{X}_{t_{i}},\mathfrak{U}_{ i}(\mathbf{X}_{t_{i}};\Theta^{i}),\mathfrak{J}_{i}(\mathbf{X}_{t_{i}};\Theta^{i}) \Big{)}\delta_{i}+\mathfrak{J}_{i}(\mathbf{X}_{t_{i}};\Theta^{i})\Delta_{i}^{W }\right]\right\|^{2}\right]\\ \phantom{\ell(\Theta^{i})}=\mathbb{E}^{\Phi}\left[\left\|\widehat{\mathfrak{U }}_{i+1}(\mathbf{X}_{t_{i+1}})-\left[(\mathfrak{U}_{i}(\mathbf{X}_{t_{i}}; \Theta^{i})-\left(a_{i}\mathfrak{U}_{i}(\mathbf{X}_{t_{i}};\Theta^{i})+b_{i} \mathfrak{J}_{i}(\mathbf{X}_{t_{i}};\Theta^{i})+\widetilde{f}_{i}\right)\delta _{i}+\mathfrak{J}_{i}(\mathbf{X}_{t_{i}};\Theta^{i})\Delta_{i}^{W}\right] \right\|^{2}\right]\\ \phantom{\ell(\Theta^{i})}=\mathbb{E}^{\Phi}\left[\left\|\widehat{\mathfrak{U }}_{i+1}(\mathbf{X}_{t_{i+1}})+\widetilde{f}_{i}\delta_{i}-\Theta^{i}\Big{\{}( 1-a_{i}\delta_{i})\Phi_{K}^{i}(\mathbf{X}_{t_{i}})+\mathrm{D}_{x}\Phi_{K}^{i}( \mathbf{X}_{t_{i}})\Sigma_{i}\left(b_{i}\delta_{i}+\Delta_{i}^{W}\right) \Big{\}}\right\|^{2}\right]\\ \phantom{\ell(\Theta^{i})}=\mathbb{E}^{\Phi}\left[\left\|\mathbf{Y}^{i}-\Theta ^{i}\mathrm{X}^{i}\right\|^{2}\right]\end{array}\]
where \(p_{i}\coloneqq p(t_{i},\mathbf{X}_{t_{i}})\) for \(p\in\{a,b,\widetilde{f},\Sigma\}\), and the expectation \(\mathbb{E}^{\Phi}\) is of course conditional on the realisation of the random basis \(\Phi_{K}^{i}\), i.e., conditional on the random weights and biases of the RWNN. Furthermore, we used the notations
\[\mathrm{Y}^{i}\coloneqq\widehat{\mathfrak{U}}_{i+1}(\mathbf{X}_{t_{i+1}})+ \widetilde{f}_{i}\delta_{i}\qquad\text{and}\qquad\mathrm{X}^{i}\coloneqq(1-a_ {i}\delta_{i})\Phi_{K}(\mathbf{X}_{t_{i}})+\mathrm{D}_{\mathbf{x}}\Phi_{K}( \mathbf{X}_{t_{i}})\cdot\Sigma_{i}\left(b_{i}\delta_{i}+\Delta_{i}^{W}\right).\]
The problem can now be solved via least squares from Section 2.2, yielding the estimator
\[\Theta^{i,*}=\mathbb{E}^{\Phi}\left[\mathrm{Y}^{i}\mathrm{X}^{i\top}\right] \mathbb{E}^{\Phi}\left[\mathrm{X}^{i}\mathrm{X}^{i\top}\right]^{-1}.\]
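Putting the pieces together, one backward step of the scheme could be implemented as below. This sketch is ours and assumes a scalar solution (\(m=1\)), \(c=0\) as above, frozen scalar coefficients \(a,\widetilde{f}\) (with \(b\) scalar or \(\mathbb{R}^{d}\)-valued) at the given time step, and a ReLu reservoir \((\mathrm{A},\mathrm{b})\) as in Section 2.

```python
import numpy as np

def backward_step(U_next, X_now, dW, dt, a, b, f_tilde, A, bias, Sigma_now,
                  lam=1e-8):
    """One backward regression step of the Markovian RWNN scheme (m = 1, c = 0).

    U_next: (n,) values of the previously fitted network at X_{t_{i+1}};
    X_now: (n, d) states at t_i; dW: (n, d) Brownian increments;
    A: (K, d), bias: (K,) frozen reservoir; Sigma_now: (n, d, d) diffusion.
    Returns the fitted read-out Theta^i of shape (K,).
    """
    pre = X_now @ A.T + bias                     # (n, K) pre-activations
    feats = np.maximum(pre, 0.0)                 # Phi_K(X_{t_i})
    H = (pre > 0.0).astype(float)                # a.e. ReLu derivative
    DPhi = H[:, :, None] * A[None, :, :]         # (n, K, d): diag(H) A per path
    drive = np.einsum('nde,ne->nd', Sigma_now, b * dt + dW)  # Sigma (b dt + dW)
    Xi = (1.0 - a * dt) * feats + np.einsum('nkd,nd->nk', DPhi, drive)
    Yi = U_next + f_tilde * dt                   # regression targets Y^i
    G = Xi.T @ Xi + lam * np.eye(Xi.shape[1])    # ridge-regularised Gram matrix
    return np.linalg.solve(G, Xi.T @ Yi)         # Theta^i
```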
### Algorithm
We now summarise the algorithmic procedure of our RWNN scheme. Note how, once the sample-estimator version of the RLS from Section 2.2 is plugged in, the algorithm resembles the Least-Squares Monte-Carlo method of [49]:
```
Input: time grid \(\pi=\{0=t_{0}<t_{1}<\ldots<t_{N}=T\}\); number \(K\in\mathbb{N}\) of hidden nodes; \(R>0\)
Initialisation: reservoirs \(\Phi_{K}^{i}\), \(i\in\{0,\ldots,N-1\}\), with weights and biases distributed as \(\mathcal{U}_{[-R,R]}\)
do:
    Generate \(n\in\mathbb{N}\) paths of \(\{\mathbf{X}_{t_{i}}^{\pi,j}\}_{i=0}^{N}\), \(j\in\{1,\ldots,n\}\), with the Euler-Maruyama scheme (3.5)
    Set \(\widehat{\mathfrak{U}}_{N}(\mathbf{x})=g(\mathbf{x})\) for all \(\mathbf{x}\in\mathbb{R}^{d}\)
    for \(i\in\{N-1,\ldots,0\}\) do
        Approximate \(u(t_{i},\cdot)\) with \(\mathfrak{U}_{i}(\cdot;\Theta^{i})\in\aleph_{K}^{\varsigma}\) based on reservoir \(\Phi_{K}^{i}\)
        Evaluate the derivative of \(\mathfrak{U}_{i}(\cdot;\Theta^{i})\) according to (2.1)
        Solve the regression problem (possibly using the Ridge estimator, see Section 2.2)
            \(\Theta^{i,*}=\operatorname{arg\,min}_{\Theta^{i}}\ell(\Theta^{i})=\operatorname{arg\,min}_{\Theta^{i}}\mathbb{E}^{\Phi,n}\left[\left\|\mathrm{Y}^{i}-\Theta^{i}\mathrm{X}^{i}\right\|^{2}\right]\)
        where \(\mathrm{Y}^{i}\coloneqq\widehat{\mathfrak{U}}_{i+1}(\mathbf{X}_{t_{i+1}})+\widetilde{f}_{i}\delta_{i}\) and \(\mathrm{X}^{i}\coloneqq(1-a_{i}\delta_{i})\Phi_{K}^{i}(\mathbf{X}_{t_{i}})+(\mathrm{D}_{\mathbf{x}}\Phi_{K}^{i}(\mathbf{X}_{t_{i}}))\Sigma_{i}\left(b_{i}\delta_{i}+\Delta_{i}^{W}\right)\),
        and \(\mathbb{E}^{\Phi,n}\) is evaluated over the empirical measure of \(\{\mathbf{X}_{t_{i}}^{\pi,j}\}_{i=0}^{N}\), \(j\in\{1,\ldots,n\}\)
        Update \(\widehat{\mathfrak{U}}_{i}=\mathfrak{U}_{i}\left(\cdot;\Theta^{i,*}\right)\)
    end for
return \(\mathfrak{U}=\{\mathfrak{U}_{i}(\cdot;\Theta^{i,*})\}_{i=0}^{N}\)
```
**Algorithm 1** RWNN scheme
## 4. The non-Markovian case
We now consider a stochastic volatility model under a risk-neutral measure so that \(\mathbf{X}=(X,V)\), where the dynamics of log-price process \(X\) are given by
\[\mathrm{d}X_{s}^{t,x}=\bigg{(}r-\frac{V_{s}}{2}\bigg{)}\,\mathrm{d}s+\sqrt{V_{s }}\Big{(}\rho_{1}\mathrm{d}W_{s}^{1}+\rho_{2}\mathrm{d}W_{s}^{2}\Big{)},\qquad 0 \leq t\leq s\leq T, \tag{4.1}\]
starting from \(X_{t}^{t,x}=x\in\mathbb{R}\), with interest rate \(r\in\mathbb{R}\), correlation \(\rho_{1}\in[-1,1]\) and \(\rho_{2}\coloneqq\sqrt{1-\rho_{1}^{2}}\), where \(W^{1},W^{2}\) are two independent Brownian motions. We allow for a general variance process \(V\), satisfying the following:
**Assumption 4.1**.: The process \(V\) has continuous trajectories, is non-negative, adapted to the natural filtration of \(W^{1}\) and integrable, i.e. \(\mathbb{E}\left[\int_{0}^{t}V_{s}\mathrm{d}s\right]<\infty\), for any \(t\geq 0\).
By no-arbitrage, the fair price of a European option with payoff \(h:\mathbb{R}^{+}\to\mathbb{R}^{+}\) reads
\[u(t,x)\coloneqq\mathbb{E}\left[\mathrm{e}^{-r(T-t)}h\left(\mathrm{e}^{X_{T}^{t,x}+rT}\right)\middle|\mathcal{F}_{t}\right],\quad\text{for all }(t,x)\in[0,T]\times\mathbb{R},\]
subject to (4.1). Since \(\mathbf{X}\) is not Markovian, one cannot characterise the value function \(u(t,x)\) via a deterministic PDE. Bayer, Qiu and Yao [5] proved that \(u\) can be viewed as a random field which, together with another random field \(\psi\), satisfies the backward
stochastic partial differential equation (BSPDE)
\[-{\rm d}u(t,x)=\left[\frac{V_{t}}{2}\partial_{x}^{2}u(t,x)+\rho\sqrt{V_{t}} \partial_{x}\psi(t,x)-\frac{V_{t}}{2}\partial_{x}u(t,x)-ru(t,x)\right]{\rm d}t- \psi(t,x){\rm d}W_{t}^{1}, \tag{4.2}\]
in a distributional sense, for \((t,x)\in[0,T]\times\mathbb{R}\), with boundary condition \(u(T,x)=h\left(\mathrm{e}^{x+rT}\right)\), where the variance process \(\{V_{t}\}_{t\geq 0}\) is defined exogenously under Assumption 4.1. We in fact consider the slightly more general BSPDEs
\[\left\{\begin{array}{rl}-{\rm d}u(t,x)&=\left\{\frac{V_{t}}{2} \mathrm{D}_{x}^{2}u(t,x)+\rho\sqrt{V_{t}}\mathrm{D}_{x}\psi(t,x)-\frac{V_{t}}{2 }\mathrm{D}_{x}u(t,x)\right.\\ &\left.+f\left(t,{\rm e}^{x},u(t,x),\rho_{2}\sqrt{V_{t}}\mathrm{D}_{x}u(t,x), \psi(t,x)+\rho_{1}\sqrt{V_{t}}\mathrm{D}_{x}u(t,x)\right)\,\right\}{\rm d}t\\ &-\psi(t,x){\rm d}W_{t}^{1},\quad(t,x)\in[0,T)\times\mathbb{R},\\ u(T,x)&=g\left({\rm e}^{x}\right),\quad x\in\mathbb{R},\end{array}\right. \tag{4.3}\]
for \(g:\mathbb{R}^{+}\to\mathbb{R}\). We present exact conditions on \(f\) and \(g\) ensuring well-posedness in Assumption A.2, but additionally require the existence of a weak-Sobolev solution (see Assumption 5.1 for more details). Note that (4.2) is just a particular case of the general BSPDE (4.3) for the choice \(f(t,x,y,z,\widetilde{z})\equiv-ry\) and \(g({\rm e}^{x})\equiv h({\rm e}^{x+rT})\). Again, this general form is shown to be well-posed in the distributional sense under Assumption A.2 (borrowed from [5]). By [14] the corresponding BSDE is then, for \(0\leq t\leq s\leq T\),
\[\left\{\begin{aligned} -{\rm d}Y_{s}^{t,x}&=f\left(s,{ \rm e}^{X_{s}^{t,x}},Y_{s}^{t,x},{Z_{s}^{1}}^{t,x},{Z_{s}^{2}}^{t,x}\right){ \rm d}s-{Z_{s}^{1}}^{t,x}{\rm d}W_{s}^{1}-{Z_{s}^{2}}^{t,x}{\rm d}W_{s}^{2},\\ Y_{T}^{t,x}&=g\left({\rm e}^{X_{T}^{t,x}}\right), \end{aligned}\right. \tag{4.4}\]
where \((Y_{s}^{t,x},{Z_{s}^{1}}^{t,x},{Z_{s}^{2}}^{t,x})\) is defined as the solution to BSDE (4.4) in the weak sense.
### Random neural network scheme
Let the quadruple \(\left(X_{s},Y_{s},Z_{s}^{1},Z_{s}^{2}\right)\) be the solution to the forward BSDE (FBSDE)
\[\left\{\begin{aligned} -{\rm d}Y_{s}&=f\left(s,{\rm e}^{X_{s}},Y_{s},Z_{s}^{1},Z_{s}^{2}\right){\rm d}s-{Z_{s}^{1}}{\rm d}W_{s}^{1}-{Z_{s}^{2}}{\rm d}W_{s}^{2},\\ {\rm d}X_{s}&=-\frac{V_{s}}{2}{\rm d}s+\sqrt{V_{s}}\left(\rho_{1}{\rm d}W_{s}^{1}+\rho_{2}{\rm d}W_{s}^{2}\right),\\ V_{s}&=\xi_{s}\mathcal{E}\left(\eta\widehat{W}_{s}\right),\quad\text{ with }\quad\widehat{W}_{s}=\int_{0}^{s}\mathcal{K}(s,r){\rm d}W_{r}^{1},\end{aligned}\right. \tag{4.5}\]
for \(s\in[0,T]\), with terminal condition \(Y_{T}=g\left({\rm e}^{X_{T}}\right)\), initial condition \(X_{0}=x\) and \(\mathcal{K}\) a locally square-integrable kernel. For notational convenience below, we use \(\rho_{2}\coloneqq\sqrt{1-\rho_{1}^{2}}\), with \(\rho_{1}\in[-1,1]\). Here \(\mathcal{E}(\cdot)\) denotes the Wick stochastic exponential and is defined as \(\mathcal{E}(\zeta)\coloneqq\exp\left(\zeta-\frac{1}{2}\mathbb{E}[|\zeta|^{2}]\right)\) for a zero-mean Gaussian variable \(\zeta\). Then by [5, Theorem 2.4],
\[Y_{t} =u\left(t,X_{t}\right), \text{ for }t\in[0,T],\] \[Z_{t}^{1} =\psi\left(t,X_{t}\right)+\rho_{1}\sqrt{V_{t}}\mathrm{D}_{x}u \left(t,X_{t}\right), \text{ for }t\in[0,T),\] \[Z_{t}^{2} =\rho_{2}\sqrt{V_{t}}\mathrm{D}_{x}u\left(t,X_{t}\right), \text{ for }t\in[0,T),\]
where \((u,\psi)\) is the unique weak solution to (4.4). Accordingly, the forward equation reads
\[Y_{t}=Y_{0}-\int_{0}^{t}f\left(s,{\rm e}^{X_{s}},Y_{s},Z_{s}^{1},Z_{s}^{2} \right){\rm d}s+\int_{0}^{t}Z_{s}^{1}{\rm d}W_{s}^{1}+\int_{0}^{t}Z_{s}^{2}{ \rm d}W_{s}^{2},\qquad\text{ for }t\in[0,T].\]
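The only non-standard simulation ingredient is the pair \((W^{1},V)\). A simple, if slow, left-point Riemann-sum sketch is given below; the power-law kernel \(\mathcal{K}(s,r)=\sqrt{2H}(s-r)^{H-1/2}\) (for which \(\mathbb{E}[|\widehat{W}_{t}|^{2}]=t^{2H}\)) and the constant forward variance \(\xi\) are illustrative assumptions, and finer schemes would reduce the discretisation bias.

```python
import numpy as np

def simulate_variance(xi, eta, H, T, N, n, seed=0):
    """Simulate V_t = xi * E(eta W_hat_t) on a uniform grid, with
    W_hat_s = int_0^s K(s, r) dW^1_r approximated by a left-point
    Riemann sum and K(s, r) = sqrt(2H) (s - r)^(H - 1/2)."""
    rng = np.random.default_rng(seed)
    dt = T / N
    t = np.arange(N + 1) * dt
    dW1 = rng.normal(0.0, np.sqrt(dt), size=(n, N))
    W_hat = np.zeros((n, N + 1))
    for i in range(1, N + 1):
        kernel = np.sqrt(2 * H) * (t[i] - t[:i]) ** (H - 0.5)  # K(t_i, t_j)
        W_hat[:, i] = dW1[:, :i] @ kernel
    # Wick exponential: E(zeta) = exp(zeta - 0.5 Var[zeta]), Var = t^{2H}
    V = xi * np.exp(eta * W_hat - 0.5 * eta ** 2 * t ** (2 * H))
    return V, dW1
```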
By simulating \((W^{1},W^{2},V)\), the forward process \(X\) may be approximated by an Euler scheme (with the same notations as in the Markovian case), and the forward representation above yields the approximation
\[u\left(t_{i+1},X_{t_{i+1}}\right)\approx u(t_{i},X_{t_{i}})-f\left(t_{i},\mathrm{ e}^{X_{t_{i}}},u\left(t_{i},X_{t_{i}}\right),Z^{1}_{t_{i}},Z^{2}_{t_{i}} \right)\delta_{i}+Z^{1}_{t_{i}}\Delta_{i}^{W^{1}}+Z^{2}_{t_{i}}\Delta_{i}^{W^{ 2}},\]
with
\[Z^{1}_{t_{i}}=\rho_{1}\sqrt{V_{t_{i}}}\mathrm{D}_{x}u\left(t_{i},X_{t_{i}} \right)+\psi\left(t_{i},X_{t_{i}}\right)\qquad\text{and}\qquad Z^{2}_{t_{i}}= \rho_{2}\sqrt{V_{t_{i}}}\mathrm{D}_{x}u\left(t_{i},X_{t_{i}}\right).\]
By Lemma B.2 we can, for each time step \(i\in\{0,\ldots,N-1\}\), approximate the solutions \(u(t_{i},\cdot)\) and \(\psi(t_{i},\cdot)\) by two separate networks \(\mathfrak{U}_{i}\) and \(\Psi_{i}\) in \(\aleph_{K}^{\varsigma}\):
\[Y_{t_{i}} \approx\mathfrak{U}_{i}(X_{t_{i}};\Theta^{i})\qquad=\Theta^{i} \Phi^{\Theta,i}_{K}(X_{t_{i}}),\] \[Z^{1}_{t_{i}} \approx\mathfrak{Z}^{1}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})=\Theta ^{i}\left(\mathrm{D}_{x}\Phi^{\Theta,i}_{K}(X_{t_{i}})\right)\rho_{1}\sqrt{V_ {t_{i}}}+\Xi^{i}\Phi^{\Xi,i}_{K}(X_{t_{i}}),\] \[Z^{2}_{t_{i}} \approx\mathfrak{Z}^{2}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})=\Theta ^{i}\left(\mathrm{D}_{x}\Phi^{\Theta,i}_{K}(X_{t_{i}})\right)\rho_{2}\sqrt{V_ {t_{i}}}.\]
Here \(\Phi^{\Xi}_{K}\) and \(\Phi^{\Theta}_{K}\) are realisations of random bases (reservoirs) of the RWNNs with respective parameters \(\Xi\) and \(\Theta\). The next part relies on Assumption 3.1, namely
\[f\left(t_{i},\mathrm{e}^{X_{t_{i}}},Y_{t_{i}},Z^{1}_{t_{i}},Z^{2}_{t_{i}} \right)=a(t_{i},X_{t_{i}})Y_{t_{i}}+b(t_{i},X_{t_{i}})Z^{1}_{t_{i}}+c(t_{i},X_ {t_{i}})Z^{2}_{t_{i}}+\widetilde{f}(t_{i},X_{t_{i}}),\]
for some functions \(a,b,c,\widetilde{f}\) mapping to \(\mathbb{R}\), so that, as in the Markovian case, the minimisation of the expected quadratic loss at every time step \(i\in\{N-1,\ldots,0\}\) reads
\[\ell(\Theta^{i},\Xi^{i})\coloneqq\mathbb{E}^{\Phi}\bigg{[}\bigg{|}\widehat{\mathfrak{U}}_{i+1}(X_{t_{i+1}})-\Big{\{}\mathfrak{U}_{i}(X_{t_{i}};\Theta^{i})-f\big{(}t_{i},\mathrm{e}^{X_{t_{i}}},\mathfrak{U}_{i}(X_{t_{i}};\Theta^{i}),\mathfrak{Z}^{1}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i}),\mathfrak{Z}^{2}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})\big{)}\,\delta_{i}+\sum_{k=1}^{2}\mathfrak{Z}^{k}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})\Delta_{i}^{W^{k}}\Big{\}}\bigg{|}^{2}\bigg{]}\]
\[=\mathbb{E}^{\Phi}\bigg{[}\bigg{|}\widehat{\mathfrak{U}}_{i+1}(X_{t_{i+1}})-\Big{\{}\mathfrak{U}_{i}(X_{t_{i}};\Theta^{i})-\big{(}a\mathfrak{U}_{i}(X_{t_{i}};\Theta^{i})+b\mathfrak{Z}^{1}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})+c\mathfrak{Z}^{2}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})+\widetilde{f}\big{)}\,\delta_{i}+\sum_{k=1}^{2}\mathfrak{Z}^{k}_{i}(X_{t_{i}};\Theta^{i},\Xi^{i})\Delta_{i}^{W^{k}}\Big{\}}\bigg{|}^{2}\bigg{]}\]
\[=\mathbb{E}^{\Phi}\bigg{[}\bigg{|}\widehat{\mathfrak{U}}_{i+1}(X_{t_{i+1}})+\widetilde{f}\delta_{i}-\Big{\{}\Xi^{i}\Phi^{\Xi,i}_{K}(X_{t_{i}})\left(\Delta_{i}^{W^{1}}-b\delta_{i}\right)+\Theta^{i}\Big{(}(1-a\delta_{i})\Phi^{\Theta,i}_{K}(X_{t_{i}})+\mathrm{D}_{x}\Phi^{\Theta,i}_{K}(X_{t_{i}})\sqrt{V_{t_{i}}}\left(\Delta_{i}^{B}-(b\rho_{1}+c\rho_{2})\delta_{i}\right)\Big{)}\Big{\}}\bigg{|}^{2}\bigg{]}\]
\[=\mathbb{E}^{\Phi}\left[\big{|}\mathrm{Y}^{i}-\Xi^{i}\mathrm{X}^{i}_{1}-\Theta^{i}\mathrm{X}^{i}_{2}\big{|}^{2}\right],\]
with \(\Delta_{i}^{B}=(\rho_{1}\Delta_{i}^{W^{1}}+\rho_{2}\Delta_{i}^{W^{2}})\) and where \(\widehat{\mathfrak{U}}_{i+1}(X_{t_{i+1}})\coloneqq\mathfrak{U}_{i+1}(X_{t_{i+1}}; \Theta^{i+1,*})\) was set in the previous time step and is now constant (without dependence on \(\Theta^{i}\)). We defined
\[\left\{\begin{array}{ll}\mathrm{Y}^{i}&\coloneqq\widehat{\mathfrak{U}}_{i+1} (X_{t_{i+1}})+\widetilde{f}\delta_{i},\\ \mathrm{X}_{1}^{i}&\coloneqq\Phi_{K}^{\Xi}(X_{t_{i}})\left(\Delta_{i}^{W^{1}} -b\delta_{i}\right),\\ \mathrm{X}_{2}^{i}&\coloneqq(1-a\delta_{i})\Phi_{K}^{\Theta}(X_{t_{i}})+D_{x }\Phi_{K}^{\Theta}(X_{t_{i}})\sqrt{V_{t_{i}}}\left(\Delta_{i}^{B}-(b\rho_{1}+c \rho_{2})\delta_{i}\right).\end{array}\right. \tag{4.6}\]
In matrix form, this yields \(\ell(\Theta^{i},\Xi^{i})=\mathbb{E}^{\Phi}[\|\mathrm{Y}^{i}-\boldsymbol{ \beta}^{i}\mathrm{X}^{i}\|^{2}]\), with \(\boldsymbol{\beta}^{i}=\left[\Xi^{i},\Theta^{i}\right]\) and \(\mathrm{X}^{i}=\left[\mathrm{X}_{1}^{i},\mathrm{X}_{2}^{i}\right]^{\top}\), for which the RLS from Section 2.2 yields the solution
\[\boldsymbol{\beta}^{i}=\mathbb{E}^{\Phi}\left[\left[\mathrm{Y}^{i}\mathrm{X} _{1}^{i\top}\quad\mathrm{Y}^{i}\mathrm{X}_{2}^{i\top}\right]\right]\mathbb{E} ^{\Phi}\left[\left[\begin{array}{cc}\mathrm{X}_{1}^{i}\mathrm{X}_{1}^{i\top }&\mathrm{X}_{1}^{i}\mathrm{X}_{2}^{i\top}\\ \mathrm{X}_{2}^{i}\mathrm{X}_{1}^{i\top}&\mathrm{X}_{2}^{i}\mathrm{X}_{2}^{i \top}\end{array}\right]\right]^{-1}. \tag{4.7}\]
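In sample form, the block system (4.7) amounts to one regression on the concatenated features; a minimal sketch (Ridge-regularised as in Section 2.2, with our own naming) follows.

```python
import numpy as np

def block_rls(Y, X1, X2, lam=1e-8):
    """Solve the stacked regression (4.7) for beta = [Xi, Theta].

    Y: (n,) targets; X1, X2: (n, K) regressors from (4.6). The two
    reservoirs are concatenated and fitted in a single ridge pass.
    """
    X = np.concatenate([X1, X2], axis=1)        # (n, 2K)
    G = X.T @ X + lam * np.eye(X.shape[1])      # empirical Gram matrix
    beta = np.linalg.solve(G, X.T @ Y)          # (2K,)
    K = X1.shape[1]
    return beta[:K], beta[K:]                   # Xi, Theta
```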
### Algorithm
We summarise the steps of the algorithm below:
_Inputs_: time grid \(\pi=\{0=t_{0}<t_{1}<\ldots<t_{N}=T\}\); number \(K\) of hidden nodes; \(R>0\);
_Initialisation_:
\(\left(\{\Phi_{K}^{\Theta,i}\}_{i=0}^{N-1},\{\Phi_{K}^{\Xi,i}\}_{i=0}^{N-1}\right)\) reservoirs with weights and biases distributed as \(\mathcal{U}_{[-R,R]}\);
**do**:
Generate \(n\) paths of \((\{X_{t_{i}}^{\pi,j}\}_{i=0}^{N},\{V_{t_{i}}^{\pi,j}\}_{i=0}^{N})\) for \(j\in\{1,\ldots,n\}\) with Euler-Maruyama
Set \(\widehat{\mathfrak{U}}_{N}(x)=g(\mathrm{e}^{x})\) for all \(x\in\mathbb{R}\)
**for \(i\in\{N-1,\ldots,0\}\) do**:
Approximate \(u(t_{i},\cdot)\) with \(\mathfrak{U}(\cdot;\Theta^{i})\in\aleph_{K}^{\varsigma}\) based on reservoir \(\Phi_{K}^{\Theta,i}\)
Approximate \(\psi(t_{i},\cdot)\) with \(\Psi(\cdot;\Xi^{i})\in\aleph_{K}^{\varsigma}\) based on reservoir \(\Phi_{K}^{\Xi,i}\)
Evaluate derivatives of \(\left(\mathfrak{U}(\cdot;\Theta^{i}),\Psi(\cdot;\Xi^{i})\right)\) according to (2.1)
Solve the regression problem (possibly using the Ridge estimator from Section 2.2)
\[(\Theta^{i,*},\Xi^{i,*})=\operatorname*{arg\,min}_{\Theta^{i},\Xi^{i}}\ell(\Theta^{i},\Xi^{i})=\operatorname*{arg\,min}_{\Theta^{i},\Xi^{i}}\mathbb{E}^{\Phi,n}\left[\left\|\mathrm{Y}^{i}-\boldsymbol{\beta}^{i}\mathrm{X}^{i}\right\|^{2}\right]\]
with \(\boldsymbol{\beta}^{i}=\left[\Xi^{i},\Theta^{i}\right]\), \(\mathrm{X}^{i}=\left[\mathrm{X}_{1}^{i},\mathrm{X}_{2}^{i}\right]^{\top}\), where \(\mathrm{Y}^{i},\mathrm{X}_{1}^{i},\mathrm{X}_{2}^{i}\) are given in (4.6), and \(\mathbb{E}^{\Phi,n}\) is computed with the empirical measure of \(\left(\{X_{t_{i}}^{\pi,j}\}_{i=0}^{N},\{V_{t_{i}}^{\pi,j}\}_{i=0}^{N}\right)_{j \in\{1,\ldots,n\}}\);
Update \(\widehat{\mathfrak{U}}_{i}=\mathfrak{U}_{i}\left(\cdot;\Theta^{i,*}\right)\) and \(\widehat{\Psi}_{i}=\Psi_{i}\left(\cdot;\Xi^{i,*}\right)\)
**end for**:
**return \(\{\mathfrak{U}_{i}(\cdot;\Theta^{i,*})\}_{i=0}^{N}\).**
## 5. Convergence analysis
In this section, whenever there is any ambiguity, we use the notation \(X^{\pi}\) to denote the discretised version of the solution process of (4.5) over the partition \(\pi=\{0=t_{0}<t_{1}<\ldots<t_{N}=T\}\) of the interval \([0,T]\), with modulus \(|\pi|=\max_{i\in\{0,1,\ldots,N-1\}}\delta_{i}\) and \(\delta_{i}=t_{i+1}-t_{i}\). As mentioned just before Assumption 3.1, the linearity of \(f\) assumed there was only required to cast the optimisation in Algorithm 2 into a regression problem. It plays no role in the forthcoming convergence analysis, and we therefore allow for a more general function \(f\).
**Assumption 5.1**.:
1. There exists a unique weak solution to the BSPDE system (4.3) with \(u,\psi\in\mathcal{W}^{3,2}\);
2. There is an increasing continuous function \(\omega:\mathbb{R}^{+}\to\mathbb{R}^{+}\) with \(\omega(0)=0\) such that \[\mathbb{E}\left[\int_{t_{1}}^{t_{2}}V_{s}\mathrm{d}s\right]+\mathbb{E}\left[ \left|\int_{t_{1}}^{t_{2}}V_{s}\mathrm{d}s\right|^{2}\right]\leq\omega(|t_{2}- t_{1}|),\quad\text{for any }0\leq t_{1}\leq t_{2}\leq T;\]
3. There exists \(L_{f}>0\) such that, for all \((t,x,y,z^{1},z^{2})\) and \((\widetilde{t},\widetilde{x},\widetilde{y},\widetilde{z}^{1},\widetilde{z}^{2})\), \[\left|f\left(t,\mathrm{e}^{x},y,z^{1},z^{2}\right)-f\left(\widetilde{t},\mathrm{e}^{\widetilde{x}},\widetilde{y},\widetilde{z}^{1},\widetilde{z}^{2}\right)\right|\leq L_{f}\left\{\omega(|t-\widetilde{t}|)^{\frac{1}{2}}+|x-\widetilde{x}|+|y-\widetilde{y}|+|z^{1}-\widetilde{z}^{1}|+|z^{2}-\widetilde{z}^{2}|\right\}.\]
**Assumption 5.2**.: The first absolute moment of the discretisation scheme over the partition \(\pi\) for \(\{V_{s}\}_{s\in[0,T]}\) is bounded, namely \(\mathbb{E}\left[\left|V_{t_{i}}^{\pi}\right|\right]<\infty\) for all \(i\in\{0,\dots,N-1\}\).
Under Assumptions 4.1-5.1, Briand, Delyon, Hu, Pardoux and Stoica [14] established that
\[\mathbb{E}\left[\sup_{0\leq t\leq T}|X_{t}|^{2}\right] \leq C\left(1+|x_{0}|^{2}\right), \tag{5.1}\] \[\max_{i\in\{0,\dots,N-1\}}\mathbb{E}\left[\left|X_{t_{i+1}}-X_{t_{i+1}}^{ \pi}\right|^{2}+\sup_{t\in[t_{i},t_{i+1}]}\left|X_{t}-X_{t_{i}}^{\pi}\right|^{ 2}\right] \leq C\omega(|\pi|),\]
for some \(C>0\) independent of \(|\pi|\), and we furthermore have [14]
\[\mathbb{E}\left[\int_{0}^{T}\left|f\left(t,\mathrm{e}^{X_{t}},Y_{t},Z_{t}^{1},Z_{t}^{2}\right)\right|^{2}\mathrm{d}t\right]<\infty, \tag{5.2}\]
as well as the standard \(L^{2}\)-regularity result on \(Y\):
\[\max_{i\in\{0,\dots,N-1\}}\mathbb{E}\left[\sup_{t\in[t_{i},t_{i+1}]}\left|Y_{ t}-Y_{t_{i}}^{\pi}\right|^{2}\right]=\mathcal{O}(|\pi|). \tag{5.3}\]
For \(k\in\{1,2\}\), define the errors
\[\varepsilon^{Z^{k}}(\pi)\coloneqq\mathbb{E}\left[\sum_{i=0}^{N-1}\int_{t_{i}} ^{t_{i+1}}\left|Z_{t}^{k}-\overline{Z}_{t_{i}}^{k}\right|^{2}\mathrm{d}t\right],\quad\text{ with }\quad\overline{Z}_{t_{i}}^{k}\coloneqq\frac{1}{\delta_{i}} \mathbb{E}_{i}\left[\int_{t_{i}}^{t_{i+1}}Z_{t}^{k}\mathrm{d}t\right], \tag{5.4}\]
where \(\mathbb{E}_{i}\) denotes the conditional expectation given \(\mathcal{F}_{t_{i}}\). We furthermore define the auxiliary processes, for \(i\in\{0,\dots,N-1\}\),
\[\widehat{\mathcal{V}}_{t_{i}} \coloneqq\mathbb{E}_{i}\left[\widehat{\mathfrak{U}}_{i+1}\left(X_ {t_{i+1}}^{\pi}\right)\right]+f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}}, \widehat{\mathcal{V}}_{t_{i}},\overline{\widehat{Z}}_{t_{i}}^{1},\overline{ \widehat{Z}}_{t_{i}}^{2}\right)\delta_{i},\] \[\overline{\widehat{Z}}_{t_{i}}^{1} \coloneqq\widehat{\Psi}_{i}(X_{t_{i}}^{\pi})+\frac{1}{\delta_{i}} \mathbb{E}_{i}\left[\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{\pi}\right) \Delta_{i}^{W^{1}}\right],\] \[\overline{\widehat{Z}}_{t_{i}}^{2} \coloneqq\frac{1}{\delta_{i}}\mathbb{E}_{i}\left[\widehat{ \mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{\pi}\right)\Delta_{i}^{W^{2}}\right], \tag{5.5}\]
with \(\widehat{\mathfrak{U}}_{i}(x)\coloneqq\mathfrak{U}_{i}(x;\Theta^{i,*})\) and \(\widehat{\Psi}_{i}(x)\coloneqq\Psi_{i}(x;\Xi^{i,*})\) as before. Observe that \(\widehat{\Psi}_{i+1}\) and \(\widehat{\mathfrak{U}}_{i+1}\) do not depend on \(\Theta^{i}\), because their parameters were fixed at the \((i+1)\)-th time step and are held constant at step \(i\) (see Algorithm 2). Next, notice that \(\widehat{\mathcal{V}}\) is well defined by a fixed-point argument since \(f\) is Lipschitz. By Assumption 5.1(i), there exist functions \(\widehat{v}_{i}\) and \(\overline{\widehat{z}}_{i}^{\,k}\) for which
\[\widehat{\mathcal{V}}_{t_{i}}=\widehat{v}_{i}(X_{t_{i}}^{\pi})\qquad\text{and}\qquad\overline{\widehat{Z}}_{t_{i}}^{k}=\overline{\widehat{z}}_{i}^{\,k}(X_{t_{i}}^{\pi})\quad\text{for }i\in\{0,\dots,N-1\},\;k\in\{1,2\}. \tag{5.6}\]
Then by the martingale representation theorem [17, Theorem 14.5.1], there exist integrable processes \(\widehat{Z}^{1}\) and \(\widehat{Z}^{2}\) such that
\[\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{\pi}\right)=\widehat{\mathcal{V}}_{t_{i}}-f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\widehat{\mathcal{V}}_{t_{i}},\overline{\widehat{Z}}_{t_{i}}^{1},\overline{\widehat{Z}}_{t_{i}}^{2}\right)\delta_{i}+\int_{t_{i}}^{t_{i+1}}\widehat{Z}_{t}^{1}\mathrm{d}W_{t}^{1}+\int_{t_{i}}^{t_{i+1}}\widehat{Z}_{t}^{2}\mathrm{d}W_{t}^{2}, \tag{5.7}\]
since \(\widehat{Z}_{t}^{k}\) are \(\mathcal{F}_{t}^{W}\)-adapted, as asserted by the martingale representation theorem. From here, Ito's isometry yields
\[\overline{\widehat{Z}}_{t_{i}}^{1}=\widehat{\Psi}_{i}(X_{t_{i}}^{\pi})+\frac{1}{\delta_{i}}\mathbb{E}_{i}\left[\widehat{\mathfrak{U}}_{i+1}(X_{t_{i+1}}^{\pi})\Delta_{i}^{W^{1}}\right]=\frac{1}{\delta_{i}}\int_{t_{i}}^{t_{i+1}}\widehat{\Psi}_{i}(X_{t_{i}}^{\pi})\mathrm{d}t+\frac{1}{\delta_{i}}\mathbb{E}_{i}\left[\left(\widehat{\mathcal{V}}_{t_{i}}+\int_{t_{i}}^{t_{i+1}}\widehat{Z}_{t}^{1}\mathrm{d}W_{t}^{1}+\int_{t_{i}}^{t_{i+1}}\widehat{Z}_{t}^{2}\mathrm{d}W_{t}^{2}\right)\int_{t_{i}}^{t_{i+1}}\mathrm{d}W_{t}^{1}\right]=\frac{1}{\delta_{i}}\mathbb{E}_{i}\left[\int_{t_{i}}^{t_{i+1}}\left(\widehat{\Psi}_{i}(X_{t_{i}}^{\pi})+\widehat{Z}_{t}^{1}\right)\mathrm{d}t\right],\]
and similarly,
\[\overline{\widehat{Z}}_{t_{i}}^{2}=\frac{1}{\delta_{i}}\mathbb{E}_{i}\left[\int_{t_{i}}^{t_{i+1}}\widehat{Z}_{t}^{2}\mathrm{d}t\right].\]
We consider convergence in terms of the following error:
\[\mathscr{E}\left(\widehat{\mathfrak{U}},\widehat{\Psi}\right)\coloneqq\max_{i \in\{0,\dots,N-1\}}\mathbb{E}^{\Phi}\left[\left|Y_{t_{i}}-\widehat{\mathfrak{U }}_{i}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]+\mathbb{E}^{\Phi}\left[ \sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\sum_{k=1}^{2}\left|Z_{t}^{k}-\widehat{ \mathcal{Z}}_{i}^{k}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\mathrm{d}t\right],\]
with \(\widehat{\mathcal{Z}}_{i}^{1},\widehat{\mathcal{Z}}_{i}^{2}\) introduced before Lemma 5.7. We now state the main convergence result:
**Theorem 5.3**.: _Under Assumptions 4.1-A.2-5.1, there exists \(C>0\) such that_
\[\mathscr{E}\left(\widehat{\mathfrak{U}},\widehat{\Psi}\right)\leq C\left\{ \omega(|\pi|)+\mathbb{E}\left[\left|g(X_{T})-g(X_{T}^{\pi})\right|^{2}\right]+ \sum_{k=1}^{2}\varepsilon^{Z^{k}}(\pi)+\frac{C^{*}}{K}N+M|\pi|^{2}\right\}\]
_over a compact \(\mathcal{Q}\subset\mathbb{R}\), with \(C^{*},M>0\) given in Lemma 5.8._
The following corollary is immediate from (C.5), established in Part II of the proof of Theorem 5.3:
**Corollary 5.4**.: _Under Assumptions 4.1-A.2-5.1, there exists \(C>0\) such that_
\[\max_{i\in\{0,\dots,N-1\}} \mathbb{E}^{\Phi}\left[\left|Y_{t_{i}}-\widehat{\mathfrak{U}}_{i} \left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]\leq\] \[C\left\{\omega(|\pi|)+\mathbb{E}\left[\left|g(X_{T})-g(X_{T}^{ \pi})\right|^{2}\right]+\sum_{k=1}^{2}\varepsilon^{Z^{k}}(\pi)+\frac{C^{*}}{K} N+M|\pi|^{2}\right\}\]
_over a compact \(\mathcal{Q}\subset\mathbb{R}\), with \(C^{*},M>0\) given in Lemma 5.8._
**Remark 5.5**.: The second error term is the strong \(L^{2}\)-Monte-Carlo error and is \(\mathcal{O}(N^{-H})\) for processes driven by an fBm with Hurst parameter \(H\in(0,1)\). We refer the reader to [13, 27] for an exposition on strong versus weak error rates in rough volatility models.
To prove Theorem 5.3, the following bounds on the derivatives are key.
**Lemma 5.6**.: _Let \(\Psi_{K}(\cdot;\Theta)\in\aleph_{K}^{\varsigma}\) and \((X_{t_{i}}^{\pi},V_{t_{i}}^{\pi})_{i}\) denote the discretised version of (4.5) over the partition \(\pi\). Then there exist \(L_{1},L_{2}>0\) such that, for all \(i\in\{0,\ldots,N-1\}\),_
\[\left\|\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}\Psi_{K}(X_{t_{i+1}}^{\pi}; \Theta)\right]\right\|\leq L_{1}\qquad\text{and}\qquad\left\|\mathbb{E}_{i}^{ \Phi}\left[\mathrm{D}_{x}^{2}\Psi_{K}(X_{t_{i+1}}^{\pi};\Theta)\right]\right\| \leq L_{2}.\]
Proof.: We start with the first derivative. For all \(x,y\in\mathbb{R}^{d}\),
\[\|\Psi_{K}(x;\Theta)-\Psi_{K}(y;\Theta)\|=\|\Theta\left(\boldsymbol{\varsigma}(\mathrm{A}x+\mathrm{b})-\boldsymbol{\varsigma}(\mathrm{A}y+\mathrm{b})\right)\|\leq\|\Theta\|_{F}\|\boldsymbol{\varsigma}(\mathrm{A}x+\mathrm{b})-\boldsymbol{\varsigma}(\mathrm{A}y+\mathrm{b})\|\leq\|\Theta\|_{F}\|\mathrm{A}x-\mathrm{A}y\|\leq\|\Theta\|_{F}\|\mathrm{A}\|_{F}\|x-y\|\leq L_{1}\|x-y\|,\]
since \(\varsigma\) is \(1\)-Lipschitz. The estimator \(\Theta\) has an explicit form (4.7) and its norm is finite, therefore \(\Psi_{K}(\cdot;\Theta)\) is globally Lipschitz and its first derivative is bounded by \(L_{1}>0\). Next, without loss of generality, we can set \(\mathrm{A}=\mathrm{I}\) and \(\mathrm{b}=0\), since their support is bounded. As in Section 2.1, for the component \(j\in\{1,\ldots,m\}\),
\[\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}^{2}\Psi_{K}(X_{t_{i+1}} ^{\pi};\Theta)\right]_{j} =\int\Theta\operatorname{diag}\left(e_{j}\right)\operatorname{ diag}\left(\mathbf{\delta}_{0}\left(x-\frac{1}{2}V_{t_{i}}^{\pi}\delta_{i}+\sqrt{V_{t_ {i}}}w\right)\right)\mathbf{p}_{\mathcal{N}}(w)\mathrm{d}w\] \[=\Theta\operatorname{diag}\left(e_{j}\right)\operatorname{diag} \left(\mathbf{p}_{\mathcal{N}}\left(0;x-\frac{1}{2}V_{t_{i}}^{\pi}\delta_{i},V_{t_ {i}}^{\pi}\delta_{i}\right)\right),\]
since \(\Delta_{i}^{B}\sim\mathcal{N}(0,\delta_{i})\) and \(\mathbf{p}_{\mathcal{N}}\) is the Gaussian density applied component-wise. Since the weights are sampled on a compact set and \(\|\Theta\|\) is finite, there exists \(C>0\) such that
\[\left\|\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}^{2}\Psi_{K}(X_{t_{i+1}}^{\pi} ;\Theta)\right]\right\|\leq C\|\Theta\|_{F}=L_{2}.\]
From here the error bound for approximating \(\widehat{\mathcal{V}}_{t_{i}}\), \(\overline{\widehat{Z}}_{t_{i}}^{1}\) and \(\overline{\widehat{Z}}_{t_{i}}^{2}\) by their RWNN counterparts \(\widehat{\mathfrak{U}}_{i}\), \(\widehat{\mathcal{Z}}_{i}^{1}\) and \(\widehat{\mathcal{Z}}_{i}^{2}\) (defined below) can be obtained. For \(i\in\{0,\ldots,N-1\}\) and \((\mathfrak{U}_{i},\Psi_{i})\in\aleph_{K}^{\varsigma}\), introduce
\[\mathcal{Z}_{i}^{1}(x) \coloneqq\Psi_{i}(x)+\rho_{1}\sqrt{V_{t_{i}}}\,\mathrm{D}_{x} \mathfrak{U}_{i}(x),\ \ \mathcal{Z}_{i}^{2}(x) \coloneqq\rho_{2}\sqrt{V_{t_{i}}}\,\mathrm{D}_{x} \mathfrak{U}_{i}(x),\] \[\widehat{\mathcal{Z}}_{i}^{1}(x) \coloneqq\widehat{\Psi}_{i}(x)+\rho_{1}\sqrt{V_{t_{i}}}\,\mathrm{ D}_{x}\widehat{\mathfrak{U}}_{i}(x),\ \ \widehat{\mathcal{Z}}_{i}^{2}(x) \coloneqq\rho_{2}\sqrt{V_{t_{i}}}\,\mathrm{D}_{x} \widehat{\mathfrak{U}}_{i}(x).\]
**Lemma 5.7**.: _There exists \(M>0\) such that, for any \(i\in\{0,\ldots,N-1\}\), \(k=1,2\),_
\[\mathbb{E}^{\Phi}\left[\left|\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi})-\overline{ \widehat{Z}}_{t_{i}}^{k}\right|^{2}\right]\leq\rho_{k}^{2}|\pi|^{2}M.\]
Proof.: From (5.5) and (5.6), we have, for \(i\in\{0,\ldots,N-1\}\) and \(k\in\{1,2\}\),
\[\widehat{v}_{i}(x)=\mathbb{E}_{i}^{\Phi}\left[\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{x,\pi}\right)\right]+f\left(t_{i},\mathrm{e}^{x},\widehat{v}_{i}(x),\overline{\widehat{z}}_{i}^{\,1}(x),\overline{\widehat{z}}_{i}^{\,2}(x)\right)\delta_{i},\qquad\overline{\widehat{z}}_{i}^{\,k}(x)=\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\frac{1}{\delta_{i}}\mathbb{E}_{i}^{\Phi}\left[\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{x,\pi}\right)\Delta_{i}^{W^{k}}\right],\]
where \(X_{t_{i+1}}^{x,\pi}=x+\left(r-\frac{1}{2}V_{t_{i}}^{\pi}\right)\delta_{i}+\sqrt{V_{t_{i}}^{\pi}}\,\Delta_{i}^{B}\) is the Euler discretisation of \(\{X_{t}\}_{t\in[0,T]}\) over \(\pi\) started at \(x\), and \(\{V_{t_{i}}^{\pi}\}_{i=0}^{N}\) is the appropriate discretisation of the volatility process over the same partition. For \(\{\mathcal{R}^{k}\}\stackrel{\text{iid}}{\sim}\mathcal{N}(0,1)\), the two auxiliary processes can be written as
\[\overline{\widehat{z}}_{i}^{\,k}(x)=\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\frac{1}{\delta_{i}}\mathbb{E}_{i}^{\Phi}\left[\widehat{\mathfrak{U}}_{i+1}\left(x-\frac{1}{2}V_{t_{i}}^{\pi}\delta_{i}+\sqrt{V_{t_{i}}^{\pi}}\sqrt{\delta_{i}}\left(\rho_{1}\mathcal{R}^{1}+\rho_{2}\mathcal{R}^{2}\right)\right)\sqrt{\delta_{i}}\mathcal{R}^{k}\right].\]
Notice that, while any sensible forward scheme for \(\{V_{t}\}_{t\in[0,T]}\) does depend on a series of Brownian increments, \(V_{t_{i}}^{\pi}\) only depends on \(\left(\Delta_{0}^{W},\ldots,\Delta_{i-1}^{W}\right)\), which are known at time \(t_{i}\). Thus, since the usual derivative operations are available for approximately differentiable functions (Remark 2.7), multivariate integration by parts for Gaussian measures (a formulation of Isserlis' Theorem [44]) yields
\[\overline{\widehat{z}}_{i}^{\,k}(x)=\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\rho_{k}\sqrt{V_{t_{i}}^{\pi}}\,\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{x,\pi}\right)\right],\]
with corresponding derivatives
\[\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,k}(x)=\mathrm{D}_{x}\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\rho_{k}\sqrt{V_{t_{i}}^{\pi}}\,\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}^{2}\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{x,\pi}\right)\right].\]
An application of the implicit function theorem then implies
\[\mathrm{D}_{x}\widehat{v}_{i}(x)=\mathbb{E}_{i}^{\Phi}\left[\mathrm{D}_{x}\widehat{\mathfrak{U}}_{i+1}\left(X_{t_{i+1}}^{x,\pi}\right)\right]+\delta_{i}\left\{\mathrm{D}_{x}\widehat{f}_{i}(x)+\mathrm{D}_{y}\widehat{f}_{i}(x)\mathrm{D}_{x}\widehat{v}_{i}(x)+\sum_{k=1}^{2}\mathrm{D}_{z^{k}}\widehat{f}_{i}(x)\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,k}(x)\right\},\]
where \(\widehat{f}_{i}(x)\coloneqq f\left(t_{i},\mathrm{e}^{x},\widehat{v}_{i}(x),\overline{\widehat{z}}_{i}^{\,1}(x),\overline{\widehat{z}}_{i}^{\,2}(x)\right)\) and
\[\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\left(1-\delta_{i}\mathrm{D}_{y}\widehat{f}_{i}(x)\right)\rho_{k}\sqrt{V_{t_{i}}^{\pi}}\mathrm{D}_{x}\widehat{v}_{i}(x)=\overline{\widehat{z}}_{i}^{\,k}(x)+\rho_{k}\sqrt{V_{t_{i}}^{\pi}}\delta_{i}\left(\mathrm{D}_{x}\widehat{f}_{i}(x)+\mathrm{D}_{z^{1}}\widehat{f}_{i}(x)\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,1}(x)+\mathrm{D}_{z^{2}}\widehat{f}_{i}(x)\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,2}(x)\right).\]
Thus, for small enough \(|\pi|\), since \(f\) is Lipschitz (Assumption 5.1) and by Lemma 5.6,
\[\widehat{\Psi}_{i}(x)\mathds{1}_{\{k=1\}}+\rho_{k}\sqrt{V_{t_{i}}^{\pi}}\mathrm{D}_{x}\widehat{v}_{i}(x)\leq\overline{\widehat{z}}_{i}^{\,k}(x)+\rho_{k}\delta_{i}\sqrt{V_{t_{i}}^{\pi}}\left(1+\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,1}(x)+\mathrm{D}_{x}\overline{\widehat{z}}_{i}^{\,2}(x)\right)\leq\overline{\widehat{z}}_{i}^{\,k}(x)+\rho_{k}\delta_{i}\sqrt{V_{t_{i}}^{\pi}}\left(1+L_{1}+\sqrt{2}L_{2}\sqrt{V_{t_{i}}^{\pi}}\right).\]
Therefore
\[\mathbb{E}^{\Phi}\left[\left|\Psi_{i}(X_{t_{i}}^{\pi})+\rho_{1}\sqrt{V_{t_{i}}^{\pi}}\mathrm{D}_{x}\mathfrak{U}_{i}(X_{t_{i}}^{\pi})-\overline{\widehat{Z}}_{t_{i}}^{1}\right|^{2}\right]\leq\mathbb{E}^{\Phi}\left[\left|\overline{\widehat{z}}_{i}^{\,1}(X_{t_{i}}^{\pi})-\overline{\widehat{Z}}_{t_{i}}^{1}+\rho_{1}\delta_{i}\sqrt{V_{t_{i}}^{\pi}}\left[1+L_{1}+\sqrt{2}L_{2}\sqrt{V_{t_{i}}^{\pi}}\right]\right|^{2}\right]\leq\rho_{1}^{2}|\pi|^{2}\left\{\mathbb{E}\left[\left|V_{t_{i}}^{\pi}\right|\right]\left(1+L_{1}+\sqrt{2}L_{2}\,\mathbb{E}\left[\left|V_{t_{i}}^{\pi}\right|\right]\right)\right\}\leq\rho_{1}^{2}|\pi|^{2}M,\]
using Corollary 2.10 in the second inequality and the boundedness of \(\mathbb{E}[|V^{\pi}|]\) from Assumption 5.2 in the last line. The proof of the other bound is analogous.
**Lemma 5.8**.: _Under Assumptions 4.1-A.2-5.1, for sufficiently small \(|\pi|\) we have_
\[\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\widehat{\mathfrak{U}}_{i}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]+\delta_{i}\mathbb{E}^{\Phi}\left[\sum_{k=1}^{2}\left|\overline{\widehat{Z}}_{t_{i}}^{k}-\widehat{\mathcal{Z}}_{i}^{k}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]\leq C\left\{\frac{C^{*}}{K}+M|\pi|^{3}\right\}\]
_on a compact \(\mathcal{Q}\), for all \(i\in\{0,\ldots,N-1\}\) and \(K\) hidden units, for some \(C>0\), where \(C^{*}\) is as in Proposition B.1 and \(M\) in Lemma 5.7._
Proof of Lemma 5.8.: Fix \(i\in\{0,\ldots,N-1\}\). Relying on the martingale representation in (5.7) and Lemma B.2, we can define the following loss function for the pair \((\mathfrak{U}_{i}(\cdot;\Theta),\Psi_{i}(\cdot;\Xi))\in\aleph_{K}^{\varsigma}\) and their corresponding parameters \(\Theta\) and \(\Xi\):
\[\widehat{L}_{i}(\Theta,\Xi)\coloneqq\widetilde{L}_{i}(\Theta,\Xi)+\mathbb{E}^{\Phi}\left[\int_{t_{i}}^{t_{i+1}}\sum_{k=1}^{2}\left|\widehat{Z}_{t}^{k}-\overline{\widehat{Z}}_{t_{i}}^{k}\right|^{2}\mathrm{d}t\right], \tag{5.8}\]
with
\[\widetilde{L}_{i}(\Theta,\Xi)\coloneqq\mathbb{E}^{\Phi}\bigg{[}\bigg{|}\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)+\delta_{i}\Big{\{}f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta),\mathcal{Z}_{i}^{1}(X_{t_{i}}^{\pi};\Theta,\Xi),\mathcal{Z}_{i}^{2}(X_{t_{i}}^{\pi};\Theta,\Xi)\right)-f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\widehat{\mathcal{V}}_{t_{i}},\overline{\widehat{Z}}_{t_{i}}^{1},\overline{\widehat{Z}}_{t_{i}}^{2}\right)\Big{\}}\bigg{|}^{2}\bigg{]}+\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\overline{\widehat{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi)\right|^{2}\right].\]
Recall the following useful identity, valid for any \(a,b\in\mathbb{R}\) and \(\chi>0\):
\[(a+b)^{2}\leq(1+\chi)\,a^{2}+\left(1+\frac{1}{\chi}\right)b^{2}. \tag{5.9}\]
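For completeness, (5.9) is simply Young's inequality \(2ab\leq\chi a^{2}+b^{2}/\chi\) applied to the cross term:
\[(a+b)^{2}=a^{2}+2ab+b^{2}\leq a^{2}+\chi a^{2}+\frac{b^{2}}{\chi}+b^{2}=(1+\chi)\,a^{2}+\left(1+\frac{1}{\chi}\right)b^{2}.\]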
Applying (5.9) with \(\chi=\gamma\delta_{i}\), for some \(\gamma>0\) to be chosen later, yields
\[\widetilde{L}_{i}(\Theta,\Xi)\leq\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\overline{\widehat{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi)\right|^{2}\right]+(1+\gamma\delta_{i})\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]+\left(1+\frac{1}{\gamma\delta_{i}}\right)\delta_{i}^{2}\,\mathbb{E}^{\Phi}\left[\left|f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta),\mathcal{Z}_{i}^{1}(X_{t_{i}}^{\pi};\Theta,\Xi),\mathcal{Z}_{i}^{2}(X_{t_{i}}^{\pi};\Theta,\Xi)\right)-f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\widehat{\mathcal{V}}_{t_{i}},\overline{\widehat{Z}}_{t_{i}}^{1},\overline{\widehat{Z}}_{t_{i}}^{2}\right)\right|^{2}\right].\]
Now by the Lipschitz condition on \(f\) from Assumption 5.1,
\[\widetilde{L}_{i}(\Theta,\Xi)\leq(1+C\delta_{i})\mathbb{E}^{\Phi}\left[\left| \widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right| ^{2}\right]+C\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\overline{ \widetilde{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi) \right|^{2}\right].\]
For any \(a,b\in\mathbb{R}\) and \(\chi<0\), Inequality (5.9) holds with the reverse sign; hence, with \(\chi=-\gamma\delta_{i}\),
\[\widetilde{L}_{i}(\Theta,\Xi)\geq\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\overline{\widehat{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi)\right|^{2}\right]+(1-\gamma\delta_{i})\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]-\frac{\delta_{i}}{\gamma}\mathbb{E}^{\Phi}\left[\left|f\Big{(}t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta),\mathcal{Z}_{i}^{1}(X_{t_{i}}^{\pi};\Theta,\Xi),\mathcal{Z}_{i}^{2}(X_{t_{i}}^{\pi};\Theta,\Xi)\Big{)}-f\left(t_{i},\mathrm{e}^{X_{t_{i}}^{\pi}},\widehat{\mathcal{V}}_{t_{i}},\overline{\widehat{Z}}_{t_{i}}^{1},\overline{\widehat{Z}}_{t_{i}}^{2}\right)\right|^{2}\right].\]
Again since \(f\) is Lipschitz, the arithmetic-geometric inequality implies
\[\widetilde{L}_{i}(\Theta,\Xi) \geq(1-\gamma\delta_{i})\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]+\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\widetilde{\widetilde{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi)\right|^{2}\right]\] \[\qquad-\frac{3\delta_{i}L_{f}^{2}}{\gamma}\left(\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]+\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\widetilde{\widetilde{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi};\Theta,\Xi)\right|^{2}\right]\right).\]
Taking \(\gamma=6L_{f}^{2}\) gives
\[\widetilde{L}_{i}(\Theta,\Xi)\geq(1-C\delta_{i})\mathbb{E}^{\Phi}\left[ \left|\widehat{\mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta) \right|^{2}\right]+\frac{\delta_{i}}{2}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[ \left|\widetilde{\widetilde{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{ \pi};\Theta,\Xi)\right|^{2}\right].\]
For a given \(i\in\{0,\ldots,N-1\}\), take \((\Theta^{*},\Xi^{*})\in\arg\min_{\Theta,\Xi}\widehat{L}_{i}(\Theta,\Xi)\) so that \(\widehat{\mathfrak{U}}_{i}=\mathfrak{U}_{i}(\cdot;\Theta^{*})\) and \(\widehat{\mathcal{Z}}_{i}^{k}(\cdot)\coloneqq\mathcal{Z}_{i}^{k}(\cdot;\Theta^{*},\Xi^{*})\). From (5.8), \(\widehat{L}_{i}\) and \(\widetilde{L}_{i}\) have the same minimisers, thus combining both bounds gives for all \((\Theta,\Xi)\in\mathbb{R}^{m\times K}\times\mathbb{R}^{m\times K}\),
\[(1-C\delta_{i})\,\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{ V}}_{t_{i}}-\widehat{\mathfrak{U}}_{i}\left(X_{t_{i}}^{\pi}\right)\right|^{2} \right]+\frac{\delta_{i}}{2}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left| \widetilde{\widetilde{Z}}_{t_{i}}^{k}-\widehat{\mathcal{Z}}_{i}^{k}\right|^{2 }\right]\leq\widetilde{L}_{i}(\Theta^{*},\Xi^{*})\leq\widetilde{L}_{i}(\Theta,\Xi)\] \[\leq(1+C\delta_{i})\,\mathbb{E}^{\Phi}\left[\left|\widehat{ \mathcal{V}}_{t_{i}}-\mathfrak{U}_{i}\left(X_{t_{i}}^{\pi};\Theta\right) \right|^{2}\right]+C\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left| \widetilde{\widetilde{Z}}_{t_{i}}^{k}-\mathcal{Z}_{i}^{k}(X_{t_{i}}^{\pi}; \Theta,\Xi)\right|^{2}\right].\]
Letting \(|\pi|\) be sufficiently small then gives, together with Lemma 5.7,
\[\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}-\widehat{\mathfrak{U}}_{i}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]+\delta_{i}\sum_{k=1}^{2}\mathbb{E}^{\Phi}\left[\left|\widetilde{\widetilde{Z}}_{t_{i}}^{k}-\widehat{\mathcal{Z}}_{i}^{k}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]\] \[\leq C\left\{\inf_{\Theta}\mathbb{E}^{\Phi}\left[\left|\widehat{v}_{i}(X_{t_{i}}^{\pi})-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]+|\pi|^{3}\left(\rho_{1}^{2}+\rho_{2}^{2}\right)M\right\},\]
therefore
\[\mathbb{E}^{\Phi}\left[\left|\widehat{\mathcal{V}}_{t_{i}}- \widehat{\mathfrak{U}}_{i}\left(X_{t_{i}}^{\pi}\right)\right|^{2}\right]+ \delta_{i}\mathbb{E}^{\Phi}\left[\sum_{k=1}^{2}\left|\widetilde{\widetilde{Z}}_ {t_{i}}^{k}-\widehat{\mathcal{Z}}_{i}^{k}\left(X_{t_{i}}^{\pi}\right)\right|^{ 2}\right]\] \[\leq C\left\{\inf_{\Theta}\mathbb{E}^{\Phi}\left[\left|\widehat{ v}_{i}(X_{t_{i}}^{\pi})-\mathfrak{U}_{i}(X_{t_{i}}^{\pi};\Theta)\right|^{2}\right]+M|\pi|^{3} \right\}\leq C\left\{\frac{C^{*}}{K}+M|\pi|^{3}\right\},\]
by Proposition B.1 over any compact \(\mathcal{Q}=\{x\in\mathbb{R}:|x|\leq Q\}\), \(Q>0\).
The rest of the proof is similar to the proofs of [5, Theorem A.2] and [42, Theorem 4.1], but we include it in Appendix C for the sake of completeness.
## 6. Numerical results
We now showcase the performance of the RWNN scheme on a representative model from each class, Markovian and non-Markovian. We first test our scheme in the multi-dimensional Black-Scholes (BS) setting [11] and then move to the non-Markovian setup with the rough Bergomi (rBergomi) model [4]. We develop numerical approximations to European option prices given in (3.3) and (4.2), choosing
\[f(t,x,y,z^{1},z^{2})=-ry\qquad\text{and}\qquad g_{\text{call}}\left(x\right)= \left(\text{e}^{x}-\mathscr{K}\right)^{+},\]
and discretising over the partition \(\pi=\{0=t_{0},t_{1},\ldots,t_{N}=T\}\) for some \(N\in\mathbb{N}\). The precise discretisation schemes of the individual processes are given in their corresponding sections below. We remark, however, that the approximated option price for a given Monte-Carlo sample can become (slightly) negative by construction, so we add an absorption feature for both models:
\[Y_{t_{i}}^{\pi}\coloneqq\max\left\{0,\widetilde{Y}_{t_{i}}^{\pi}\right\}, \qquad\text{for }i\in\{0,\ldots,N-1\}, \tag{6.1}\]
where \(\left\{\widetilde{Y}_{t_{i}}^{\pi}\right\}_{i=0}^{N}\) denotes the approximation obtained through the RWNN scheme.
**Remark 6.1**.: This is a well-studied problem, especially prevalent in the simulation of square-root diffusions. We acknowledge that the absorption scheme possibly creates additional bias (see [50] for the case of the Heston model); however, a theoretical study in the case of the PDE-RWNN scheme is beyond the scope of this paper.
The reservoir used as a random basis of RWNNs here is the classical linear reservoir from Definition 2.2. For numerical purposes, we introduce a so-called _connectivity_ parameter, a measure of how interconnected the neurons in a network are: the higher the connectivity, the more inter-dependence between the neurons (see [18] for effects of connectivity in different reservoir topologies). In practice, however, too high a connectivity can lead to overfitting and poor generalisation. Recall that our reservoir is given by
\[\Phi_{K}:\mathbb{R}^{d}\to\mathbb{R}^{K},\qquad x\mapsto\Phi_{K}(x)\coloneqq \boldsymbol{\varrho}(\text{A}x+\text{b}),\]
where only \(\text{A}\in\mathbb{R}^{K\times d}\) is affected by the connectivity parameter \(c\in(0,1]\). A value \(c=1\) means that A is dense and fully determined by sampled weights. We find that the choice \(c\approx 0.5\) results in superior performance.
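For concreteness, the following minimal Python sketch shows one way such a reservoir with a connectivity parameter could be instantiated; the Gaussian sampling of \(\text{A}\) and \(\text{b}\), the \(\tanh\) activation and all names are our own illustrative assumptions, not the exact construction of Definition 2.2.

```python
import numpy as np

def build_reservoir(K, d, c=0.5, seed=0):
    """Random feature map Phi_K(x) = rho(A x + b) with connectivity c.

    A sketch: Gaussian weights and tanh activation are illustrative
    assumptions; each entry of A is kept with probability c.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((K, d))
    A[rng.random((K, d)) >= c] = 0.0   # c = 1 leaves A fully dense
    b = rng.standard_normal(K)
    return lambda x: np.tanh(A @ x + b)

phi = build_reservoir(K=100, d=1, c=0.5)
features = phi(np.array([0.1]))        # Phi_K(x) in R^K for one state x
```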
All the experiments below were run on a standard laptop with an AMD Ryzen 9 5900HX processor without any use of GPU, which would most certainly speed up the algorithms further. The code for both models is available on GitHub at ZuricZ/RWNN_PDE_solver.
### Example: Black-Scholes
The Black-Scholes model [11] is ubiquitous in mathematical finance, allowing for closed-form pricing and hedging of many financial contracts. Despite its well-known limitations, it remains a reference and is the first model to check before exploring more sophisticated ones. Since it offers an analytical pricing formula as a benchmark for numerical results, it will serve as a proof of concept for our numerical scheme. Under the pricing measure \(\mathbb{Q}\), the underlying assets \(\boldsymbol{S}=(S^{1},\ldots,S^{d})\) satisfy
\[\text{d}S_{t}^{j}=rS_{t}^{j}\text{d}t+\sigma_{j}S_{t}^{j}\text{d}W_{t}^{j}, \quad\text{for }t\in[0,T],j\in\{1,\ldots,d\},\]
where \(\{W_{t}^{j}\}_{t\in[0,T]}\) are standard Brownian motions such that \(\mathrm{d}\langle W^{i},W^{j}\rangle_{t}=\rho_{i,j}\mathrm{d}t\), with \(\rho_{i,j}\in[-1,1]\), \(r\geq 0\) is the risk-free rate and \(\sigma_{j}>0\) is the volatility coefficient. The
corresponding \(d\)-dimensional option pricing PDE is then given by
\[\frac{\partial u(t,\mathbf{S})}{\partial t}+\sum_{j=1}^{d}rS^{j}\frac{\partial u(t,\mathbf{S})}{\partial S^{j}}+\sum_{j=1}^{d}\frac{(\sigma_{j}S^{j})^{2}}{2}\frac{\partial^{2}u(t,\mathbf{S})}{(\partial S^{j})^{2}}+\sum_{j=1}^{d-1}\sum_{k=j+1}^{d}\rho_{j,k}\sigma_{j}\sigma_{k}S^{j}S^{k}\frac{\partial^{2}u(t,\mathbf{S})}{\partial S^{j}\partial S^{k}}=ru(t,\mathbf{S}),\]
for \(t\in[0,T)\) with terminal condition \(u(T,\mathbf{S}_{T})=g(S_{T}^{1},\ldots,S_{T}^{d})\).
To use Algorithm 1, the process \(S\) has to be discretised, for example using the standard log-Euler-Maruyama scheme, for each \(j=1,\ldots,d\) and \(i\in\{0,1,\ldots,N-1\}\):
\[\left\{\begin{array}{ll}X_{t_{i+1}}^{\pi,j}&=X_{t_{i}}^{\pi,j}+\left(r- \frac{\sigma_{j}^{2}}{2}\right)\delta_{i}+\sigma_{j}\Delta_{i}^{W^{j}},\\ S_{t_{i+1}}^{\pi,j}&=\exp\left\{X_{t_{i+1}}^{\pi,j}\right\},\end{array}\right.\]
with initial value \(X_{0}^{\pi,j}=\log\left(S_{0}^{\pi,j}\right)\). If not stated otherwise, we let \(\mathscr{K}=S_{0}=1\), \(r=0.01\), \(T=1\), and run the numerical scheme with \(N=21\) discretisation steps and \(n_{\mathrm{MC}}=50,000\) Monte-Carlo samples. The reservoir has \(K\in\{10,100,1000\}\) hidden nodes; in Sections 6.1.2 and 6.1.3 the connectivity parameter is set to \(c=0.5\).
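As an illustration, the log-Euler-Maruyama scheme above can be simulated with a few lines of Python (a sketch with our own variable names, here for independent assets):

```python
import numpy as np

def simulate_bs_paths(S0, r, sigma, T, N, n_mc, seed=0):
    """Log-Euler-Maruyama paths for d independent Black-Scholes assets.

    sigma: array of shape (d,); returns S of shape (n_mc, N + 1, d).
    """
    rng = np.random.default_rng(seed)
    d = len(sigma)
    dt = T / N
    X = np.empty((n_mc, N + 1, d))
    X[:, 0, :] = np.log(S0)
    for i in range(N):
        dW = rng.standard_normal((n_mc, d)) * np.sqrt(dt)   # Delta_i^{W^j}
        X[:, i + 1, :] = X[:, i, :] + (r - 0.5 * sigma**2) * dt + sigma * dW
    return np.exp(X)                                        # S = exp(X)

S = simulate_bs_paths(S0=1.0, r=0.01, sigma=np.array([0.05, 0.1]),
                      T=1.0, N=21, n_mc=50_000)
```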
#### 6.1.1. Convergence rate
We empirically analyse the error rate in terms of the number of hidden nodes \(K\) obtained in Corollary 5.4. To isolate the dependence on the number of nodes, we consider a single ATM vanilla Call, fix \(c=1\), \(n_{\mathrm{MC}}=50,000\), \(\sigma=0.1\) and vary \(K\in\{10,100,1000\}\). Due to our vectorised implementation of the algorithm, the reservoir basis tensor cannot fit into the random-access memory of a standard laptop for \(K\geq 10000\). The results, displayed in Figure 1, are compared to the theoretical price computed using the Black-Scholes pricing formula. The absorption scheme (6.1) is applied.
#### 6.1.2. ATM Call option
As a proof of concept we first test Algorithm 1 with Call options written on \(d\in\mathbb{N}\) independent assets, i.e. with \(\rho_{j,k}=0\) for \(j\neq k\) and \(V_{T}=\left(g_{\mathrm{call}}(S_{T}^{j})\right)_{j\in\{1,\ldots,d\}}\). This is only done so that the results can be directly compared to the theoretical price computed using the Black-Scholes pricing formula and is, in effect, the same as pricing \(d\) options on \(d\) independent assets, each with their own volatility parameter \(\sigma\). All the listed results in this section are for \(K=100\) hidden nodes.
In Table 1, results and relative errors are shown for \(d=5\) and \(\mathbf{\sigma}\coloneqq(\sigma_{1},\ldots,\sigma_{d})\) uniformly spaced over \([0.05,0.4]\). Next, the effects of the absorption scheme (6.1) are investigated. Curiously, the absorption scheme performs noticeably worse than the basic scheme, where one does not adjust for negative paths. This leads us to believe that absorption adds a substantial bias, similar to the Heston case (see Remark 6.1). Therefore, such a scheme should only be used when positivity of the option price paths is strictly necessary (e.g., when hedging). Finally, in Table 2, total MSE and computational times are given for different dimensions; the computational times are also plotted in Figure 2. It is important to note that our results do not allow us to make definitive claims about the computational times of the PDE-RWNN scheme across different dimensions. This was not the goal of our experiments, and further detailed theoretical study and experiments would be necessary to draw more definitive conclusions regarding the efficiency of the scheme in various dimensions.
\begin{table}
\begin{tabular}{l|c c c c}
 & \multicolumn{4}{c}{Price} \\
\hline
\(\sigma\) & True & PDE (w/ abs) & PDE (w/o abs) & MC \\
\hline
0.05 & 0.02521640 & 0.02960256 & 0.02531131 & 0.02574731 \\
0.1 & 0.04485236 & 0.05523114 & 0.04467687 & 0.04547565 \\
0.15 & 0.06459483 & 0.07719949 & 0.06477605 & 0.06520783 \\
0.2 & 0.08433319 & 0.10307868 & 0.08443957 & 0.08484961 \\
0.25 & 0.10403539 & 0.12660871 & 0.10412393 & 0.10513928 \\
\hline
 & \multicolumn{4}{c}{Rel. Error} \\
\hline
\(\sigma\) & & PDE (w/ abs) & PDE (w/o abs) & MC \\
\hline
0.05 & & 1.74e-01 & -3.76e-03 & -2.11e-02 \\
0.1 & & 2.31e-01 & 3.91e-03 & -1.39e-02 \\
0.15 & & 1.95e-01 & -2.81e-03 & -9.49e-03 \\
0.2 & & 2.22e-01 & -1.26e-03 & -6.12e-03 \\
0.25 & & 2.17e-01 & -8.51e-04 & -1.06e-02 \\
\end{tabular}
\end{table}
Table 1. A single run for \(d=5\) independent underlyings, where European Calls are compared to the price obtained through PDE-RWNN (_with_ and _without_ absorption) and the Monte Carlo methods along each dimension. Below, the relative errors of both methods are given. The MC method was run using the same paths as in the PDE-RWNN.
Figure 1. Empirical convergence of MSE under Black-Scholes in terms of the number of hidden nodes. Error bars mark 0.1 and 0.9 quantiles of 20 separate runs of the algorithm. The slope coefficient of the dashed line is obtained through regression of the means of individual runs, while the solid line represents \(1/K\) convergence and is shown as a reference.
#### 6.1.3. Basket option
We consider an equally weighted basket Call option with a payoff
\[g_{\mathrm{basket}}(\mathbf{S}_{T})\coloneqq\left(\frac{1}{d}\sum_{j=1}^{d}S_{T}^{ j}-\mathscr{K}\right)^{+},\]
where \(\mathscr{K}>0\) denotes the strike price. For simplicity, we consider an ATM option with \(\mathscr{K}\coloneqq\frac{1}{d}\sum_{j=1}^{d}S_{0}^{j}\) and set all \(S_{0}^{j}=1\) for \(j\in\{1,\dots,5\}\). The volatilities \(\sigma_{j}\) are uniformly spaced between \([0.05,0.25]\) and the correlation matrix is randomly chosen as
\[\mathbf{\rho}\coloneqq\begin{bmatrix}1&0.84&-0.51&-0.70&0.15\\ 0.84&1&-0.66&-0.85&0.41\\ -0.51&-0.66&1&0.55&-0.82\\ -0.70&-0.85&0.55&1&-0.51\\ 0.15&0.41&-0.82&-0.51&1\end{bmatrix},\]
so that \(\Sigma\coloneqq\operatorname{diag}(\mathbf{\sigma})\mathbf{\rho}\operatorname{diag}(\mathbf{\sigma})\). Since the distribution of a sum of lognormal random variables is not known explicitly, no closed-form expression is available for the option's price. Hence, the reference price is computed using Monte-Carlo with \(100\) time steps and \(400,000\) samples. In Table 3, we compare our scheme with a classical MC estimator in terms of relative error for \(K=100\) hidden nodes.
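A sketch of such a Monte-Carlo reference price, using a Cholesky factor of \(\mathbf{\rho}\) to correlate the Brownian increments (function and variable names are our own):

```python
import numpy as np

def basket_call_mc(S0, r, sigma, rho, K_strike, T, N, n_mc, seed=0):
    """Monte-Carlo price of an equally weighted basket call under
    correlated Black-Scholes, using exact lognormal increments."""
    rng = np.random.default_rng(seed)
    d = len(sigma)
    dt = T / N
    L = np.linalg.cholesky(rho)                  # correlate the Brownians
    logS = np.full((n_mc, d), np.log(S0))
    for _ in range(N):
        Z = rng.standard_normal((n_mc, d)) @ L.T
        logS += (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
    payoff = np.maximum(np.exp(logS).mean(axis=1) - K_strike, 0.0)
    return np.exp(-r * T) * payoff.mean()        # discounted mean payoff
```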
### Rough Bergomi
The rough Bergomi model belongs to the recently developed class of rough stochastic volatility models, first proposed in [4, 28, 35], where the instantaneous
\begin{table}
\begin{tabular}{l|c c} \(d\) & Total MSE (with abs) & CPU Time (s) \\ \hline
5 & 3.4826e-08 & 10.5 \\
10 & 5.4169e-08 & 16.0 \\
25 & 4.9015e-08 & 34.5 \\
50 & 1.6533e-07 & 65.0 \\
100 & 2.5340e-07 & 135.0 \\ \end{tabular}
\end{table}
Table 2. Total MSE calculated across all \(d\) assets and CPU training times for varying dimension \(d\), where \(\mathbf{\sigma}\) uniformly spaced over \([0.05,0.4]\).
Figure 2. Computational time vs number of dimensions, as in Table 2.
variance is driven by a fractional Brownian motion (or more generally a continuous Gaussian process) with Hurst parameter \(H<\frac{1}{2}\). As highlighted in many papers, these models are able to capture many features of market data (equities, commodities, ...), and clearly seem to outperform most classical models, with fewer parameters. Precise examples with real data can be found in [4] for SPX options, in [12, 29, 45, 39] for joint SPX-VIX options and in [8, 28] for estimation on historical time series, the latter being the state-of-the-art statistical analysis under the \(\mathbb{P}\)-measure. We consider here the price dynamics under \(\mathbb{Q}\) with constant initial forward variance curve \(\xi_{0}(t)>0\) for all \(t\in[0,T]\):
\[\left\{\begin{array}{rl}\frac{\mathrm{d}S_{t}}{S_{t}}&=r\mathrm{d}t+\sqrt{V_{t}}\left(\rho_{1}\mathrm{d}W_{t}^{1}+\rho_{2}\mathrm{d}W_{t}^{2}\right),\\ V_{t}&=\xi_{0}(t)\mathcal{E}\left(\eta\sqrt{2H}\int_{0}^{t}(t-u)^{H-\frac{1}{2}}\mathrm{d}W_{u}^{1}\right),\end{array}\right.\]
where \(\eta>0\) and \(H\in(0,1)\) is the Hurst parameter. The corresponding BSPDE reads
\[-\mathrm{d}u(t,x)=\left[\frac{V_{t}}{2}\partial_{x}^{2}u(t,x)+\rho\sqrt{V_{t}} \partial_{x}\psi(t,x)-\frac{V_{t}}{2}\partial_{x}u(t,x)-ru(t,x)\right]\mathrm{ d}t-\psi(t,x)\mathrm{d}W_{t}^{1},\]
with terminal condition \(u(T,x)=g_{\mathrm{call}}\left(\mathrm{e}^{x+rT}\right)\). While the existence of the solution was only proven in the distributional sense [5], we nevertheless apply our RWNN scheme. To test Algorithm 2, both the price and the volatility processes are discretised according to the Hybrid scheme developed in [9, 52]. We set the rBergomi parameters as follows: \(H=0.3\), \(\eta=1.9\), \(\rho=-0.7\), \(r=0.01\), \(T=1\), \(S_{0}=1\) and choose the forward variance curve to be flat with \(\xi_{0}(t)=0.235\times 0.235\). Again, we are pricing an ATM vanilla Call option with \(\mathscr{K}=S_{0}=1\). The number of discretisation steps is again \(N=21\), the number of Monte-Carlo samples \(n_{\mathrm{MC}}=50,000\) and the reservoir has \(K\in\{10,100,1000\}\) nodes with connectivity \(c=0.5\) in Section 6.2.2.
#### 6.2.1. Convergence rate
As in Section 6.1.1, we conduct an empirical analysis of the convergence error for the ATM Call. To isolate the dependence on the number of nodes we fix \(c=1\), \(n_{\mathrm{MC}}=50,000\) and vary \(K\in\{10,100,1000\}\). The reference price is computed by Monte-Carlo with 100 time steps and \(800,000\) samples. The absorption scheme has been applied and the results are displayed in Figure 3. In this section, the same random seed was used as in Section 6.1.1, to ensure consistent results across different simulations.
#### 6.2.2. ATM Call option
We now evaluate the performance of our PDE-RWNN method for option pricing in the rough Bergomi model using the market parameters listed above and compare the results to those obtained with the MC method over the same sample paths. We also investigate the effect of the absorption scheme (Table 4) and find that,
\begin{table}
\begin{tabular}{l|c c c c}
 & Reference & PDE (w/ abs) & PDE (w/o abs) & MC \\
\hline
Price & 0.016240 & 0.018220 & 0.016131 & 0.016251 \\
\hline
Rel. error & - & -1.22e-01 & -6.71e-03 & -6.50e-04 \\
\hline
Time & 12.8s & 9.7s & 9.8s & 0.3s \\
\end{tabular}
\end{table}
Table 3. Comparison of prices, relative errors and CPU time of the Monte-Carlo estimator, PDE-RWNN scheme _with_ and _without_ absorption (using same sampled MC paths and \(K=100\)) and the reference price computed with 100 time steps and 400,000 samples.
interestingly, despite keeping the paths positive, the absorption scheme adds noticeable bias. Nevertheless, the relative error of the proposed scheme with absorption is comparable to the results using regular artificial neural networks found in the literature [5, Table 1]; yet our scheme learns much faster, with training times that are orders of magnitude shorter.
\begin{table}
\begin{tabular}{l|c|c|c|c}
 & Reference & PDE (w/ abs) & PDE (w/o abs) & MC \\
\hline
Price & 0.079932 & 0.0819236 & 0.079729 & 0.080310 \\
\hline
Rel. error & - & 2.49e-02 & 2.54e-03 & -4.73e-03 \\
\hline
Time & 10.1s & 7.4s & 7.5s & 0.4s \\
\end{tabular}
\end{table}
Table 4. Comparison of prices, relative errors and CPU time of the Monte-Carlo estimator, PDE-RWNN scheme _with_ absorption, PDE-RWNN scheme _without_ absorption both with \(K=100\) (and both using same sampled MC paths) and the reference price computed with 100 time steps and 800,000 samples.
Figure 3. Empirical convergence of MSE under rBergomi in terms of the number of hidden nodes. Error bars mark 0.1 and 0.9 quantiles of 20 separate runs of the algorithm. The slope coefficient of the dashed line is obtained through regression of the means of individual runs, while the solid line represents \(1/K\) convergence and is shown as a reference. |
2304.10211 | Spiking-Fer: Spiking Neural Network for Facial Expression Recognition
With Event Cameras | Facial Expression Recognition (FER) is an active research domain that has
shown great progress recently, notably thanks to the use of large deep learning
models. However, such approaches are particularly energy intensive, which makes
their deployment difficult for edge devices. To address this issue, Spiking
Neural Networks (SNNs) coupled with event cameras are a promising alternative,
capable of processing sparse and asynchronous events with lower energy
consumption. In this paper, we establish the first use of event cameras for
FER, named "Event-based FER", and propose the first related benchmarks by
converting popular video FER datasets to event streams. To deal with this new
task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it
against a similar Artificial Neural Network (ANN). Experiments show that the
proposed approach achieves comparable performance to the ANN architecture,
while consuming less energy by orders of magnitude (up to 65.39x). In addition,
an experimental study of various event-based data augmentation techniques is
performed to provide insights into the efficient transformations specific to
event-based FER. | Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba | 2023-04-20T10:59:56Z | http://arxiv.org/abs/2304.10211v1 | # Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
###### Abstract.
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets
to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
## 1. Introduction
Facial Expression Recognition (FER) has received particular interest in recent years given its diverse and practical applications in computer vision, e.g., security, health, and communication. So far, efficient learning-based models have been proposed for FER (Krizhevsky et al., 2014). However, these methods often overlook the energy consumption constraint (Krizhevsky et al., 2014). The combination of increasingly complex learning models and the cutting-edge hardware platforms they require has heavy consequences in terms of energy consumption, with repercussions for the environment.
Spiking Neural Networks (SNNs) have become popular due to practical interest in their addressable complexity and their greater energy efficiency compared to artificial neural networks (ANNs). The neurons in SNNs asynchronously transmit information through sparse and binary spikes, enabling event-driven computing. Unlike ANNs, which require dense matrix operations, energy consumption occurs only when a spike is generated on a neuromorphic chip. Hence, SNNs can significantly improve the energy efficiency of artificial intelligence systems.
Although SNNs are less energy-consuming, they do not yet match the performance of ANNs in computer vision. Inspired by the advances in ANNs, many recent methods have been proposed to improve the performance of SNNs, e.g., ANN-to-SNN conversion (Shi et al., 2017), innovative architectures (Shi et al., 2017), new feature encoding methods (Beng et al., 2017), and data augmentation (Shi et al., 2017). However, very few studies have focused on FER, mainly due to the lack of training data.
In this paper, we propose an innovative framework for FER using SNNs, as illustrated in Fig. 1. First, a conversion framework using the V2E converter (Vaswani et al., 2017) is proposed in order to preprocess the well-known FER video datasets and generate event-based streams which are a suitable format for SNN architecture. Then, the Spiking-FER model is trained using Surrogate Gradient Learning (Srivastava et al., 2014), which enables the applicability of the backpropagation algorithm for deep SNN architectures. Finally, several experiments are conducted in order to evaluate the proposed model and compare it to a conventional ANN in terms of recognition accuracy and energy consumption.
Our proposal brings three novelties:
* **Event-based FER Benchmark**: we provide a reproducible protocol to generate event-based FER benchmarks from the most popular video FER datasets which is, to the best of our knowledge, the first event-based benchmark proposed for FER.
* **New SNN Model Architecture**: we propose a new end-to-end deep convolutional SNN method, called "Spiking-FER" that encodes the event streams into spiking features that are then used by the output accumulator in order to predict the facial expression.
* **Event Data Augmentation for FER**: we analyze the performance of Spiking-FER and its corresponding ANN architecture depending on popular Event Data Augmentations (EDAs) to investigate their impacts on event-based FER.
The paper is structured as follows. In Section 2, we review relevant recent methods for FER and on the evolution of the SNN for event-based vision. Section 3 presents our SNN model architecture and the training process on event-based FER data. Section 4 introduces the experimental setup including datasets and evaluation protocols. The experimental results are provided in Section 5. Finally, we discuss the results and future work in Section 6.
## 2. Related Works
### Facial Expression Recognition
FER methods could be classified into two categories: static (frame-based) and dynamic (sequence-based) methods. Frame-based methods extract spatial features from still images. They rely on hand-crafted features (Shi et al., 2017) or learned features (Shi et al., 2017) using mainly CNNs (Shi et al., 2017), but recently transformer architecture has also come into play (Krizhevsky et al., 2014). Sequence-based methods are performed in different ways: either by aggregating frames, e.g., onset-apex-offset (Shi et al., 2017) or onset-apex (Shi et al., 2017), or by using consecutive frames in order to encode the temporal information. These methods use mainly deep architectures such as 3D CNN (Shi et al., 2017), Recurrent Neural Networks (RNN) (Shi et al., 2017), and Transformers (Shi et al., 2017).
Recently, motion has come into play and has proven to be effective in sequence-based FER (Shi et al., 2017). Moreover, it has been proposed to address different challenges for FER (occlusions (Shi et al., 2017), intensity (Shi et al., 2017)), taking advantage of the fact that inter-individual variation in motion is better suited for FER than appearance features (Beng et al., 2017). Performance improvement is achieved by increasing the complexity of learning approaches, especially by taking into account spatio-temporal encoding. However, this improvement usually comes at the expense of energy consumption.
### Event-based Vision paired with Spiking Neural Networks
Recently, learning algorithms adapted from backpropagation such as surrogate gradient learning (Srivastava et al., 2014) enable the training of deep SNN architecture by solving the non-differentiability issue of spiking neurons (Goodfellow et al., 2016). Such directly trained deep architectures are the first of several attempts capable of tackling event-based vision problems of similar complexity to those addressed by ANNs currently, such as object detection (Goodfellow et al., 2016), semantic segmentation (Goodfellow et al., 2016), and object tracking (Wang et al., 2017). Furthermore, these architectures start to adapt state-of-the-art techniques from ANNs (Vision Transformer (Vaswani et al., 2017), spatio-temporal attention (Srivastava et al., 2014),...) to operate with spiking neurons thanks to this new ability of gradient-based optimization. This recent direction of directly trained SNNs coupled with event cameras demonstrates impressive efficiency by being able to outperform ANNs (Goodfellow et al., 2016) and showing reduced energy consumption by orders of magnitude (Goodfellow et al., 2016).
## 3. Methodology
### Problem Formulation
During a time interval \(\Delta_{\mathcal{T}}\), an event camera with a \(H\times W\) resolution produces a set of \(N\) asynchronous events \(\mathcal{E}=\{e_{i}\}_{i=1}^{N}\). Each event \(e_{i}\) of the sequence can be formulated as a tuple of 4 values: \(e_{i}=\{x_{i},y_{i},t_{i},p_{i}\}\), where \((x_{i},y_{i})\) correspond to the pixel coordinates, \(t_{i}\) is the timestamp, and \(p_{i}\in\{1,-1\}\) is the sign of the polarity change.
As the asynchronous nature of events is not appropriate for many computer vision approaches (Garfani et al., 2017), a popular event representation method (Garfani et al., 2017) is to discretize a stream of events \(\mathcal{E}\) into a sequence of \(T\) binary event frames \(\mathbf{X}_{T}\in\mathbb{B}^{T\times 2\times H\times W}=\{X_{t}\}_{t=1}^{T}\). In this work, it is done by accumulating events during \(T\) subsequent time intervals \(\frac{\Delta_{\mathcal{T}}}{T}\) to create the sequence of binary frames and thus the final spike tensor \(\mathbf{X}_{T}\).
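A minimal sketch of this discretisation step (the function name and the event-list format are our own assumptions):

```python
import numpy as np

def events_to_tensor(events, T, H, W, duration):
    """Discretise events e_i = (x_i, y_i, t_i, p_i) into the binary
    spike tensor X_T of shape (T, 2, H, W) by accumulating events
    over T equal time intervals."""
    X = np.zeros((T, 2, H, W), dtype=np.uint8)
    for x, y, t, p in events:
        bin_idx = min(int(t / duration * T), T - 1)  # time interval index
        channel = 0 if p == 1 else 1                 # one channel per polarity
        X[bin_idx, channel, y, x] = 1                # binary: >= 1 event
    return X

X_T = events_to_tensor([(5, 7, 0.01, 1), (5, 7, 0.02, -1)],
                       T=6, H=200, W=200, duration=0.12)
```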
Event-based FER can be defined as follows: given an event sequence \(\mathcal{E}\) obtained from capturing a subject that performs a facial expression, the objective is to recognize this expression as the appropriate label \(c\) among \(\mathcal{C}\) different classes. To do so, a model \(f_{\alpha}(\cdot)\) with a set of learnable parameters \(\alpha\) is trained such that: \(c=f_{\alpha}(\mathbf{X}_{T})\). The top of Fig. 2 illustrates the formulation of event-based FER with the related notations.
### Spiking-FER
Spiking-FER is represented by the model \(f_{\alpha}(\cdot)\), where \(\alpha\) denotes its synaptic weights. The bottom of Fig. 2 illustrates an overview of the proposed Spiking-FER architecture.
#### 3.2.1. Spiking Neuron Model
The proposed convolutional SNN architecture uses the Integrate-and-Fire (IF) neuron (Srivastava et al., 2014) as the spiking neuron model. It accumulates input spikes weighted by the synaptic weights into a 'membrane potential'. When this membrane potential exceeds a certain threshold value, the neuron emits an output spike and is reset to zero. The discretized dynamics of a layer \(l\) of IF neurons from Spiking-FER at a certain time-step \(1\leq t\leq T\) is described as follows:
\[U_{t}^{l}=U_{t-1}^{l}+\mathcal{W}^{l}X_{t}^{l-1}-\theta X_{t}^{l} \tag{1}\] \[X_{t}^{l}=\Theta(U_{t}^{l}-\theta) \tag{2}\]
where \(U_{t}^{l}\) denotes the membrane potentials of the IF neurons, \(\mathcal{W}^{l}\) is the set of synaptic weights, and \(X_{t}^{l}\in\mathbb{B}\) denotes the output spike tensor. \(X_{t}^{l}\) consists of 1's where the related element of \(U_{t}^{l}\) exceeds the threshold value \(\theta\), and 0's otherwise. For simplicity, the threshold is set to 1 for all layers (i.e., \(\theta=1\)). This thresholding mechanism, formulated in Eq. 2, is the Heaviside step function \(\Theta(\cdot)\).
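For illustration, Eqs. (1)-(2) translate into a few lines of PyTorch; this sketch uses a soft reset by subtraction and our own names, not the exact implementation used in our code.

```python
import torch

def if_step(U_prev, weighted_input, theta=1.0):
    """One discrete time-step of a layer of IF neurons, Eqs. (1)-(2):
    integrate W^l X_t^{l-1}, emit a spike where the membrane potential
    reaches theta, then reset by subtracting theta."""
    U = U_prev + weighted_input
    spikes = (U >= theta).float()   # Heaviside step Theta(U - theta)
    U = U - theta * spikes          # soft reset
    return U, spikes

U0 = torch.zeros(4)
U1, s1 = if_step(U0, torch.tensor([0.3, 1.2, 0.9, 2.0]))  # s1 = [0, 1, 0, 1]
```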
#### 3.2.2. Direct Training via Surrogate Gradient
Spiking-FER is trained using Surrogate Gradient Learning (Goodfellow et al., 2016; Srivastava et al., 2014), a popular and effective training approach for deep SNN models. An SNN can be expressed as a Recurrent Neural Network where the membrane potentials are internal states. Consequently, the synaptic weights can be trained using Backpropagation Through Time (Srivastava et al., 2014). The main issue is related to the backward pass, where \(\Theta(\cdot)\) is not differentiable - i.e., its derivative is 0 almost everywhere, and +\(\infty\) at 0 - causing the gradient chain to break ("dead neuron problem" (Goodfellow et al., 2016)). Therefore, surrogate gradient learning solves this problem by employing the derivative of a continuous surrogate function \(\sigma(\cdot)\) on the backward pass as an approximation of the derivative of \(\Theta(\cdot)\). In Spiking-FER, we define \(\sigma(x)=\frac{1}{\pi}\arctan(\pi x)+\frac{1}{2}\).
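A hedged sketch of how such a surrogate spike function can be implemented with a custom autograd function in PyTorch (the class name is ours; frameworks such as SpikingJelly ship their own implementations):

```python
import math
import torch

class ATanSpike(torch.autograd.Function):
    """Heaviside step on the forward pass; arctan surrogate gradient on
    the backward pass, i.e. sigma'(x) = 1 / (1 + (pi * x)^2) for the
    sigma(x) defined above."""

    @staticmethod
    def forward(ctx, x):              # x = U_t^l - theta
        ctx.save_for_backward(x)
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output / (1.0 + (math.pi * x) ** 2)

spike_fn = ATanSpike.apply            # differentiable spike function
```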
#### 3.2.3. Model Architecture
Strongly related to (Goodfellow et al., 2016), Spiking-FER consists of two modules: **(1)** a deep convolutional SNN encoder that encodes the event streams into spiking features; and **(2)** an output accumulator module (Goodfellow et al., 2016) that predicts the emotion of the sample from the encoded spiking features.
The encoder is a SEW-ResNet-18 (Goodfellow et al., 2016) architecture that outputs spiking feature vectors \(F_{t}\in\mathbb{B}^{d}\), where \(d\) is the number of output channels (in SEW-ResNet-18, \(d=512\)). At each time-step, these extracted spiking features are fed into the output accumulator module responsible for making the final prediction.
As shown in the rightmost part of Fig. 2, the output accumulator module is composed of one fully connected layer of artificial neurons and one linear classifier. Firstly, it accumulates the spiking features from all time-steps to obtain a single feature vector \(\mathcal{F}\in\mathbb{R}^{d}\) such that:
\[\mathcal{F}=\sum_{t=1}^{T}\mathcal{W}\times F_{t} \tag{3}\]
where \(\mathcal{W}\in\mathbb{R}^{d\times d}\) is the set of trainable weights in the fully connected layer. Then, the features \(\mathcal{F}\) are fed into the linear classifier to obtain the final classification prediction.
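A minimal PyTorch sketch of this output accumulator (the number of classes is an illustrative assumption; it depends on the dataset):

```python
import torch
import torch.nn as nn

class OutputAccumulator(nn.Module):
    """Output accumulator of Eq. (3): one fully connected layer applied
    per time-step, summed over time, followed by a linear classifier."""

    def __init__(self, d=512, num_classes=6):   # num_classes: assumption
        super().__init__()
        self.fc = nn.Linear(d, d, bias=False)   # the W of Eq. (3)
        self.classifier = nn.Linear(d, num_classes)

    def forward(self, F_t):                     # F_t: (T, batch, d) spikes
        F = self.fc(F_t).sum(dim=0)             # accumulate over time-steps
        return self.classifier(F)               # class logits
```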
The whole network is trained end-to-end using the cross-entropy loss.
## 4. Experimental Setup
In this section, we present the experimental setup, including datasets, evaluation protocols and model configurations.
### Video-to-Events Conversion
To validate the applicability of event-based data and SNNs to FER applications, while remaining comparable to standard FER baselines,
we convert some of the most popular video FER datasets: ADFES (Srivastava et al., 2017), CASIA (Srivastava et al., 2017), CK+ (Kang et al., 2017), and MMI (Mori et al., 2017) to an event-based format. Each video of a given FER dataset is processed by two successive steps. The first step is a standardization of all frames (Bahdan et al., 2016): the face of the represented subject is cropped and rotated based on 68 facial landmarks and converted to grayscale. Then, the resulting frame is resized to a resolution of \((200\times 200)\). The second step corresponds to the conversion of the standardized video into events using v2e (Srivastava et al., 2017), a video-to-event converter for realistic events simulation, as illustrated in Fig. 3. The code and parameters to reproduce the benchmark are available1.
Footnote 1: The code will be released upon acceptance
### Evaluation Protocol
Models that are evaluated on an event-based FER dataset follow a 10-fold cross-validation configuration: the given dataset is randomly split into 10 folds of equal size. The employed metric for every iteration is the top-1 classification accuracy. Finally, we report the mean accuracy score of the 10 folds.
### Implementation Details
The experiments are implemented in PyTorch, with Tonic (Tonic, 2017) and SpikingJelly (SpikingJelly, 2017) as our SNN simulation libraries, and run on one NVIDIA A40 GPU. We train our models (Spiking-FER and ResNet-18) for 500 epochs, using an SGD optimizer with a learning rate of 0.01 and a cosine annealing scheduler (Toledo et al., 2017), and keep the best performance on the validation set. A low-latency regime is adopted for Spiking-FER, with \(T=6\).
### Comparison with ANN
Since the convolutional SNN encoder of Spiking-FER is a SEW-ResNet-18, we choose a ResNet-18 (He et al., 2018) model as the corresponding ANN. Similarly to the ANN model defined in (Bahdan et al., 2016), the spike tensor \(\mathbf{X}_{T}\) is fed into the 2D-CNN by concatenating all binary event frames together along the time axis.
## 5. Experiments
### Study on Event Data Augmentation
To investigate the impacts of popular event data augmentations (EDAs) (Kang et al., 2017), given in Table 1, on model performance, the experiments are conducted in two successive parts: **(1)** an analysis of common EDAs; **(2)** an analysis of specific EDAs, used either for regularization of training with scarce datasets or for FER-specific transformations.
#### 5.1.1. Common EDAs
Since EDAs can be applied in combination, the main objective of this part is to assess which EDA has the best impact when they are combined with each other. Therefore, we run all possible combinations of common EDAs, which gives a total of 32 experiments for a given dataset, as illustrated in Fig. 4.
Figure 2. Overview of the proposed framework. _Top_) Formulation of Event-based Facial Expression Recognition. _Bottom_) The Spiking-FER architecture where the convolutional SNN encoder is expressed as a recurrent neural network.
The baseline results show that the SNN model performs better than the ANN model without augmentation, i.e., using only the original data from the event stream. As often observed in neural network training, data augmentation tends to significantly improve performance. This is especially true for FER, where databases are scarce. On ANNs, we observe that all EDA combinations have a positive or null impact, unlike on SNNs, where some EDA combinations tend to decrease performance. Among the EDA methods, the combination {_Crop_, _H Flip_ and _Noise_} significantly improves the performance of both ANNs and SNNs, except for the MMI dataset, where the improvement is less significant. This can be explained by the greater complexity of the data, with larger head pose variations and a greater variety of facial movement patterns.
Then, we evaluate the accuracy scores of all folds for all experiments, which gives 320 scores. We perform a multivariate regression analysis on this population of 320 scores by considering the applied EDAs as categorical independent variables. For a given EDA, the regression analysis gives an approximation of the expected benefit in performance. Fig. 5 shows the results of the regression analysis for each dataset. According to the regression coefficients, \(Crop\) and \(HFlip\) generally have a positive impact, which suggests that they are well adapted for event-based FER. These methods cover well the small variations observed in the different databases that compose the benchmark, e.g., face translation or image resolution changes. In contrast, _Reverse_ reports either non-significant results or negative impacts in all cases. This can be explained by the fact that the activation of a facial expression follows a temporal sequence induced by the facial muscles. In this case, the reversal of the event flow is not consistent, especially since the sequences in this benchmark only go from the neutral state to the apex. _PolFlip_ highlights the differences between Spiking-FER and the ANN: while Spiking-FER consistently reports negative effects, the ANN model obtains a positive impact. This suggests that SNNs do not benefit from _PolFlip_ for event-based FER.
#### 5.1.2. Specific EDAs
We keep the best-performing combinations of common EDAs and evaluate the specific ones. For a given dataset, the best combination of common EDAs is defined as the one with the highest mean accuracy score obtained on the 10-fold cross-validation. Fig. 6 reports the results obtained with and without these specific EDAs, adapted to the event flows for FER. Considering the performances, we note that the combination of _EventDrop_, which regularizes the training of neural networks on limited datasets, and _Mirror_, which transforms the visual aspect of a subject's face, is well suited to augmenting facial expressions for both ANNs and SNNs. In addition to improving the performance of the models, these EDAs significantly reduce the performance gap between the ANN and SNN models, especially for the ADFES dataset. Both EDAs have been designed to adapt to inter-individual variation, e.g., face symmetry and expression activation time.
### Estimation of Energy Consumption
Similarly to (Beng et al., 2017; Wang et al., 2018), we compare the energy efficiency of Spiking-FER and a similar ANN when simulated on a 45nm CMOS chip (Wang et al., 2018).
The estimation methodology is described as follows: firstly, we quantify the spiking rate of each layer, as spiking neurons consume energy only when generating a spike. The spiking rate of a given layer \(l\) is calculated as follows:
\[Rs(l)=\frac{\text{\# spikes of $l$ over all time-steps}}{\text{\# neurons of $l$}} \tag{4}\]
Secondly, we compute the total floating-point operations (FLOPs) of a layer of spiking neurons (\(FLOPs_{\text{\small SNN}}\)) by using the FLOPs of the same layer in a non-spiking neural network (\(FLOPs_{\text{\small ANN}}\)) and the spike rate of the spiking neuron layer:
\begin{table}
\begin{tabular}{l l} \hline \hline
**EDA** & **Description** \\ \hline
\(Crop\) & Spatial crop of the whole sequence with a random scale \\
\(HFlip\) & Horizontal flip of the whole sequence \\
\(Noise\) (\(BA\)) & Noisy events due to corrupted pixels in event cameras (Garfani et al., 2016) \\
\(PolFlip\) & Flip of polarity (i.e., \(p_{i}=-p_{i}\) for all events) \\
\(Reverse\) & Reverses the order of events \\
**EventDrop**(Wang et al., 2018) & Randomly drops events spatially, temporally or globally \\
**Mirror** & Mirrors the left or right half of the sequence \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of EDAs. Common EDAs and Specific EDAs are respectively in italic and in bold.
Figure 3. Illustration of the proposed event-based FER benchmark. The video sequences are converted into events corresponding to the output of event cameras.
\[FLOPs_{SNN}(l) =FLOPs_{ANN}(l)\times Rs(l) \tag{5}\] \[FLOPs_{ANN}(l) =\begin{cases}k^{2}\times O^{2}\times C_{in}\times C_{out}&\text{if $l$ is Conv.}\\ C_{in}\times C_{out}&\text{if $l$ is Linear.}\end{cases} \tag{6}\]
In Equation 6, \(k\) represents the kernel size, \(O\) represents the size of output feature maps, \(C_{in}\) represents the number of input channels, and \(C_{out}\) represents the number of output channels.
Finally, the total energy consumption of a model can be estimated on CMOS technology (Kang et al., 2017) by using the total FLOPs across all layers. Table 2 presents the energy cost of relevant operations in a 45nm CMOS process. A MAC operation in ANNs requires one addition (32bit FP ADD) and one FP multiplication (32bit FP MULT) (Shen et al., 2016), whereas SNNs require only one FP addition per MAC operation due to binary spike processing. The total energy consumptions of ANNs and SNNs are represented by \(E_{ANN}\) and \(E_{SNN}\), respectively.
\[E_{ANN} =\sum_{l}FLOPs_{ANN}(l)\times E_{MAC} \tag{7}\] \[E_{SNN} =\sum_{l}FLOPs_{SNN}(l)\times E_{AC} \tag{8}\]
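Putting Eqs. (4)-(8) together, the energy estimate reduces to a small computation; the following Python sketch assumes the per-layer ANN FLOPs and the measured spike rates are available (this data layout is our assumption):

```python
def estimate_energy(layers, E_MAC=4.6e-12, E_AC=0.9e-12):
    """Energy estimate following Eqs. (4)-(8), in Joules.

    `layers` is a list of dicts holding the ANN FLOPs and the measured
    spike rate Rs(l) of each layer (Eq. 4)."""
    e_ann = sum(l["flops_ann"] * E_MAC for l in layers)
    e_snn = sum(l["flops_ann"] * l["spike_rate"] * E_AC for l in layers)
    return e_ann, e_snn

e_ann, e_snn = estimate_energy([
    {"flops_ann": 3**2 * 56**2 * 64 * 64, "spike_rate": 0.10},  # a Conv layer
    {"flops_ann": 512 * 512, "spike_rate": 0.15},               # a Linear layer
])
print(f"ANN/SNN energy ratio: {e_ann / e_snn:.1f}x")
```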
Table 3 reports the mean inference energy estimation for each dataset. Similarly to previous works (Chen et al., 2017), Spiking-FER shows better energy efficiency by orders of magnitude (from 47.42\(\times\) to 65.39\(\times\) more efficient), which demonstrates the applicability of SNNs for low-power FER applications on edge devices.
## 6. Conclusion
In this work, we introduced _event-based benchmarks for Facial Expression Recognition_ (FER) and proposed a new SNN architecture named _Spiking-FER_. We applied traditional augmentation techniques adapted to event streams, along with two specific techniques - _EventDrop_(Kang et al., 2017) and _Mirror_ - that led to significant improvements in our model's performance. Our proposed approach achieved similar performance to a traditional Artificial Neural Network (ANN) while consuming much less energy (up to 65.39\(\times\)). Our future work will extend this study to other applications such as gesture or action analysis.
\begin{table}
\begin{tabular}{|c|c|}
\hline
**Operation** & **Energy (pJ)** \\
\hline
32bit FP MULT (\(E_{MULT}\)) & 3.7 \\
32bit FP ADD (\(E_{ADD}\)) & 0.9 \\
32bit FP MAC (\(E_{MAC}\)) & 4.6 (\(=E_{MULT}+E_{ADD}\)) \\
32bit FP AC (\(E_{AC}\)) & 0.9 \\
\hline
\end{tabular}
\end{table}
Table 2. Energy table for a 45nm CMOS process (from (Kang et al., 2017)).
Figure 4. Acc. obtained according to combinations of common EDA; (A) H Flip; (B) Noise; (C) Reverse; (D) Pol Flip; (E) Crop.
Figure 5. Significant regression coefficients (p-value \(<0.05\)) computed on the 320 scores, corresponding to each common EDA for the different datasets (higher is better).
2304.13718 | Sparsified Model Zoo Twins: Investigating Populations of Sparsified
Neural Network Models | With growing size of Neural Networks (NNs), model sparsification to reduce
the computational cost and memory demand for model inference has become of
vital interest for both research and production. While many sparsification
methods have been proposed and successfully applied on individual models, to
the best of our knowledge their behavior and robustness has not yet been
studied on large populations of models. With this paper, we address that gap by
applying two popular sparsification methods on populations of models (so called
model zoos) to create sparsified versions of the original zoos. We investigate
the performance of these two methods for each zoo, compare sparsification
layer-wise, and analyse agreement between original and sparsified populations.
We find both methods to be very robust with magnitude pruning able to outperform
variational dropout with the exception of high sparsification ratios above 80%.
Further, we find sparsified models agree to a high degree with their original
non-sparsified counterpart, and that the performance of original and sparsified
model is highly correlated. Finally, all models of the model zoos and their
sparsified model twins are publicly available: modelzoos.cc. | Dominik Honegger, Konstantin Schürholt, Damian Borth | 2023-04-26T17:55:56Z | http://arxiv.org/abs/2304.13718v1 | # Sparsified Model Zoo Twins: Investigating Populations of Sparsified Neural Network Models
###### Abstract
With growing size of Neural Networks (NNs), model sparsification to reduce the computational cost and memory demand for model inference has become of vital interest for both research and production. While many sparsification methods have been proposed and successfully applied on individual models, to the best of our knowledge their behavior and robustness has not yet been studied on large populations of models. With this paper, we address that gap by applying two popular sparsification methods on populations of models (so called model zoos) to create sparsified versions of the original zoos. We investigate the performance of these two methods for each zoo, compare sparsification layer-wise, and analyse agreement between original and sparsified populations. We find both methods to be very robust with magnitude pruning able to outperform variational dropout with the exception of high sparsification ratios above 80%. Further, we find sparsified models agree to a high degree with their original non-sparsified counterpart, and that the performance of original and sparsified model is highly correlated. Finally, all models of the model zoos and their sparsified model twins are publicly available: modelzoos.cc.
Machine Learning, Sparsified Model
## 1 Introduction
In recent years, deep neural networks have gained significant momentum and popularity, with a general trend of growing in size. This is mainly due to the observed relationship between model size and performance, i.e., larger models tend to have improved performance over their smaller counterparts, as reported by (Kaplan et al., 2020; Tan and Le, 2019; Brock et al., 2018). Unfortunately, the increasing performance results in very high computational and environmental costs for training and inference, as the size of the models continues to increase (Hoefler et al., 2021; Strubell et al., 2019). As an example, the image classification model CoCa, which currently achieves the highest accuracy (91.0%) on the ImageNet dataset, has 2.1 billion parameters (Yu et al., 2022). Forecasts predict that by 2025 there will exist models able to achieve a performance of 95% on ImageNet object classification, but whose training demands as much electricity as New York City emits in \(CO_{2}\) in a month (Thompson et al., 2022).
One approach to tackle this issue is to exploit the over-parameterization of large neural network (NN) models to train successfully, but to reduce their size significantly for inference. According to (Hoefler et al., 2021), sparsification of neural network models can achieve reductions of 10-100x without significant losses in performance, even for extremely large models (Frantar and Alistarh, 2023). By pruning parameters after training, it becomes possible to reduce the required computational power for inference, save energy, or deploy models on mobile devices, embedded systems or satellites with limited storage capabilities (Giaffrida et al., 2021; Hoefler et al., 2021; Howard et al., 2017).
Related work has investigated individual methods of sparsification extensively (Blalock et al., 2020; Hoefler et al., 2021) and has run large-scale studies rigorously evaluating performance differences between different methods (Gale et al., 2019). Contrary to (Gale et al., 2019), who evaluate sparsification on fixed seeds and optimize hyper-parameters for best sparsification, this work evaluates the effect of sparsification on populations of neural network models (so-called "model zoos"). Since neural networks follow a non-convex optimization and are sensitive to hyper-parameter selection, we propose to shift the focus from individual models to populations of neural networks (Schurholt et al., 2021; Schurholt et al., 2022), which are trained according to controlled generating factors, i.e., selection of hyper-parameters, seeds, and initialization methods, to achieve more robust results in studying sparsity.
Our contributions: (1) We generate a sparsified version of an available model zoo (Schurholt et al., 2022) using two popular sparsification methods, namely Variational Dropout
(VD) (Molchanov et al., 2017) and Magnitude Pruning (MP) (Han et al., 2015; Strom, 1997), and thus generate a dataset consisting of 33'920 trained and sparsified CNNs with 1'721'600 unique model states representing their sparsification trajectories. (2) We conduct an in-depth analysis and comparison of the sparsified model zoos and the utilized sparsification methods and find that i) both methods perform robustly on all populations, ii) MP outperforms VD except for some very high sparsity ratios, and iii) higher sparsity ratios are consistently achieved in larger layers across the populations of the model zoos. Since for each individual model a dense, fully parameterised version as well as a sparsified version exists, their relationships can be investigated. Particular attention is paid to investigating how robustly the methods perform on model zoos trained on different datasets and with varying hyperparameter configurations. (3) As expected, on average we observe a performance drop in the populations with increased sparsification. However, within a population, we can find individual models which are less prone to the performance drop (they are sparsification-friendly) or, vice versa, are more strongly affected by it (they are sparsification-hard). (4) Furthermore, we examine the weight spaces of the sparsified model zoos by learning hyper-representations of the individual model parameters and are able to show that model properties such as accuracy and sparsity disentangle very well in the latent space and can be predicted from a model's latent representation.
## 2 Related Work
**Sparsification of Neural Networks** Model sparsification has been studied in depth; (Hoefler et al., 2021) provides a survey of the different approaches. Most sparsification approaches can be categorized as 'data-free' or 'training-aware'. Data-free approaches prune models based on the structure of the neural networks. Magnitude Pruning (MP) (Han et al., 2015; Strom, 1997), as the most common representative, uses the absolute value of parameters as an indicator of importance, but several other approaches have been proposed (Kusupati et al., 2020; Bellec et al., 2017). Training-aware approaches rely on data to identify parameters that have the least impact on the output, based on, e.g., first (Xiao et al., 2019; Ding et al., 2019; Lis et al., 2019; Lee et al., 2018; Srinivas & Babu, 2015) or second order (Hassibi et al., 1993; Cun et al., 1990; Dong et al., 2017; Wang et al., 2019; Theis et al., 2018; Ba et al., 2016; Martens & Grosse, 2015) approximations of the loss function. Variational methods like Variational Dropout (VD) (Molchanov et al., 2017) explicitly model the distribution of parameters and remove those with a high amount of noise.
A large comparative study of sparsification methods found that simple MP can match or outperform more complicated VD on large models (Gale et al., 2019). Similarly, (neuralmagic) offers a selection of sparsified large-scale NNs. Despite the great diversity of sparsification methods and the application of those methods to a diverse range of NNs, sparsification has not yet been applied and studied on a large population of CNNs.
**Populations of Neural Networks** Recently, populations of models have become an object of study. Several approaches predict model properties from model features (Yak et al., 2019; Jiang et al., 2019; Corneanu et al., 2020; Martin & Mahoney, 2019; Unterthiner et al., 2020; Eilertsen et al., 2020) or compare models based on their activations (Raghu et al., 2017; Morcos et al., 2018; Nguyen et al., 2020). Other methods leverage zoos for transfer or meta learning (Liu et al., 2019; Shu et al., 2021; Ramesh & Chaudhari, 2022).
Another line of work investigates the weight space of trained models (Lucas et al., 2021; Benton et al., 2021; Ainsworth et al., 2022; Ilharco et al., 2022).

Figure 1: An overview of the approach. (Left:) A population of neural network models is trained according to latent generating factors such as dataset, architecture and hyperparameters. (Middle:) The given model zoos are sparsified with magnitude pruning and variational dropout. (Right:) Models in the populations are analyzed, and entire model zoos are compared with each other and with their sparsified counterparts. Additionally, representations of the zoos are trained to analyse the underlying structure of the sparsified zoos.

Recently,
several methods have been proposed to learn representations of trained models (Denil et al., 2013; Berardi et al., 2022; Peebles et al., 2022; Ashkenazi et al., 2022; Wang et al., 2023; Navon et al., 2023). (Schurholt et al., 2021; 2022a) proposed a self-supervised approach to learn representations of populations of models, which they dub hyperrepresentations and show to disentangle model properties and be useful to generate new models. Nonetheless, there are only few structured datasets of model zoos. (Gavrikov & Keuper, 2022) publish and analyse a dataset of convolutional filters. (Schurholt et al., 2022c) provide a large dataset of diverse, pre-trained models, which form the basis for our sparsification work.
## 3 Generating Sparsified Model Zoo Twins
To analyse sparsity on populations, we apply two sparsification methods on existing pre-trained model zoos, as outlined in Figures 1 and 2. We select magnitude pruning and variational dropout as representative for data-free and training-aware methods, since they can be applied to small or medium sized CNNs that were already trained to convergence and the methods are appropriate for scaling to large populations of models.
The model zoos of (Schurholt et al., 2022c) serve as a starting point of the sparsification process. We refer to these model zoos as original model zoos. (Schurholt et al., 2022c) establish a setting of varying architectures \(\mathcal{A}\) and hyperparameters \(\lambda\) on different datasets \(\mathcal{D}\) for the generation of their zoos, which we adopt for this work. The zoos were trained on MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011), USPS (Hull, 1994), CIFAR-10 (Krizhevsky, 2009) and STL-10 (Coates et al., 2011) using a small CNN architecture. To sparsify the model zoos, we apply both sparsification methods to the last state of each model in the zoos. To ensure that the sparsified versions of the CNNs can be compared with their original versions, the generating factors \(\mathcal{A}\), \(\lambda\) and \(\mathcal{D}\) of the original models remain unchanged, except for the learning rate.
**Magnitude Pruning** To sparsify model zoos with MP, we select several sparsity ratios and sparsify each model in the zoo accordingly. The corresponding fraction of weights with the smallest absolute value is set to zero and removed from the set of learnable parameters. We use global unstructured MP and rely on the PyTorch implementation (Paganini, 2019). MP generally hurts performance, so we fine-tune the pruned models on their original dataset for a fixed number of epochs to recover accuracy. During fine-tuning, we document each epoch by saving the current state dict of the model and report the test accuracy and generalization gap.
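As a concrete illustration, the following minimal sketch shows how global unstructured MP could be applied with PyTorch's pruning utilities; the helper name and the restriction to conv/linear layers are our own choices, not taken from the paper.

```python
import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model: torch.nn.Module, sparsity: float) -> None:
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value, pooled globally across all conv/linear layers."""
    parameters_to_prune = [
        (module, "weight")
        for module in model.modules()
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear))
    ]
    prune.global_unstructured(
        parameters_to_prune,
        pruning_method=prune.L1Unstructured,  # rank weights by |w|
        amount=sparsity,                      # e.g. 0.9 for 90% sparsity
    )
```

PyTorch implements pruning as a reparameterization: the stored `weight_orig` is multiplied by a binary mask in every forward pass, so pruned entries remain zero throughout the subsequent fine-tuning epochs.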
**Variational Dropout** Following a similar setup, we apply VD for a defined number of epochs to the last state of every model in the model zoos. Following (Gale et al., 2019), we reduce the learning rate compared to the original zoos. After training, the parameters with high variance (\(\alpha\geq 3\)) are removed from the NNs. As VD includes training, we do not fine-tune the models further. We document each training epoch by saving the state dict as well as the accuracy ratio, test accuracy and generalization gap.
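For reference, a minimal sketch of the pruning decision after VD training, assuming the per-parameter dropout rate is computed as \(\alpha=\sigma^{2}/w^{2}\) following Molchanov et al. (2017); the threshold of 3 is the one used in this work.

```python
import torch

ALPHA_THRESHOLD = 3.0  # this paper's criterion: remove weights with alpha >= 3

def vd_keep_mask(weight: torch.Tensor, log_sigma2: torch.Tensor) -> torch.Tensor:
    """Binary keep-mask from the per-weight dropout rate alpha = sigma^2 / w^2."""
    log_alpha = log_sigma2 - torch.log(weight.pow(2) + 1e-8)
    return (log_alpha.exp() < ALPHA_THRESHOLD).float()  # 1 = keep, 0 = prune
```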
Figure 2: (Left:) Test accuracy over the initial training over a fixed number of epochs in the original MNIST Seed model zoo. (Right:) Test accuracy and sparsity over the 25 epochs of VD sparsification starting from the last epoch of the original training.
## 4 Experiments
This section outlines the experimental setup, evaluation, and analysis of generated sparsified populations of NN models.
### Experimental Setup
We sparsify 14 model zoos with VD and 10 model zoos with MP using the methods introduced above. In the case of MP, we sparsify each zoo with sparsity levels \([10,20,30,40,50,60,70,80,90]\)%. This is followed by 15 epochs of fine-tuning, in which the pruned weights do not receive a weight update. For our experiments, we use the pruning library of PyTorch (Paganini, 2019). For VD, each weight parameter of the model receives an additional parameter \(\sigma\). Each model is trained for 25 epochs, and the learnable parameters \(\mathbf{w}\) and \(\sigma\) are optimized. Both \(\mathbf{w}\) and \(\sigma\) are aggregated in a per-parameter value \(\alpha\), and weights are pruned for \(\alpha>3\). For the implementation of VD, we adapted the fully-connected and convolutional layers of PyTorch based on the code of several previous works (Ryzhikov, 2021; Gale et al., 2019; Molchanov et al., 2017).
**Computing Infrastructure:** The model zoos were sparsified on nodes with up to 4 CPUs and 64 GB RAM. Sparsifying a zoo of 1000 models takes 2-3 days. Larger and more complex model zoos, consisting of roughly 2600 models with greater diversity in hyperparameters, may take up to 11 days. Hyper-representations are trained on a GPU of an NVIDIA DGX2 station for up to 12 hours.
### Evaluation
For every model at every state, we record test accuracy, generalization gap and sparsity ratio as fundamental metrics to evaluate models. Further, we compute the agreement between original and sparsified models and learn hyper-representations to evaluate the structure of populations of sparsified models.
**Model Agreement** As one measure for evaluation, we compute the pairwise agreement of models within the sparsified and original model zoos. Two models agree when both predict the same class given the same test data. Per model pair (\(k\) and \(l\)), this is summed up as follows:
\[\kappa_{aggr}=\frac{1}{N}\sum_{i=1}^{N}\lambda_{y_{i}}, \tag{1}\]
for test samples \(i=1,\ldots,N\), where \(\lambda_{y_{i}}=1\) if \(y_{i}^{k}=y_{i}^{l}\) and \(\lambda_{y_{i}}=0\) otherwise.
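A minimal sketch of Eq. (1); `preds_k` and `preds_l` are assumed to hold the predicted class labels of models \(k\) and \(l\) on the shared test set.

```python
import numpy as np

def pairwise_agreement(preds_k: np.ndarray, preds_l: np.ndarray) -> float:
    """kappa_aggr of Eq. (1): fraction of test samples on which the
    two models predict the same class."""
    return float(np.mean(preds_k == preds_l))
```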
**Hyper-Representation Learning** For a deeper understanding of the weight spaces of the model zoos created with VD, we train an attention-based auto-encoder (AE) proposed by (Schurholt et al., 2022a;b). We learn task-agnostic hyper-representations in a self-supervised learning setting. Such representations can provide a proxy for how structured the sparsification process is. Explicitly, they provide insights into how well weights and alphas can be compressed, and how well the latent space disentangles model properties like accuracy or sparsity. We adapt the AE to take non-sparsified weights as input and to reconstruct both the weights and the sparsification maps (\(\alpha\)). To improve the reconstruction quality, we introduce a new loss normalisation for the reconstruction of the alpha parameters, defined as
\[\mathcal{L}_{MSE}^{\alpha}=\frac{1}{M}\sum_{i=1}^{M}\Big{|}\Big{|}\tanh\bigl{(} \frac{\hat{\alpha}_{i}-t}{r}\bigr{)}-\tanh\bigl{(}\frac{\alpha_{i}-t}{r} \bigr{)}\Big{|}\Big{|}_{2}^{2}, \tag{2}\]
where \(t\) refers to the pruning threshold and \(r\) to the selected range of interest. With that, we force the model to pay attention to the active range around the threshold that determines sparsification. Details of the model are shown in Appendix G.
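A minimal PyTorch sketch of Eq. (2); the default threshold \(t=3\) matches the pruning criterion above, while the range \(r\) is a placeholder value of our own.

```python
import torch

def alpha_recon_loss(alpha_hat: torch.Tensor, alpha: torch.Tensor,
                     t: float = 3.0, r: float = 1.0) -> torch.Tensor:
    """Eq. (2): MSE between tanh-squashed alphas, centered on the pruning
    threshold t and scaled by the range of interest r, so reconstruction
    errors near the sparsification decision boundary dominate the loss."""
    return torch.mean(
        (torch.tanh((alpha_hat - t) / r) - torch.tanh((alpha - t) / r)) ** 2
    )
```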
### Experimental Results and Analysis
In this section we analyze the 24 sparsified model zoos. Due to the large scope of the results we only show highlights here and provide full details in Appendix B and C.
**Robust Performance on Population Level:** As previous work investigated sparsification on single models or hyperparameter optimization of sparsification, the robustness of sparsification methods on populations has not yet been evaluated. Related work indicates that pruning the excess parameters of a model reduces overfitting and thus improves test accuracy and generalization (Hoefler et al., 2021; Bartoldson et al., 2020). With further increasing sparsity, functional parts of the models are removed and the performance drops. To investigate the performance of the methods, we consider the sparsity ratio, test accuracy and generalization gap (train accuracy - test accuracy) as metrics. In our experiments, magnitude pruning and variational dropout showed remarkably robust sparsification performance on a population basis, preserving the original accuracy up to considerable levels of sparsity. As illustrated in Figure 2, the distribution of the performance metrics of the individual models in the zoo is very consistent, and the variation between the best and worst performing models is low. Although the standard deviation of the performance is higher for model zoos trained on more sophisticated image datasets (e.g. CIFAR-10), comparable results are achieved. The results furthermore confirm, on a population level, that the generalization gap is lower for models with moderate sparsification levels.
**Larger Layers Achieve Higher Sparsity Ratios** The previous results indicate considerable robustness and consistency in the sparsification results within and between model zoos. To shed further light on sparsification patterns, we investigate the sparsification per layer. Within zoos, the sparsification ratios per layer are remarkably consistent, see Figure 3. Across all zoos, our experiments show that larger layers are pruned more strongly: there is a positive relationship between the number of parameters of a layer and the corresponding sparsity ratio. This relationship is shown in Figure 4. Detailed results regarding the sparsity per layer can be found in Figure 4 and Appendices D and E. This may indicate that the allocation of parameters in the architecture of the original model zoos of (Schurholt et al., 2022c) was not optimal, which is in line with the literature (Hoefler et al., 2021) stating that pruning works particularly well for over-parameterized models.
**Magnitude Pruning outperforms Variational Dropout** Related work found that MP can outperform VD, especially for moderate sparsity ratios (Gale et al., 2019). Our results confirm this at the population level: MP consistently outperforms VD for sparsification levels of up to 80%, see Figure 5 and Appendices B and C. At higher sparsification levels, MP shows steep drops in performance, whereas VD is more stable on some zoos and thus shows higher performance there, justifying its larger parameter count and computational load.
**Agreement between Twin and Original Model Zoos** By analysing the agreement between the original and twin models, we investigate how well the two methods preserve the behavior of the original models, beyond loss or accuracy. The agreement is evaluated for six model zoos at a sparsity ratio of 60%, 70%, 80% or 90%, selected per zoo such that a favourable accuracy-sparsity trade-off is achieved with variational dropout.
The results show relatively high levels of agreement, between 60% and 80%, for both methods. Unsurprisingly, the agreement is higher for overall higher levels of accuracy. Generally, MP achieves higher accuracy and agreement, and therefore appears to preserve the original behavior of the models better. Our results indicate that simple performance metrics like accuracy may be a good proxy to estimate preserved behavior like agreement.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**Magnitude Pruning**} & \multicolumn{3}{c}{**Variational Dropout**} \\
**Model Zoo** & **Accuracy** & **Sparsity** & **Agreement** & **Accuracy** & **Sparsity** & **Agreement** \\ \hline MNIST (s) & 83.7 (13.5) & 80.0 (0.0) & 82.1 (13.0) & 87.6 (1.2) & 78.0 (1.1) & **83.4 (1.4)** \\ USPS (s) & 73.8 (1.7) & 90.0 (0.0) & **73.1** (1.3) & 82.3 (1.5) & 85.8 (6.8) & **86.6 (1.0)** \\ SVHN (s) & 70.7 (7.7) & 60.0 (0.0) & **74.9** (1.5) & 62.2 (2.2) & 52.8 (2.9) & 57.5 (5.0) \\ FMNIST (s) & 73.6 (1.3) & 70.0 (0.0) & **70.7 (1.9)** & 69.3 (1.2) & 72.0 (1.0) & 76.2 (2.1) \\ CIFAR-10 (s) & 47.3 (1.2) & 70.0 (0.0) & **78.4 (1.2)** & 40.5 (1.5) & 67.1 (2.8) & 61.0 (2.7) \\ STL-10 (s) & 40.6 (0.8) & 70.0 (0.0) & **85.9 (2.8)** & 35.9 (1.2) & 66.5 (1.3) & 54.4 (2.8) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Agreement overview between original and twin Seed model zoos. Values are reported as mean (std) in %. Higher values indicate higher agreement.
Figure 4: Binned scatter plot of all model zoos sparsified with variational dropout at epoch 5, 10, 15 and 20. Epoch 25 is not shown because certain model zoos collapsed at high sparsity ratios and this would distort the plot. The x-axis shows the logarithmized layer size, the y-axis the logarithmized mean sparsity level. The error bar represents the standard deviation of the sparsity.
Figure 5: Mean accuracy per zoo over sparsity for a selection of model zoos sparsified with VD and MP. MP outperforms VD up to sparsity levels of 80%. At higher sparsity, MP performance drops, VD performance is more stable.
Figure 3: Sparsification frequency per weight for the MNIST zoo at different VD epochs. Within layers, there is remarkable consistency. Further, different layers are pruned in different phases.
### Performance of Original and Sparsified Models is Correlated
The sparsification of populations shows remarkably robust results, as indicated above. Nonetheless, there is a spread in the performance of sparsified models, see Figure 2. In practice, it is relevant to identify candidates for high performance at high sparsity before sparsification. As a first approximation, we compute the correlation between model performance before and after sparsification. We use Pearson's r as well as Kendall's tau coefficients: the former measures the covariance normalized by the product of the standard deviations, the latter measures agreement in rank order between the two paired samples. The results show a remarkably high correlation between original and sparsified models, see Table 2. For fixed sparsity levels with MP, the Pearson's r correlation is above 90% with a single exception. Kendall's tau is similarly high, indicating that the rank order of samples is preserved to a high degree. Since the sparsification levels of VD zoos are not as consistent, the correlation values are lower, but they confirm the finding. Consequently, based on the results of the sparsified populations, the best performing original models will likely be the best, or among the best, sparsified models.
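Both coefficients are available in SciPy; a minimal sketch, where `acc_before` and `acc_after` are assumed to hold the per-model test accuracies of a zoo before and after sparsification.

```python
from scipy import stats

def performance_correlation(acc_before, acc_after):
    """Pearson's r (linear association) and Kendall's tau (rank order)
    between accuracies before and after sparsification."""
    r, _ = stats.pearsonr(acc_before, acc_after)
    tau, _ = stats.kendalltau(acc_before, acc_after)
    return r, tau
```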
### Disentangled Representations learned from Weight Space
With the revised AE and its novel loss normalisation, we are able to reconstruct not only the weight spaces of the CNNs but also the alpha parameters needed for the pruning decision in VD. The results are remarkable in that both accuracy and sparsity are highly predictable and thus disentangled very well in latent space. What is more, both the weights and the alphas are reconstructed well, indicating a high degree of structure in populations of sparsified models. This opens the door for future attempts to zero-shot sparsify models by impressing such structure on pre-trained models. The results are shown in Table 3.
## 5 Conclusion
In this work, we have analyzed sparsification on large populations of neural networks. Using magnitude pruning and variational dropout as the underlying sparsification approaches, we have created sparsified model zoo twins representing common computer vision datasets. In total, we have created 23'920 sparsified models with 1'726'000 documented model states. We confirm that both approaches - magnitude pruning (MP) and variational dropout (VD) - perform well on the population level with respect to sparsification ratio and accuracy. For sparsification ratios below 80%, MP outperforms VD; at higher sparsification ratios, both methods degrade, but VD is more stable. Sparsified models show high agreement with their original models, with no clear preference between the two sparsification approaches. We further find that performance before and after sparsification is highly correlated, indicating that the best performing model is the best candidate for sparsification. The sparsification characteristics per layer within the zoos are surprisingly consistent. This gives rise to learning hyper-representations on sparsified model zoos, which proves to be unexpectedly successful. This indicates that sparsification is highly structured, which may be exploited for zero-shot sparsification.
|
2306.05717 | A Novel Satellite Selection Algorithm Using LSTM Neural Networks For
Single-epoch Localization | This work presents a new approach for detection and exclusion (or
de-weighting) of pseudo-range measurements from the Global Navigation Satellite
System (GNSS) in order to improve the accuracy of single-epoch positioning,
which is an essential prerequisite for maintaining good navigation performance
in challenging operating contexts (e.g., under Non-Line of Sight and/or
multipath propagation). Beyond the usual preliminary hard decision stage, which
can mainly reject obvious outliers, our approach exploits machine learning to
optimize the relative contributions from all available satellites feeding the
positioning solver. For this, we construct a customized matrix of pseudorange
residuals that is used as an input to the proposed longshort term memory neural
network (LSTM NN) architecture. The latter is trained to predict several
quality indicators that roughly approximate the standard deviations of
pseudo-range errors, which are further integrated in the calculation of
weights. Our numerical evaluations on both synthetic and real data show that
the proposed solution is able to outperform conventional weighting and signal
selection strategies from the state-of-the-art, while fairly approaching optimal
positioning accuracy. | Ibrahim Sbeity, Christophe Villien, Christophe Combettes, Benoît Denis, E Veronica Belmega, Marwa Chafii | 2023-06-09T07:29:37Z | http://arxiv.org/abs/2306.05717v1 | # A Novel Satellite Selection Algorithm Using LSTM Neural Networks For Single-epoch Localization
###### Abstract
This work presents a new approach for detection and exclusion (or de-weighting) of pseudo-range measurements from the Global Navigation Satellite System (GNSS) in order to improve the accuracy of single-epoch positioning, which is an essential prerequisite for maintaining good navigation performance in challenging operating contexts (e.g., under Non-Line of Sight and/or multipath propagation). Beyond the usual preliminary hard decision stage, which can mainly reject obvious outliers, our approach exploits machine learning to optimize the relative contributions from all available satellites feeding the positioning solver. For this, we construct a customized matrix of pseudo-range residuals that is used as an input to the proposed long-short term memory neural network (LSTM NN) architecture. The latter is trained to predict several quality indicators that roughly approximate the standard deviations of pseudo-range errors, which are further integrated in the calculation of weights. Our numerical evaluations on both synthetic and real data show that the proposed solution is able to outperform conventional weighting and signal selection strategies from the state-of-the-art, while fairly approaching optimal positioning accuracy.
Global Navigation Satellite System, Satellite Selection, Single-epoch Positioning, Machine (Deep) Learning, Long-Short Term Memory Neural Network
## I Introduction
Accurate and resilient positioning services based on the Global Navigation Satellite System (GNSS) have become essential in a variety of outdoor applications, such as autonomous vehicles and unmanned aerial vehicles, blue-force and first-responder tracking, seamless end-to-end logistics and supply-chain optimization, large-scale crowd-sensing, etc.
By itself, single-epoch GNSS localization is a key enabler that can provide tracking filters with input observations so as to further refine the mobile position and exploit its dynamics through hybrid data fusion (e.g., combining GNSS with other modalities from inertial sensors, odometers, etc.). Beyond that, single-epoch localization is first and foremost typically used to initialize the navigation processor in charge of tracking the GNSS filtered solution [1].
The signals received from satellites can be severely affected by Non-Line of Sight (NLOS) and multipath (MP) propagation in harsh environments, such as urban canyons, hence leading to strongly biased pseudo-range measurements. In this context, selecting the most reliable and/or most informative measurements (while discarding the most harmful ones) is of primary importance to preserve the accuracy of this preliminary single-epoch positioning stage and, hence, of the overall navigation system. This down-selection step is even more critical and challenging as several tens of measurements are typically made available within modern GNSS receivers at each time epoch (i.e., while considering multiple satellites from different constellations), making exhaustive search computationally prohibitive. Most basic selection approaches mainly exploit signal features at single link level (e.g. carrier-to-noise power density ratio \(C/N_{0}\), elevation angle \(\theta\), etc.) so as to exclude - or mitigate the influence of - satellites that would presumably contribute to large positioning errors [2, 3].
For the remaining satellites fulfilling these basic single-link quality criteria, many selection techniques have been proposed such as, subset-testing [4], RANSAC [5], iterative reweighting [6], etc. These methods involve different tradeoffs between computational complexity and performance. However, because of the huge combinatorial complexity of testing all possible subsets of satellites, exhaustive search is not tractable and this selection problem remains an open issue to the best of our knowledge. For instance, most conventional selection approaches (e.g., [7]) rely on the spatial distribution of intermediary positioning results conditioned upon specific subsets of the available satellites to determine the most harmful contributions through posterior consensus. However, they mainly exclude satellites with strongly biased pseudo-ranges, which may still be insufficient to draw the best possible accuracy out of selected pseudo-ranges, given that their respective - and even joint - negative influence is not properly mitigated.
In this paper, we introduce a novel pre-processing approach suited to single-epoch stand-alone positioning, which aims at overcoming major drawbacks of conventional selection techniques. More specifically, we redefine the initial satellites selection problem as a weighting problem, where one first
originality lies in the application of machine learning (ML) into the domain/space of pseudo-range residuals (i.e., relying on intermediary positioning results conditioned on specific subsets of satellites) for determining the best satellite weights.
In order to fully exploit the potential of deep learning tools in harnessing hidden correlations between pseudo-range measurements, as well as possible joint effects from discarding several satellites at a time (on final positioning performance), we exploit a long-short term memory neural network (LSTM NN), which is fed with a customized pseudo-range residuals matrix (processed as a whole), representing a second originality of our contribution. This NN is trained to predict quality factors that account for the link-wise standard deviations of pseudo-range errors. These predictions are finally used to compute nearly-optimal satellite weights within a standard weighted least squares (WLS) positioning solver.
To sum up, our main contributions are twofold. First, we introduce a novel deep-learning-based technique to solve the satellite selection problem, dedicated to improving the accuracy of single-epoch positioning. The main ingredients of our approach are the LSTM NN architecture coupled with the new and customized pseudo-range residual matrix used as the NN input. Second, we improve the computation of the measurement weights in comparison with conventional parametric methods.
Finally, our approach is tested on both synthetic simulation data and real-field experimental data from extensive measurement campaigns, which were conducted with a dedicated test platform under typical vehicular mobility in a variety of scenarios and environments.
Note that our approach can be beneficial to both real-time location-based applications requiring accurate positioning information (e.g., autonomous vehicles, unmanned aerial vehicles...) and offline post-processing applications (e.g., aiming at correcting raw online GNSS trajectory a posteriori).
The rest of this paper is structured as follows. First, Section II introduces the system model and the general problem formulation. On this occasion, we also recall representative satellites selection techniques from the literature. Then, our construction of the residuals matrix, its use as an input to the LSTM NN, as well as the underlying machine learning model, are detailed in Section III. Finally, numerical results on both synthetic and real datasets are analyzed in Section IV.
## II Problem Formulation
We start by describing the problem under study and then we summarize the most representative existing work.
### _Single epoch solution_
For the sake of simplicity and without any loss of generality, we consider a set of \(N\) single-band, single-constellation pseudo-range measurements \(\{\rho^{i}\}_{i=1...N}\) (the same formulation would apply to carrier phase, pseudo-range rates, etc.), while for the experimental validations reported in Sec. IV we deal with a multi-band, multi-constellation scenario. For those pseudo-range measurements, the appropriate compensations computed from ephemeris data (satellite vehicle (SV) clock bias, ionospheric and tropospheric delays, Sagnac correction, etc.) have already been applied. In the absence of multi-path or any strong bias, the \(i\)-th SV measurement can be modeled as
\[\rho^{i}=\sqrt{(x-x^{i})^{2}+(y-y^{i})^{2}+(z-z^{i})^{2}}+c\ \delta+\eta^{i}, \tag{1}\]
where \(\rho^{i}\) is the pseudo-range between a receiver \(R\) and the \(i\)-th SV, and \((x^{i},y^{i},z^{i})\) and \((x,y,z)\) the coordinates for the \(i\)-th satellite and the receiver, respectively. The parameter \(c\) is the speed of light, and \(\delta\) is the clock bias between the receiver and the considered constellation; \(\eta^{i}\) is the observation noise which represents the receiver noise and residual errors from ionosphere and troposphere delays, etc. Although ionosphere and troposphere residual errors (i.e. after correction from navigation message) are highly correlated over time, they could be considered as independent and zero mean for single epoch processing. Hence, we can assume that the observation noise follows a centered Gaussian distribution \(\eta^{i}\sim\mathcal{N}(0,\,\sigma_{i}^{2})\).
Our aim is to estimate the vector \(\mathbf{X}=[x,y,z,\delta]^{\top}\) from the measurements \(\{\rho^{i}\}_{i=1...N}\). A widely used and efficient solution is provided by the maximum-likelihood estimator (MLE) [8], which simplifies to a weighted least-squares for our Gaussian noise model
\[\hat{\mathbf{X}}=\arg\min_{\mathbf{X}}\ \sum_{i=1}^{N}\omega^{i}(\rho^{i}-h^{i}( \mathbf{X}))^{2}, \tag{2}\]
with the observation function for satellite \(S_{i}\) defined as
\[h^{i}(\mathbf{X})=\sqrt{(x-x^{i})^{2}+(y-y^{i})^{2}+(z-z^{i})^{2}}+c\ \delta, \tag{3}\]
and the weights equal to
\[\omega^{i}=\frac{1}{(\sigma^{2})^{i}}. \tag{4}\]
The solution can be easily computed using an optimization algorithm such as Gauss-Newton or Levenberg-Marquardt [9].
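For illustration, a minimal Gauss-Newton sketch of Eqs. (2)-(4); the variable names, Earth-center initialization and fixed iteration count are our own simplifications, not the paper's implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def wls_position(rho, sats, w, x0=None, iters=10):
    """Solve Eq. (2) for X = [x, y, z, delta] by Gauss-Newton.
    rho: (N,) corrected pseudo-ranges [m]; sats: (N, 3) SV positions [m];
    w: (N,) weights from Eq. (4); delta is the clock bias [s]."""
    rho, sats, w = (np.asarray(a, float) for a in (rho, sats, w))
    X = np.zeros(4) if x0 is None else np.asarray(x0, float).copy()
    for _ in range(iters):
        diff = X[:3] - sats
        d = np.linalg.norm(diff, axis=1)            # geometric ranges
        r = rho - (d + C * X[3])                    # residuals w.r.t. Eq. (3)
        J = np.hstack([diff / d[:, None],           # d h / d(x, y, z)
                       np.full((len(rho), 1), C)])  # d h / d(delta)
        X += np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    return X
```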
### _GNSS Satellite Selection and Weighting Problems_
Generally, a first basic SV selection based on satellite elevation or \(C/N_{0}\) thresholds is performed to exclude presumably strongly biased measurements. Then, the standard deviation of the remaining measurements is estimated using an empirical function, for example the following:
\[(\sigma^{2})^{i}=\frac{1}{\sin^{2}{(\theta^{i})}}\left(\sigma_{\rho Z}^{2}+ \frac{\sigma_{\rho c}^{2}}{(C/N_{0})^{i}}+\sigma_{\rho a}^{2}(a^{2})^{i} \right), \tag{5}\]
where this function mainly depends on the satellite elevation \(\theta^{i}\), \(C/N_{0}\), the acceleration \(a^{i}\), and other empirically calibrated coefficients (\(\sigma_{\rho Z}^{2},\sigma_{\rho c}^{2},\sigma_{\rho a}^{2}\)) that are hard to tune.
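As a small worked example of Eq. (5), with \(C/N_{0}\) assumed in linear units and the calibration constants passed in explicitly:

```python
import numpy as np

def empirical_variance(theta, cn0, a, s_z2, s_c2, s_a2):
    """Eq. (5): per-satellite pseudo-range variance from elevation theta [rad],
    carrier-to-noise ratio cn0 (linear units) and acceleration a; the s_*
    arguments are the empirically calibrated coefficients."""
    return (s_z2 + s_c2 / cn0 + s_a2 * a**2) / np.sin(theta) ** 2
```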
However, some measurements can be strongly biased, by multi-path for instance, and violate the expected Gaussian model, resulting in a significant degradation of the solution accuracy. It is thus of primary importance to exclude these measurements from the solution, either by discarding them or by assigning them a zero weight, which is called de-weighting. Such measurements can be efficiently detected at the navigation processor stage based on innovation monitoring tests for instance [10], but this requires that the tracking filter has converged and that the predicted state (i.e., position, receiver clock offset, etc.) is accurate enough. Nevertheless, for single-epoch processing, no predicted solution is available and SV selection relies on measurements only, with very limited prior knowledge. This implies that the detection of \(k\) faults among \(N\) measurements could potentially result in a huge number of subsets, \(C_{k}^{N}\), to test in case of an exhaustive search, which may be intractable in real time and even for post-processing. As an example, assuming at most 10 faults among 40 measurements would result in more than \(847\times 10^{6}\) subsets to test, which is computationally prohibitive.
### _Existing Works_
In GNSS, fault detection and exclusion (FDE) usually rely on statistical tests and consistency checks to identify and reject corrupted signals to improve the integrity of the navigation solution. Various FDE techniques already exist in the literature, such as classical FDE [11, 12], the brute force subset testing approach [13], ARAIM techniques [14, 15], and others that depend on the Range Consensus (RANCO) [7]. More recent works [16] have proposed a new FDE algorithm that is based on both a standalone FDE block making use of the residual test relying on WLS, and an FDE-based Extended Kalman Filter (EKF). This solution alternates between the two branches, based on a covariance matrix threshold. Typically, the EKF utilizing FDE is employed when the covariance matrix falls below a pre-defined threshold. If the covariance exceeds this limit, the FDE is used on its own. This algorithm was shown to provide significant performance gains in terms of accuracy compared to conventional state-of-the-art FDE algorithms. For this reason, we choose it as a reference for benchmark purposes in this paper. However, since we consider by definition a single-epoch localization application, we have configured this algorithm to use only its standalone FDE block.
## III Proposed System Architecture
As mentioned above, we herein consider a single-epoch stand-alone positioning framework based on pseudo-ranges, without performing differential corrections, where measurements are just pre-processed based on the ephemeris data (i.e., compensating for ionospheric, tropospheric, and Sagnac effects, as well as satellite clock errors). One main objective is hence to leverage the complex - and likely hidden - inter-dependencies and joint effects (over multiple links) through supervised deep machine learning, so as to efficiently weight (or de-weight) the contributions from all satellites.
Overcoming the difficulty of labeling the data per link on the one hand, while still observing the joint effects of multiple links on positioning performance from various standpoints (e.g., link quality, Geometric Dilution of Precision (GDoP), etc.) on the other hand, ML is thus applied directly in the space/domain of pseudo-range residuals. For this sake, our approach consists in constructing a matrix of such residuals, where the \(i\)-th row contains all the residuals computed from a solution excluding the \(i\)-th measurement. This matrix is then used as the input of an LSTM NN, which is trained to predict the weights \(\hat{\omega}^{i}\) related to the underlying distribution of pseudo-range errors according to (4) for valid measurements, noting that a single measurement is observed from each distribution \(\mathcal{N}(0,\,\sigma_{i}^{2})\). In addition, we expect the algorithm to predict nearly null weights \(\hat{\omega}^{i}\approx 0\) for strongly biased satellites, so as to exclude them. Accordingly, we turn the initial (hard) selection problem into a (soft) weighting problem. The overall system architecture of our proposed approach is illustrated in Fig. 1.
### _Residual Matrices Construction_
At each navigation epoch, we assume that multiple (\(N\)) satellite signals are received, and a new matrix of positioning residuals \(\mathbf{M}\) is constructed as follows. We generate \(N\) subsets \(S_{n}\) of \(N-1\) satellites each, where one distinct satellite (the \(n\)-th) is excluded at a time:
\[S_{n}=\{\rho^{i}\},\ i=1\ldots N,\ i\neq n \tag{6}\]
For each subset \(S_{n}\), we calculate the corresponding solution \(X_{n}\) using (2). Then, for each of the \(N\) resulting positions \(\{\mathbf{X}_{1},...,\mathbf{X}_{N}\}\) we calculate the \(N-1\) pseudo-range residuals
\[\delta\rho_{X_{n}}^{i}=\rho^{i}-h^{i}(\mathbf{X}_{n}),\ \ i\neq n \tag{7}\]
Coefficient \([\mathbf{M}]_{n,i}\) (i.e., row \(n\), column \(i\)) of the residual matrix \(\mathbf{M}\) is simply given by the corresponding residual for non-diagonal coefficients, or by an arbitrary large value \(\gamma\) for the diagonal terms, indicating that the satellite has been excluded (the pseudo-code for constructing the residual matrix is shown in Algorithm 1):
\[[\mathbf{M}]_{n,i}=\begin{cases}\delta\rho_{X_{n}}^{i},&i\neq n\\ \gamma,\ i=n\end{cases} \tag{8}\]
Each row \(n\) of the matrix thus provides the residuals associated with the exclusion of the \(n\)-th measurement. Although it assumes a single fault per subset (i.e., per row), the motivation for building such a matrix is that it may reveal hidden interdependencies between the measurements and their effects on the computed solution, while being fed as a single input to the neural network.
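Algorithm 1 is not reproduced in this extract; the following sketch of the leave-one-out construction of Eqs. (6)-(8) reuses the `wls_position` helper and constant `C` sketched in Sec. II-A, with the diagonal marker \(\gamma\) set to a placeholder value of our own.

```python
import numpy as np

GAMMA = 1e4  # arbitrary large value gamma for the diagonal (assumption)

def residual_matrix(rho, sats, w):
    """Row n holds the residuals of all satellites w.r.t. the WLS
    solution computed while excluding satellite n (Eqs. 6-8)."""
    rho, sats, w = (np.asarray(a, float) for a in (rho, sats, w))
    N = len(rho)
    M = np.full((N, N), GAMMA)
    for n in range(N):
        keep = np.arange(N) != n
        Xn = wls_position(rho[keep], sats[keep], w[keep])      # Eq. (2), without SV n
        h = np.linalg.norm(sats - Xn[:3], axis=1) + C * Xn[3]  # Eq. (3)
        M[n, keep] = (rho - h)[keep]                           # Eq. (7)
    return M
```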
### _Long-Short Term Memory Neural Network_
The overall structure of the proposed residual matrix and more particularly, the evolution of the pseudo-range residual patterns over the \(N\) subsets (i.e., over the matrix rows), both provide rich information about the sought inter-dependencies and their combined effects, which can be advantageously captured by machine learning tools. By analogy, the pseudo-range residuals matrix can be interpreted as a concatenation of \(N\) feature column vectors, each containing the feature values for \(N\) pseudo time steps. Each feature column vector thus corresponds to the set of pseudo-range residuals of one single satellite from each of the \(N\) subsets.
In this kind of problems, the LSTM NN algorithm, which is a type of recurrent neural network (RNN) [17], has the advantage of keeping memory over multiple (possibly distant) pseudo time steps. Hence, is also suited to exploit the correlations across the matrix rows in our case, even if we explicitly deal with a single-epoch problem. Similar applications of the LSTM NN to other time-invariant problems have already been considered. For instance, in [18], LSTM NN was used to process data with long-range interdependence (i.e., using geometric properties of the trajectory for unconstrained handwriting recognition).
## IV Numerical Results
Here, we analyse the performance of our proposed deep learning based approach obtained on synthetic and real data.
### _Synthetic Data_
In order to thoroughly test and validate the feasibility of our proposed approach in capturing the hidden inter-dependencies within the residual matrix, as well as its ability to accurately approximate the standard deviations of pseudo-range errors, we first generate a dataset of synthetic simulation data mimicking the behavior of a real GNSS system. This allows us to illustrate the main performance trends, as well as to validate our approach before applying it to real-world data (see Section IV-B).
This dataset consists of \(180,000\) epochs, each containing \(N=60\) pseudo-range measurements with respect to \(60\) different satellites. The random noise terms affecting those pseudo-ranges are supposed to be independent and identically distributed (i.i.d.), following a distribution obtained as a mixture of normal and exponential distributions. The latter shall indeed account for the possibility to experience either typical errors expected from so-called "good" satellites or much more harmful (and likely positively biased) outlier measurements from "bad" satellites
\[f(x)=\alpha\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}+(1- \alpha)\lambda e^{-\lambda x}, \tag{9}\]
where \(\alpha\) is the mixture parameter, which represents the probability that the measurement error realization is drawn from the normal distribution of mean \(\mu\) and standard deviation \(\sigma\), and \(\lambda\) is the decay rate of the exponential distribution. All these parameters were fine-tuned so as to fit the empirical distribution of real pseudo-range measurements, which were collected along with a reference ground-truth system in various operating conditions. The latter will be used in Subsection IV-B for further performance assessment.
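A minimal sketch of sampling pseudo-range errors from the mixture of Eq. (9); the parameter values below are placeholders, not the fitted ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_errors(n, alpha=0.9, mu=0.0, sigma=1.0, lam=0.1):
    """Draw n errors: with probability alpha from N(mu, sigma^2) ("good" SVs),
    otherwise from Exp(lam) ("bad", positively biased SVs)."""
    gaussian = rng.normal(mu, sigma, size=n)
    outliers = rng.exponential(scale=1.0 / lam, size=n)  # numpy scale = 1/lambda
    return np.where(rng.random(n) < alpha, gaussian, outliers)
```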
The generated synthetic data was then divided into three disjoint datasets, training, validation, and testing, with respective proportions of \(60\%\), \(20\%\), and \(20\%\) [19], in order to ensure a robust evaluation of the performance of our approach. Dividing the data into three sets is a common practice in machine learning, known as the train-validation-test split. The training dataset is used to train the neural network and learn the underlying patterns in the data. The validation dataset is used to evaluate the performance of the neural network during training and to fine-tune the hyperparameters. The testing dataset is used as an independent measure of the generalization performance of the prediction, which indicates how well the neural network will perform on new, unseen data. This technique helps prevent overfitting, which occurs when a neural network fits the training dataset too closely but performs poorly on new, unseen data.
Fig. 1: The complete architecture of our proposed approach.

Note that labelling in our decision problem is a very challenging task, since finding the best set of satellites is not tractable due to prohibitive combinatorial complexity (even offline and knowing the reference). This is the reason why we decided to convert the initial decision problem into a weighting problem. For supervised training, the data was hence labeled as:
\[\omega^{i}=1/(\rho^{i}-h(\mathbf{X}_{true}))^{2}, \tag{10}\]
where \(\mathbf{X}_{true}\) is the ground truth position. This intuitive choice is validated by our performance curves in Fig. 2 and 3.
Using this synthetic dataset, an empirical evaluation was first conducted on multiple neural network architectures, including convolutional neural networks (CNN), fully connected neural networks (FCNN), and various types of recurrent neural networks (simple RNN, LSTM, Bi-LSTM and gated recurrent units (GRU)). The simulation results confirmed that the LSTM algorithm provides the best WLS positioning performance (i.e., after applying the best weights out of the NN predictions) for our problem. Its architecture was hence further optimized empirically during the training phase, based on real data. The best neural network consists of one hidden LSTM layer containing \(512\) neurons, followed by a dense layer containing \(60\) neurons with a ReLU activation function. The mean squared error (MSE) was used as the loss function, and an early-stopping callback was implemented to prevent overfitting. The neural weights were optimized during training with _Adam_ [20].
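A minimal Keras sketch of the described architecture, reading the \(60\times 60\) residual matrix as 60 pseudo time steps of 60 features; the early-stopping patience is a placeholder value of our own.

```python
import tensorflow as tf

def build_lstm_model(n_sats: int = 60) -> tf.keras.Model:
    """One hidden LSTM layer (512 units) followed by a dense layer of
    n_sats ReLU units, one quality factor per satellite; MSE loss, Adam."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(512, input_shape=(n_sats, n_sats)),  # matrix M as input
        tf.keras.layers.Dense(n_sats, activation="relu"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True  # placeholder patience
)
```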
Finally, the WLS positioning performance based on our prediction method has been compared with theoretical bounds (i.e., Cramer Rao lower bound (CRLB)) [21], as well as two genie-aided solutions on the one hand, and a suboptimal unweighted strategy on the other hand (i.e., as a worst-case baseline solution with equally-weighted pseudo-ranges from all the satellites). More precisely, the first tested method, referred to as "Ground-truth Weights", utilizes weights as in (10) with the ground-truth position, before applying WLS positioning. The second method, named "Predicted weights", uses the proposed algorithm to predict link-wise quality factors which is used to compute weights as in (4) for WLS positioning. The third method, referred to as the "Genie-aided" solution, is based on the assumption that a prior knowledge of the biased satellites is available, and utilizes this information to completely exclude biased satellites from the localization solution. This method serves here as a first reference in our benchmark, accounting for the best performance that could be achieved with perfect detection and exclusion of biased satellites. Finally, in the fourth "Equal weights" method, all the satellite measurements are taken into account in WLS positioning (i.e., with the same relative importance, regardless of their actual errors). This somehow represents a worst-case assumption without any exclusion, and serves as another reference in our benchmark, accounting for the performance that can be achieved without any prior knowledge, neither of the biases themselves, nor of their respective statistics.
Fig. 2 and 3 show the empirical cumulative density function (CDF) of horizontal and vertical positioning errors obtained with the four weighting strategies above. Those methods were first applied on a dataset including \(9\%\) of strongly biased satellites (i.e., among the \(60\) available). When compared to the evenly weighted positioning solution, the proposed approach is then shown to yield a typical accuracy improvement in \(95\%\) of the tested epochs (i.e., CDF at the characteristic \(95\%\)-quantile) of about \(1.02\) m in terms of horizontal errors (See Fig. 2) and \(1.18\) m in terms of vertical errors (See Fig. 3). On the other hand, when compared with the "Genie-aided" approach, the difference in accuracy for \(95\%\) of the tested epochs is about \(0.59\) m in terms of horizontal error, and \(0.77\) m in terms of vertical error.
Fig. 4 presents the 1-\(\sigma\) confidence ellipse for various weighting approaches in comparison with the CRLB. The results are acquired through the execution of Monte Carlo trials for a specified satellite and receiver geometry. The "Genie-aided" weighting approach shows close agreement with the CRLB in terms of accuracy, while the evenly weighted approach demonstrates inferior accuracy. The "Predicted weights" extracted from our approach, exhibits accuracy that lies between the CRLB and the "Equal weights" with a tendency towards the CRLB. The "Ground-truth Weights" approach outperforms the CRLB since noise mitigation is based on a perfect knowledge
Fig. 3: Empirical CDF of vertical positioning error for various measurements weighting methods, based on synthetic data.
Fig. 2: Empirical CDF of horizontal positioning error for various measurements weighting methods, based on synthetic data.
of the exact bias, which would not be available to any real estimator.
Still based on synthetic data, a sensitivity study was also conducted to analyze the performance of our approach for different percentages of strongly biased satellites. A preliminary analysis of the real dataset revealed that concrete real-life cases could have up to \(9\%\) biased satellites (i.e., among all the available satellites per epoch). Accordingly, we generated different synthetic datasets, while varying the percentage of strongly-biased satellites (among the \(60\)).
Fig. 5 shows the evolution of the positioning error at \(95\%\) of the CDF (i.e., the \(95\%\)-percentile) as a function of this varying ratio. The results indicate that, as the proportion of biased satellites increases, the performance gain achieved with our approach in comparison with the equally weighted approach tends to grow more rapidly than the performance degradation observed in comparison with the idealized "Genie-aided weights" approach, remaining in the same order of magnitude (i.e., less than \(2\) m vs. more than \(2.6\) m with the "Equal weights" strategy). This allowed us to further validate the robustness of our approach (i.e., far beyond the toy case of one single biased satellite), before applying it to real-world data, as discussed in the next section.
### _Real Data_
Here, we present the most representative results obtained through extensive experimental testing and validation based on real-life data, which was collected from multiple measurement campaigns in a variety of operating conditions. These conditions included open skies, dense urban areas, and various mobility regimes. The data was collected using cutting-edge equipment, more specifically a u-blox ZED-F9P receiver, which is dual-band and endowed with Real-Time Kinematics (RTK) capabilities. A separate cm-level ground-truth referencing system was also utilized to ensure the accuracy and integrity of the collected data, as well as to establish the ground-truth information. The tested GNSS receiver is capable of receiving up to \(N=60\) satellite signals from multiple GNSS constellations (i.e. GPS, GLONASS, GALILEO, etc.) over multiple frequencies (i.e. L1, L2, E1, and E5 bands). Overall, this testing phase consisted of \(56\) distinct sessions for a total of \(181,000\) epochs, providing a comprehensive, representative and diverse dataset for evaluation.
The number of satellites received during each epoch can fluctuate, depending on the operating conditions. In certain environments, such as crowded urban areas, the number of available satellites can be significantly lower than in open-sky conditions, as illustrated in Fig. 6. The figure depicts the variation in satellite availability during navigation in two conditions: open sky and urban areas. During this session, the availability in open-sky regions remains stable at approximately \(30\) satellites, whereas in narrow urban canyons such as the one depicted in Fig. 6, the availability drops to a minimum of \(9\) satellites, as indicated by the color-coded fluctuations. It is worth noting that in other sessions, the number of available satellites may increase up to \(45\). This variability is also influenced by the position of the satellites at the time of navigation. As a result, the dimension of our residual matrix can vary significantly as well. To address this issue, we employed padding to resize all the residual matrices to a common size. Additionally, we leveraged the ability of LSTM networks to handle variable-length input sequences by masking the non-present steps of the sequence, represented by rows of the residual matrix in our representation.
Fig. 4: Comparison of positioning confidence ellipses for various approaches with the CRLB.

Fig. 5: \(95\%\)-quantile of positioning error for different measurements weighting methods, as a function of the ratio of biased satellites (among \(60\)), based on synthetic data.

To optimize the neural network architecture with a minimal number of layers and neurons while preserving performance, we employed a grid search, varying the number of hidden layers and the number of neurons per layer as hyperparameters. To prevent overfitting, we utilized an early-stopping callback during the training process. The performance of each trained network was evaluated on an unseen test dataset. Through this process, we determined that the optimal architecture consists of \(2\) hidden layers with \(893\) neurons each (see Fig. 1). Moreover, the neural network was trained using labels defined according to (10), where \(\mathbf{X}_{true}\) stands for the ground-truth position collected from the reference ground-truth system.
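Combining the grid-searched architecture with the padding and masking strategy described above, a minimal sketch might look as follows; the padding value, the reading of the "2 hidden layers" as two stacked LSTM layers, and the output layer are our assumptions (column padding for absent satellites is simply left as constant feature values).

```python
import tensorflow as tf

PAD_VALUE = -1e3  # padding marker for absent satellites (assumption)

def build_masked_model(max_sats: int = 60) -> tf.keras.Model:
    """Rows of the residual matrix belonging to absent satellites are padded
    entirely with PAD_VALUE and skipped by the LSTMs via the Masking layer
    (Keras masks a time step when all its features equal mask_value)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Masking(mask_value=PAD_VALUE,
                                input_shape=(max_sats, max_sats)),
        tf.keras.layers.LSTM(893, return_sequences=True),
        tf.keras.layers.LSTM(893),
        tf.keras.layers.Dense(max_sats, activation="relu"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```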
Similar to the tests performed on synthetic data, Figs. 7 and 8 show quite significant improvements in terms of both horizontal and vertical errors in comparison with the state-of-the-art solution from [16] (see Subsec. II-C). Typically, our approach exhibits a performance gain of \(0.61\) m (resp. \(1.38\) m) in terms of horizontal error at \(68\)% (resp. \(95\)%) of the CDF. As for the vertical error, we observe an improvement of \(0.66\) m (resp. \(1.43\) m) at \(68\)% (resp. \(95\)%) of the CDF. This consistent performance improvement across all data points and problem dimensions again illustrates the robustness of our proposal in improving localization accuracy under various operating conditions.
As shown in Fig. 9, which depicts a segment of one navigation session in a particularly penalizing environment (urban canyon), our approach could provide reliable single-epoch positioning results all along the tested trajectory, whereas the state-of-the-art approach fails in several important portions. More generally, over all the tested sessions, the state-of-the-art approach failed to provide any solution in about 8% of the tested epochs (on average), while our approach could systematically provide a reliable positioning solution. Our proposal thus seems particularly suited to severe operating conditions and/or challenging environments.
Fig. 8: Empirical CDF of vertical error for various measurements weighting/selection strategies (incl. [16]), based on real field data.
Fig. 6: Variation of satellite availability while navigating in different conditions (open sky vs urban area).
Fig. 7: Empirical CDF of horizontal error for various measurements weighting/selection strategies (incl. [16]), based on real field data.
Fig. 9: Example of positioning traces obtained for one of our field navigation sessions (urban canyon), for the proposed approach (green dots), the approach in [16] from recent state-of-the-art (red dots), and the ground-truth reference system (blue solid line).
## V Conclusions
In this paper, we have introduced a new pre-processing technique for single-epoch standalone GNSS positioning based on deep machine learning, which aims at optimally weighting the pseudo-range contributions from the available satellites. In particular, we rely on an LSTM neural network architecture, which is fed by a customized matrix of conditional pseudo-range residuals. Performance assessment on both synthetic data and real data resulting from multiple navigation sessions shows the high potential and relevance of our approach in challenging operating contexts, where conventional parametric satellite selection techniques would fail. Accordingly, this solution is suited to real-time applications requiring good continuity of the navigation service, as well as to offline applications necessitating high-accuracy trace retrieval.
Future works could consider utilizing other satellite features/metrics (e.g., \(C/N_{0}\), elevation) as complementary information channels (i.e., besides the matrix of residuals currently in use) while feeding the learning process.
|
2301.09834 | Implementation of the Critical Wave Groups Method with Computational
Fluid Dynamics and Neural Networks | Accurate and efficient prediction of extreme ship responses continues to be a
challenging problem in ship hydrodynamics. Probabilistic frameworks in
conjunction with computationally efficient numerical hydrodynamic tools have
been developed that allow researchers and designers to better understand
extremes. However, the ability of these hydrodynamic tools to represent the
physics quantitatively during extreme events is limited. Previous research
successfully implemented the critical wave groups (CWG) probabilistic method
with computational fluid dynamics (CFD). Although the CWG method allows for
less simulation time than a Monte Carlo approach, the large quantity of
simulations required is cost prohibitive. The objective of the present paper is
to reduce the computational cost of implementing CWG with CFD, through the
construction of long short-term memory (LSTM) neural networks. After training
the models with a limited quantity of simulations, the models can provide a
larger quantity of predictions to calculate the probability. The new framework
is demonstrated with a 2-D midship section of the Office of Naval Research
Tumblehome (ONRT) hull in Sea State 7 and beam seas at zero speed. The new
framework is able to produce predictions that are representative of a purely
CFD-driven CWG framework, with two orders of magnitude of computational cost
savings. | Kevin M. Silva, Kevin J. Maki | 2023-01-24T06:14:58Z | http://arxiv.org/abs/2301.09834v1 | Implementation of the Critical Wave Groups Method with Computational Fluid Dynamics and Neural Networks
###### Abstract
Accurate and efficient prediction of extreme ship responses continues to be a challenging problem in ship hydrodynamics. Probabilistic frameworks in conjunction with computationally efficient numerical hydrodynamic tools have been developed that allow researchers and designers to better understand extremes. However, the ability of these hydrodynamic tools to represent the physics quantitatively during extreme events is limited. Previous research successfully implemented the critical wave groups (CWG) probabilistic method with computational fluid dynamics (CFD). Although the CWG method allows for less simulation time than a Monte Carlo approach, the large quantity of simulations required is cost prohibitive. The objective of the present paper is to reduce the computational cost of implementing CWG with CFD, through the construction of long short-term memory (LSTM) neural networks. After training the models with a limited quantity of simulations, the models can provide a larger quantity of predictions to calculate the probability. The new framework is demonstrated with a 2-D midship section of the Office of Naval Research Tumblehome (ONRT) hull in Sea State 7 and beam seas at zero speed. The new framework is able to produce predictions that are representative of a purely CFD-driven CWG framework, with two orders of magnitude of computational cost savings.
Computational Fluid Dynamics, Neural Networks, Extreme Events, Wave Groups, Machine Learning, Seakeeping, Ship Hydrodynamics
## Introduction
Ensuring the safety of a vessel in extreme ocean conditions is a crucial consideration for designers and operators. Designers optimize the design for normal operating conditions while ensuring that it will withstand the most extreme conditions. Due to the stochastic nature of the waves and the rarity of extreme events, identifying wave sequences that lead to extremes with a Monte-Carlo approach is expensive. Different probabilistic frameworks have been developed both to identify extremes and calculate the probability of their occurrence. These probabilistic methods include extrapolation-type approaches such as Peaks-Over-Threshold (Campbell and Belenky, 2010) and the Envelope Peaks-Over-Threshold (EPOT) methods (Belenky and Campbell, 2011; Campbell and Belenky, 2010), as well as perturbation-type approaches like the split-time method Belenky (1993); Belenky et al. (2010, 2011).
Another category of extreme event methodologies are wave group methods that allow for actual observations of extreme events. These wave group methods include the Design Loads Generator (DLG) from Alford (2008); Alford et al. (2011); Kim (2012), where response amplitude operators (RAO) estimate an extreme value distribution and wave trains are designed to satisfy the estimated distribution. An additional wave group approach is the sequential sampling
methodologies from Mohamad and Sapsis (2018); Gong et al. (2020) where wave groups are parameterized by their overall length and amplitude. Therefore, knowing the probability of each wave group and predicting the corresponding maximum response enables the development of a probability density function (PDF).
Another wave group method is the critical wave groups method (CWG), first developed with regular waves in Themelis and Spyrou (2007) and then extended to irregular waves in Anastopoulos et al. (2016); Anastopoulos and Spyrou (2016, 2017, 2019). The underlying idea of the CWG method is that the probability of a response exceeding a threshold is equal to the probability of all the pairs of wave groups and ship motion states at the moment of encounter that result in a threshold exceedance. The _critical_ wave groups are those that lead to a near-exceedance of the threshold, and therefore any wave group of similar form and ship encounter conditions with larger wave heights also results in an exceedance. Starting from the largest wave in the group with a given height and period, the CWG method utilizes a Markov chain to construct a deterministic wave group based on the most likely successive wave. The majority of the research with the CWG method has considered a single degree-of-freedom (DoF) ordinary differential equation (ODE) model for roll. With an ODE, the encounter conditions can be treated as initial conditions, and the simulation of the response due to excitation from the wave groups can be instantiated impulsively. However, if the critical wave groups method is to be implemented with higher-fidelity hydrodynamic tools or model tests, both the wave groups and the encounter conditions must be physically realizable. A wave group cannot impulsively appear and meet the ship with a given initial condition in a wave basin or a high-fidelity time-domain numerical simulation. Thus, the description of the fluid and body state and its history must be prescribed, which is impractical for computational fluid dynamics (CFD) and impossible in a model testing environment.
Silva and Maki (2021) addresses the issues with explicitly prescribing encounter conditions and instantaneously starting simulations of the deterministic wave groups by introducing the concept of natural initial conditions. First, the ship response in random seas is simulated with CFD. Then, the deterministic wave groups constructed from Markov chain predictions in the CWG method are embedded into the previously simulated random wave trains, such that the body state of interest occurs at the moment of encountering the deterministic wave group. Embedding the wave group results in a composite wave train that has both an encounter condition and wave group that corresponds to the probability of exceedance calculation developed for CWG. This methodology of forming composite wave trains is applicable to both low and high-fidelity hydrodynamic tools as well as model testing, and allows for repeatable realizations of extreme events. Although the CWG method was successfully implemented with CFD, significant computational cost remains due to the quantity of simulations required to identify critical wave groups for a variety of different encounter conditions and wave group parameters. Therefore, a methodology is required to reduce the total quantity of CFD simulations.
Previous research by Mohamad and Sapsis (2018) and Gong et al. (2020) utilized the concept of sequential sampling and Gaussian Process Regression (GPR) to address the issue of computational expense when predicting statistics of extreme events for marine dynamical systems. By developing a GPR surrogate model of the ship response with fewer simulations and improving the model through an optimization that targets the extremes, they achieved a converged model that exhibits the same statistical behavior as the underlying dynamics with few simulations. However, the GPR models in Mohamad and Sapsis (2018) and Gong et al. (2020) create a mapping between a wave group parameterization and the maximum response due to that wave group, and do not retain any of the temporal information; thus, understanding the mechanisms leading to extremes is difficult. The present paper aims to retain the temporal response during extreme events by incorporating the methodology developed in Xu et al. (2021); Silva and Maki (2022), where long short-term memory (LSTM) neural networks are trained to learn the time-accurate ship response due to instantaneous wave elevation. A large component of the implementation of CWG with CFD is the construction of the composite wave trains. The utilization of LSTM allows for the entire ship response to be represented in the surrogate model, rather than simply characterizing the statistics that summarize the particular response (_e.g._, maximum). However, the previous studies that utilize an LSTM neural network for ship motion prediction only consider a random nominal wave field (Xu et al., 2021; Silva and Maki, 2022; del Aguila Ferrandis et al., 2021; D'Agostino et al., 2022). The present paper focuses on developing a model that can predict the response time-history due to an excitation from the composite wave trains with embedded deterministic wave groups within random seas. The end result is a trained LSTM neural network that can identify all the critical wave groups rapidly for various response thresholds to calculate the probabilities of exceedance for a particular response. In contrast to previous work, the LSTM neural network model developed in the present paper is trained for extreme motions that exhibit strong nonlinearity.
The objective of the current research is to build off the implementation of the CWG with CFD in Silva and Maki (2021) and the LSTM framework from Xu et al. (2021) and Silva and Maki (2022) to develop a new framework that utilizes an LSTM neural network model to reduce the computational burden and provide predictions of the critical wave groups. The improved framework will result in predictions of the probability of exceedance for various response thresholds and a trained neural network model, capable of identifying more critical wave groups to observe with CFD and providing a higher resolution in the probability of exceedance calculations. Additionally, the present paper explores and compares both a general and an ensemble modeling approach. The general approach utilizes a single neural network trained on all the composite wave trains over the entire parameter range of interest, while the ensemble approach builds several models, each responsible for wave groups within a subset of the total parameter space of interest.
The remainder of the current paper is organized as follows. A brief summary of the CWG method and the previously developed framework from Silva and Maki (2021) is presented, followed by an overview of the considered neural network architecture and methodology. Then, the proposed improved framework with a neural network driven surrogate is detailed along with the two modeling approaches. Finally, the new framework is demonstrated with a case study of a midship section of the Office of Naval Research Tumblehome (ONRT) hull form experiencing extreme roll. The two neural network modeling approaches are compared and the effect of training data quantity on accuracy is explored.
## Critical Wave Groups Method
The present paper derives directly from the CWG method developed in Themelis and Spyrou (2007); Anastopoulos et al. (2016); Anastopoulos and Spyrou (2016, 2017, 2019). The main idea of the CWG method is to identify wave groups for a selected set of ship states at the moment of encountering the wave group (encounter conditions) that lead to a near-exceedance of a specified response threshold. In previous research with ODE models of roll, these encounter conditions are typically referred to as initial conditions. Wave groups in the CWG method are constructed systematically with Markov chains and the statistical relationship between successive wave heights and periods. The memoryless property of Markov chains allows for predictions of the most likely successive waves, given the height and period of the current wave. Therefore, wave groups containing \(j\) waves can be constructed solely by prescribing the height and period of the largest wave of the group (\(H_{c}\), \(T_{c}\)). Fig. 1 shows a given wave group with the heights and periods predicted through the Markov chain. Additional constraints such as the location of the zero crossing and the size of the crest relative to the wave height are assumed in accordance with Anastopoulos and Spyrou (2019) in order to produce a continuous representation of a wave group with a Fourier basis through trigonometric interpolation (Nathan, 1975).
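To make the construction concrete, the sketch below grows a \(j\)-wave group outward from its largest wave. Here `next_height` and `next_period` are assumed callables standing in for the fitted Markov-chain successor statistics, and the symmetric placement of waves about the peak is an assumption of this illustration; the trigonometric interpolation to a continuous profile is omitted.

```python
def build_wave_group(H_c, T_c, j, next_height, next_period):
    """Grow a j-wave group outward from its largest wave (H_c, T_c).

    next_height / next_period: assumed callables returning the most likely
    successor wave height/period given the current one (placeholders for
    the Markov-chain statistics fitted from the seaway).
    """
    # Chain of most-likely successors, decaying away from the peak wave.
    succ_h, succ_t = [H_c], [T_c]
    for _ in range(j - 1):
        succ_h.append(next_height(succ_h[-1]))
        succ_t.append(next_period(succ_t[-1]))
    # Place successors alternately left/right of the peak so the largest
    # wave sits near the middle of the group.
    heights = succ_h[1::2][::-1] + [H_c] + succ_h[2::2]
    periods = succ_t[1::2][::-1] + [T_c] + succ_t[2::2]
    return heights, periods
```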
Without considering encounter conditions and categorizing the deterministic wave groups by \(H_{c}\), \(T_{c}\), and \(j\), critical wave groups for a given \(T_{c}\) and \(j\) can be found by varying only the value of \(H_{c}\). An infinite number of values of \(T_{c}\) are possible. Therefore, all wave periods are discretized into \(m\) intervals of length \(\Delta T\), where \(T_{c}\) is representative of the wave periods in that interval. A \(\Delta T\) of 1 is utilized for the present paper in accordance with the findings of Anastopoulos and Spyrou (2019). When considering the effect of encounter conditions on identifying critical wave groups, the possible values must also be discretized into \(k\) components that are representative of the intervals to which they correspond. A single critical wave group can be identified for wave groups where the period of the largest wave is \(T_{c}\), the run length is \(j\), and the encounter condition is \(ec_{k}\), by varying the height of the largest wave in the group. These critical wave groups can be combined into the expression in Eqn. (1).
Figure 1: Markov chain construction of wave groups and additional geometric constraints.
In Eqn. (1), the probability of the response \(\phi\) exceeding \(\phi_{\text{crit}}\) is expressed as the combination of the probability of observing groups larger than the critical wave groups \(wg_{m,j}^{(k)}\) and the probability of the encounter conditions \(ec_{k}\) (Anastopoulos and Spyrou, 2019).
\[p\left[\phi>\phi_{\text{crit}}\right]=\sum_{k}\,\sum_{m}\,\left(1-\prod_{j} \left(1-p\left[wg_{m,j}^{(k)}\right]\right)\right)\times p\left[ec_{k}\right] \tag{1}\]
The probability of encountering groups larger than the critical wave groups \(wg_{m,j}^{(k)}\) is calculated by first identifying the values of \(H_{c}\) that correspond to the near-exceedance of a specified threshold for all the different combinations of \(T_{c}\), \(j\), and \(ec_{k}\). Then, the probability of exceeding each of those critical wave groups that are uniquely described by \(H_{c}\), \(T_{c}\), and \(j\) is calculated with Eqn. (2), where the probability of encountering a wave group larger than \(wg_{m,j}^{(k)}\) for the given period range \(m\), run length \(j\), and initial condition \(k\) is equal to the joint probability of encountering a wave group with heights larger than each wave height in the critical group, \(\mathbf{h}_{cr}^{(k)}\), and the periods of each wave in the group being within the wave period range \(T_{cr,m}\). The probability of the encounter conditions \(p\left[ec_{k}\right]\) can be found by sampling a probability distribution of the quantities of interest through simulations in random irregular waves. Detailed descriptions of Eqns. (1) and (2) can be found in Anastopoulos and Spyrou (2019).
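The display for Eqn. (2) did not survive in this copy; based on the verbal description above, it takes approximately the following form (a reconstruction, not a verbatim transcription of Anastopoulos and Spyrou (2019)):

\[p\left[wg_{m,j}^{(k)}\right]=p\left[\,\bigcap_{i=1}^{j}\left\{H_{i}>h_{cr,i}^{(k)}\right\}\,\cap\,\bigcap_{i=1}^{j}\left\{T_{i}\in T_{cr,m}\right\}\right] \tag{2}\]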
### CWG-CFD Framework
The implementation of the CWG method with CFD (CWG-CFD) was first introduced in Silva and Maki (2021) and an overview of the method is shown in Fig. 2. The framework begins with a selection of a seaway of interest, and the spectrum is sampled to produce random irregular wave time histories to both simulate in CFD and develop the statistical relationships between successive wave heights and periods. The successive wave statistics are then utilized with Markov chains to construct the deterministic wave groups necessary for the CWG method. The CFD simulations in random waves are used to develop the probability distribution of encounter conditions, as well as identify sequences of waves that lead to motion states of interest for prescribing different ship motion states at the moment of wave group encounter, referred to as natural initial conditions. The deterministic wave groups and natural initial conditions are combined into composite wave trains that are then simulated with CFD. The resulting maximum responses from the simulated composite wave trains are then utilized to identify the critical wave groups for various response thresholds, which are then leveraged in the probability of exceedance calculation. To produce physically realizable wave groups with specified ship motion states at the moment of encounter, Silva and Maki (2021) introduced the natural initial condition concept. Simulations of a ship in random irregular waves are used to identify motion states of interest and the series of waves they originated from. Then, deterministic wave groups constructed from the CWG method are embedded into the wave train, such that the motion states of interest occur at the start of the wave group. Embedding the wave group in this manner results in a single and repeatable composite wave train that can be simulated, preserving the integrity of the CWG methodology with a deterministic wave group and prescribed encounter state. The natural initial condition concept can also be extended to other hydrodynamic simulation tools and model tests, as well as to other wave group probabilistic frameworks in which wave groups are constructed and control over the encounter state at the moment of meeting the wave group is desired.
Fig. 3 shows the blending process for embedding the wave group, \(\eta_{\text{cwg}}\left(\mathbf{x},t\right)\), into an irregular wave train, \(\eta_{\text{ic}}\left(\mathbf{x},t\right)\), to create a single composite wave train, \(\eta_{\text{c}}\left(\mathbf{x},t\right)\). The composite wave train is described by:
\[\eta_{\text{c}}=(1-\beta_{2})\left[(1-\beta_{1})\eta_{\text{ic}}+\beta_{1}\eta_{\text{cwg}}\right]+\beta_{2}\eta_{\text{ic}} \tag{3}\]
where each blending function is defined as:
\[\beta=\frac{1}{2}\left(1+\tanh\left(\frac{t-t_{b}}{t_{o}}\right)\right) \tag{4}\]
The functions \(\beta_{1}\) and \(\beta_{2}\) correspond to the blending at the beginning and end of the wave group, respectively. The two parameters in Eqn. (4), \(t_{b}\) and \(t_{o}\), control the time shift and scale of the overlap between \(\eta_{\text{cwg}}\left(\mathbf{x},t\right)\) and \(\eta_{\text{ic}}\left(\mathbf{x},t\right)\). The time shift \(t_{b}\) is selected to be \(T_{p}/10\) from the start or end of the wave group to enforce that 95% of the composite wave train is the irregular wave train at \(t=t_{b}\). The time scale \(t_{o}\) is selected with Eqn. (5), where the factor of 0.9 corresponds to approximately 95% of the first signal at the beginning of the blending interval and 95% of the second signal at the end of the blending interval, and \(T_{p}\) is the peak modal period of the seaway. A composite wave train shares the same \(t_{o}\) for \(\beta_{1}\) and \(\beta_{2}\), but the value of \(t_{b}\) depends on the beginning and end of the wave group.
Figure 3: Formation of a composite wave by embedding a deterministic wave group into an irregular wave train.
Figure 2: Flow chart of CWG-CFD framework from Silva and Maki (2021).
\[t_{o}\;=\;\frac{T_{p}}{10\cdot\tanh^{-1}(0.9)} \tag{5}\]
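A minimal sketch of the blending in Eqns. (3)-(5) is given below. The wave trains are arbitrary arrays sampled on a common time grid, and the blending time shifts `t_b1` and `t_b2` are passed in directly (per the text they sit \(T_{p}/10\) from the group boundaries; the sign convention of that offset is left to the caller):

```python
import numpy as np

def blend(t, t_b, t_o):
    """Blending function of Eqn. (4)."""
    return 0.5 * (1.0 + np.tanh((t - t_b) / t_o))

def composite_wave_train(t, eta_ic, eta_cwg, t_b1, t_b2, T_p):
    """Composite wave train of Eqn. (3) on a common time grid.

    eta_ic  : irregular wave train carrying the natural initial condition.
    eta_cwg : deterministic wave group elevation.
    t_b1, t_b2 : blending time shifts near the group start/end (assumed inputs).
    """
    t_o = T_p / (10.0 * np.arctanh(0.9))  # Eqn. (5)
    beta1 = blend(t, t_b1, t_o)           # ramps in the wave group
    beta2 = blend(t, t_b2, t_o)           # ramps back to the irregular train
    return (1.0 - beta2) * ((1.0 - beta1) * eta_ic + beta1 * eta_cwg) + beta2 * eta_ic
```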
Fig. 4 demonstrates how a critical wave group is identified for a given response \(\phi\) exceeding a threshold \(\phi_{\text{crit}}\). Each curve in Fig. 4 represents the wave elevation, \(\eta\), and corresponding response, \(\phi\), time histories for a given set of composite waves with identical encounter conditions and wave groups with the same period of the largest wave \(T_{c}\) and run length \(j\). Varying only the height of the largest wave in the group, a critical group can be found that results in a near-exceedance of \(\phi_{\text{crit}}\). At each desired threshold, encounter condition, wave period range, and run length, the procedure shown in Fig. 4 can be performed for a series of values for the height of the largest wave in the group to identify all of the critical wave groups. The corresponding probabilities of each of those critical wave groups and encounter conditions are combined through Eqns. (1) and (2) to calculate the probability of exceedance at a given threshold.
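Once the critical-group and encounter-condition probabilities have been tabulated, the combination in Eqn. (1) reduces to a few lines of NumPy; the array layout below is an assumption of this example:

```python
import numpy as np

def prob_exceedance(p_wg, p_ec):
    """Evaluate Eqn. (1) from tabulated probabilities.

    p_wg : array (K, M, J); p_wg[k, m, j] is the probability of meeting a
           wave group larger than the critical group wg_{m,j}^{(k)}.
    p_ec : array (K,); probabilities of the encounter conditions ec_k.
    """
    # 1 - prod_j (1 - p[wg]): at least one run length exceeds, per (k, m).
    p_any_j = 1.0 - np.prod(1.0 - p_wg, axis=2)
    # Sum over period ranges m and weight by the encounter-condition probability.
    return float(np.sum(p_any_j.sum(axis=1) * p_ec))
```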
### Neural Network Model
The present paper relies heavily on the methodology developed in Xu et al. (2021) and Silva and Maki (2022) for constructing a system identification model that represents the dynamical response of a vessel in waves with LSTM neural networks. Ship motions in waves are an example of a causal dynamic system. Thus, their output not only depends on the current force excitation (e.g., waves), but also on the previous excitation. The output of a discrete dynamical system \(y_{t}\) can be described by:
\[y_{t}=f(x_{t},x_{t-1},x_{t-2},\cdots) \tag{6}\]
where \(f\) is a mapping function and \(x_{t}\) corresponds to the input at time index \(t\). Eqn. (6) demonstrates that the output state \(y_{t}\) not only depends on the current input \(x_{t}\), but also on previous values (\(x_{t-1},x_{t-2},...\)). The overall goal of training the model is to develop the best nonlinear mapping \(f\) that describes the underlying dynamics. In the current methodology, the input \(x_{t}\) corresponds to a description of the wave time-history at multiple wave probes in the CFD domain, the output \(y_{t}\) is the response of the vessel, and the mapping is found through the construction and training of an LSTM neural network model.
Fig. 5 displays an example neural network architecture with five LSTM layers, followed by a dense layer. Inputs and outputs are denoted as \(x_{t}\) and \(y_{t}\), respectively, where \(t\) is the time step index that ranges from 1 to \(T\). \(C_{t}^{n}\) and \(h_{t}^{n}\) correspond to the state and output of LSTM cell \(n\) at time index \(t\). The stacking of LSTM layers allows for the output of the previous layers to be used as the input for the next layer. Generally, adding layers to the neural network model architecture allows for a greater level of abstraction within the trained model, which enables generalized predictions for input scenarios that are not considered during the model training. The dense layer in Fig. 5 employs a linear activation function, receives the last of the LSTM layers as input, and outputs the final result of the neural network model.
Figure 4: Identification of a critical wave group for a given set of wave groups with similar shapes.
The model architecture in this paper is implemented with the toolbox Keras (Chollet et al., 2015) with Tensorflow (Abadi et al., 2015) as its backend.
The first step of building the model is to identify the inputs and corresponding outputs and split the data into training and testing subsets. The model is trained with the training dataset, while the test set is independent of the training and is utilized to validate the model. The inputs and outputs are then standardized such that the mean of each is zero and the standard deviation is one. The scaling of the features avoids instances of one quantity dominating the training process. Next, the model enters an optimization process to minimize the difference between the model predictions and the training data, which is quantified with a loss function, a metric of how close the model predictions are to the data. The loss function utilized in the present paper is the mean-squared error averaged over all time steps shown in Eqn. (7), where \(\hat{y}\) is the prediction of the output sequence and \(y\) is the true value.
\[L(\hat{y},y)=\frac{1}{T}\sum_{t=1}^{T}(\hat{y}_{t}-y_{t})^{2} \tag{7}\]
The training of the model at every iteration starts with forward propagation, where the model computes from the input layer towards the output layer with the current model parameters. Then, the loss is computed with outputs from the current model. Afterwards, backpropagation computes the loss derivative with respect to each of the parameters within the model from the output layer to the input layer. Based on the loss derivatives, the model parameters are updated to minimize the loss function and the process is repeated. The case studies presented in this paper train the neural networks with an Adam optimizer (Kingma and Ba, 2014).
Figure 5: Neural network architecture from Xu et al. (2021).
In the current paper, the input to the neural network is the wave elevation time history at 27 wave probes in the CFD domain in accordance with Silva and Maki (2022), and the output is the ship heave and roll temporal response. The developed models all utilize two LSTM layers followed by a dense layer, with each LSTM layer containing 50 cells. This model architecture follows the ship motion example considered in Xu et al. (2021). Overall, the LSTM approach can produce a time-accurate representation of the ship response, even for composite wave trains that are not included in the training dataset of the neural network model. Generating the temporal response of the motions for a given composite wave train provides insight into how extremes occur, as opposed to the approaches taken in Mohamad and Sapsis (2018) and Gong et al. (2020), which only consider the maximum value of the response in their surrogate models. The ability to produce the response time histories with the LSTM neural network becomes much more useful when considering complex 6-DoF motions.
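A minimal Keras sketch of this architecture is shown below, assuming the layer sizes, dropout rate, and learning rate reported in Table 2; the variable names and the commented training call are illustrative rather than the authors' actual code:

```python
from tensorflow import keras
from tensorflow.keras import layers

n_probes, n_dof, timesteps = 27, 2, 200   # wave-probe inputs, heave/roll outputs

model = keras.Sequential([
    layers.LSTM(50, return_sequences=True,
                input_shape=(timesteps, n_probes)),
    layers.Dropout(0.1),                  # retained at inference for MC Dropout
    layers.LSTM(50, return_sequences=True),
    layers.Dropout(0.1),
    layers.Dense(n_dof),                  # linear dense output layer
])

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse")                 # Eqn. (7), averaged over time steps
# model.fit(x_train, y_train, epochs=2000) on standardized inputs/outputs
```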
The current paper also implements the Monte Carlo Dropout approach developed by Gal and Ghahramani (2016a,b) to quantify the uncertainty in the LSTM neural network model predictions. Dropout is typically employed as a regularization technique during training to avoid overfitting a model by randomly excluding a portion of the neural network. The Monte Carlo Dropout approach applies the dropout to the prediction as well and provides an ensemble of stochastic predictions, which can be converted into a mean prediction with an uncertainty estimate. The present methodology implements the Monte Carlo Dropout method by adding a dropout layer after each LSTM layer in Fig. 5.
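In code, Monte Carlo Dropout amounts to keeping dropout active at prediction time and aggregating repeated stochastic forward passes; the sketch below uses the standard Keras `training=True` mechanism, with the sample count an arbitrary choice for illustration:

```python
import numpy as np

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo Dropout: keep dropout active at prediction time.

    x : batch of composite wave trains, shape (batch, timesteps, n_probes).
    Returns the mean prediction and a two-standard-deviation band.
    """
    samples = np.stack([model(x, training=True).numpy()
                        for _ in range(n_samples)])
    return samples.mean(axis=0), 2.0 * samples.std(axis=0)
```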
## CWG-CFD-LSTM Framework
The CWG-CFD framework developed in Silva and Maki (2021) is capable of producing predictions of extreme events, and the present paper aims to recover the same quantitative calculations, but at a fraction of the computational cost with the addition of LSTM neural networks. The CWG-CFD-LSTM approach outlined in Fig. 6 is identical to the CWG-CFD framework from Silva and Maki (2021) up to the point of constructing the composite wave trains with the natural initial conditions and embedded deterministic wave groups. The CWG-CFD-LSTM framework differs beginning with the random selection of a specified quantity of composite wave trains. Then, the ship response due to each of those composite wave trains is simulated with CFD. An LSTM neural network model (or multiple models) is then trained with the CFD simulations of the ship response due to the composite wave trains. After training, the neural network models are employed to simulate all the remaining composite wave trains and identify the critical wave groups. The critical wave groups, encounter conditions, and their respective probabilities are then considered with Eqn. (1) to calculate the probability of exceedance.
The current paper considers both a general and an ensemble modeling approach. The general approach considers a single neural network model that is trained with randomly selected composite wave trains. The ensemble approach utilizes several models, each responsible for composite wave trains with a specified period of the largest wave \(T_{c}\) and run length \(j\). Both modeling approaches are explored because each has potential advantages: for a given total number of training runs, the general approach allows for more training runs per model than the ensemble approach, but it must predict the dynamics over a larger parameter range. The ensemble approach only considers wave groups within a smaller subset of the total parameter range; therefore, each ensemble model only needs to differentiate between the height of the largest wave in the group and the various encounter conditions, which can provide added accuracy and faster convergence with respect to training data quantity.
### Case Study
The proposed CWG-CFD-LSTM framework is demonstrated with the same case study considered in the CWG-CFD framework developed in Silva and Maki (2021) of a two-dimensional (2-D) midship section of the ONRT geometry (Bishop et al., 2005) shown in Fig. 7, with hull and fluid properties shown in Table 1. The case study uses the open-source toolkit OpenFOAM to simulate the ship motion response due to nonlinearly generated seaways with customized CFD solvers and libraries developed by the Computational Ship Hydrodynamics Laboratory (CSHL) at The University of Michigan (Filip et al., 2017; Piro and Maki, 2013). The response of a 2-D ONRT midship section in waves demonstrates the ability of the LSTM models to represent the underlying nonlinear response simulated with CFD and showcases how the presented approach can produce similar extreme statistics with a reduction in the computational cost, in comparison to a purely CFD-driven CWG framework. The case study uses a JONSWAP spectrum (Hasselmann et al., 1973) with a peak enhancement factor \(\gamma=3.3\), a significant wave height \(H_{s}=7.5\) m, and a peak modal period \(T_{p}=15\) s, corresponding to a Sea State 7 (NATO, 1983). The 2-D midship section is only permitted to heave and roll, and is constrained in the other DoF, with waves approaching from the beam. The considered case study is set up to predict the extreme roll of the midship section and uses roll and roll velocity as encounter conditions.
Figure 6: Flow chart of proposed CWG-CFD-LSTM framework.
The training matrix, neural network architecture, and hyper-parameters for the case study are presented in Table 2. The CWG-CFD-LSTM framework is evaluated with training datasets of 50, 100, 200, or 400 training runs for both the ensemble and general neural network modeling approaches. The training dataset is identical between the general and ensemble modeling approaches, and the smaller training datasets are subsets of the larger ones. For example, the models with 100 total training runs utilize the same 50 runs as the models trained with only 50 training runs. For each training dataset size, the runs are segregated equally across the 10 \(T_{c}\) and \(j\) pairs from Silva and Maki (2021). For each \(T_{c}\) and \(j\) pair, training runs are selected randomly in terms of \(H_{c}\) and the encounter conditions. For the ensemble approach, a separate model is constructed for each of the \(T_{c}\) and \(j\) pairs. The general approach utilizes the same training data as the ensemble approach, but builds only a single model. For example, the ensemble approach for 400 total training runs contains 40 runs for each of the 10 \(T_{c}\) and \(j\) pairs, while the general approach trains on the same 400 runs. The same is true for the other training dataset sizes. This breakdown of training data ensures that there is not any bias in the training, and both modeling approaches have the same information available. Each model utilizes the same architecture and training methodology and is evaluated against 25,100 validation runs that correspond to all of the CFD simulations required to calculate the probability of exceedance in Silva and Maki (2021), for both the \(L_{2}\) (formulated as the root mean squared error) and \(L_{\infty}\) errors described in Eqns. (8) and (9), respectively.
\[L_{2}(y,\hat{y})=\sqrt{L(y,\hat{y})}=\sqrt{\frac{1}{T}\sum_{i=1}^{T}(y_{i}-\hat {y}_{i})^{2}} \tag{8}\]
\[L_{\infty}(y,\hat{y})=\max_{i=1,\cdots,T}|y_{i}-\hat{y}_{i}| \tag{9}\]
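Both error metrics are one-liners over the discrete time series; a minimal NumPy sketch:

```python
import numpy as np

def l2_error(y, y_hat):
    """Root-mean-squared error over the time series, Eqn. (8)."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def linf_error(y, y_hat):
    """Largest absolute pointwise error, Eqn. (9)."""
    return np.max(np.abs(y - y_hat))
```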
Figs. 8 and 9 compare the \(L_{2}\) and \(L_{\infty}\) errors, respectively, for heave and roll utilizing both the ensemble and general approaches with various quantities of training data. The error calculations in Eqns. (8) and (9) are performed for each of the validation runs. The triangle and rectangle markers in Figs. 8 and 9 correspond to the median error, and the error bars denote the 25\({}^{\text{th}}\) and 75\({}^{\text{th}}\) percentiles. For both the general and ensemble approaches, the \(L_{2}\) and \(L_{\infty}\) errors for heave and roll decrease as the quantity of training data increases.
Table 1: Loading condition and fluid properties of the 2-D ONRT midship section.

| Properties | Units | Value |
| --- | --- | --- |
| Draft, \(T\) | m | 5.5 |
| Beam, \(B\) | m | 18.8 |
| Roll Gyradius | m | 7.118 |
| Vertical Center of Gravity, \(KG\) (ABL) | m | 7.881 |
| Transverse Metacentric Height, \(GM_{T}\) | m | 1.5 |
| Density of Water, \(\rho_{w}\) | kg/m\({}^{3}\) | 1000 |
| Density of Air, \(\rho_{a}\) | kg/m\({}^{3}\) | 1 |
| Kinematic Viscosity of Water, \(\nu_{w}\) | m\({}^{2}\)/s | 1e-06 |
| Kinematic Viscosity of Air, \(\nu_{a}\) | m\({}^{2}\)/s | 1.48e-05 |
Figure 7: 2-D ONRT midship section geometry and computational mesh.
Additionally, the size of the error bars for the \(L_{2}\) and \(L_{\infty}\) errors decreases as the quantity of training data increases. Overall, the general approach provides better predictions for heave, but the roll predictions are similar between the two modeling approaches.
Table 2: Training matrix, neural network architecture, and hyper-parameters for the case study.

| Properties | Value |
| --- | --- |
| Total Training Runs | 50, 100, 200, 400 |
| Training Runs per Model (Ensemble) | 5, 10, 20, 40 |
| Total Validation Runs | 25,100 |
| Time Steps per Run | 200 |
| Units per Layer | 50 |
| Layers | 2 |
| Dropout | 0.1 |
| Learning Rate | 0.001 |
| Epochs | 2,000 |
Figure 8: Comparison of \(L_{2}\) error for heave and roll.
Figs. 8 and 9 showcase the overall performance and accuracy of the neural network in terms of \(L_{2}\) and \(L_{\infty}\), but do not show the actual temporal LSTM prediction error. Fig. 10 demonstrates the validation runs that resulted in the smallest \(L_{\infty}\) error for heave and roll for the general approach model trained with 400 runs. CFD is compared in Fig. 10 to the LSTM predictions with uncertainty estimates from the Monte Carlo Dropout approach that correspond to two standard deviations. The LSTM predictions are able to match both the phasing and magnitude of the CFD predictions well. Fig. 11 shows the validation runs with the largest \(L_{\infty}\) error. For both heave and roll, portions of the LSTM predictions match the CFD well, while other parts of the time-history are not as well predicted, especially after significant response magnitudes. Overall, the uncertainty is on the order of 1-2 deg for the largest roll angles and is typically larger at the response peaks.
Figs. 8 through 11 provide overall assessments of the accuracy of the LSTM models in reproducing the temporal response of the CFD simulations. However, the CWG methodology is concerned with the extremes, and therefore the absolute maxima are of greater importance than the temporal predictions, although the prediction time-histories can provide insight into the mechanisms causing extremes. The CFD and LSTM predictions of the maximum roll due to each composite wave train are compared in Fig. 12 for the ensemble and general approaches. Each marker in Fig. 12 for a particular model corresponds to a single composite wave train and the corresponding CFD and LSTM predictions. The solid black line denotes identical values for CFD and LSTM. As with the comparisons of \(L_{\infty}\), the models trained with 400 runs follow similar trends, while there is more spread in the data for models trained with less data. Both approaches demonstrate convergence towards better correlation.
Fig. 12 shows how the absolute maximum roll compares for each individual composite wave train. The probabilities of occurrence of the composite wave trains are then used to calculate the probability of exceedance shown in Fig. 13 for both modeling
Figure 11: Comparison of time-histories with the largest \(L_{\infty}\) error for heave and roll with a model trained with 400 simulations.
Figure 12: Comparison of the absolute maximum roll for each composite wave run with CFD and LSTM models with varying amounts of training data.
approaches. When less training data is available, the general model is more accurate; however, with at least 200 training runs, both approaches are able to reproduce the CWG-CFD prediction of the probability of exceedance from Silva and Maki (2021).
Fig. 14 compares the uncertainty in the LSTM predictions with both approaches for 400 total training runs. The uncertainty for both approaches increases as the roll angle of interest increases. Although the uncertainty is low in the time-history predictions in Figs. 8 through 11, the uncertainty compounds across all the considered composite wave trains. The compounding of uncertainty and the sensitivity of the probability of exceedance calculation yield uncertainty in the probability calculation that is of the same order of magnitude as the probability itself at the largest roll angles. The large uncertainty highlights that as a response of interest becomes more extreme and more rare, the probability calculation is detrimentally sensitive to underlying error.
Figure 14: Comparison of probability of exceedance with uncertainty estimates for approaches that trained with 400 runs.
Figure 13: Probability of exceedance of roll in Sea State 7.
The CWG-CFD method in Silva and Maki (2021) reduces the computational cost compared to Monte-Carlo simulation for the same case study by roughly five orders of magnitude at a roll angle of 57.5 deg (Fig. 13). This reduction in computational cost is with respect to the estimated exposure time needed to observe a given most probable maximum roll angle with Eqn. (10) from Ochi (1998). In Eqn. (10), \(\overline{y_{n}}\) is the most probable maximum response, \(T\) is the exposure time in hours, and \(m_{0}\) and \(m_{2}\) are the zeroth and second spectral moments, respectively.
\[\overline{y_{n}} = \sqrt{m_{0}}\left[2\ln\left(\frac{60^{2}T}{2\pi}\sqrt{\frac{m_{2 }}{m_{0}}}\right)\right]^{\frac{1}{2}} \tag{10}\]
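For reference, Eqn. (10) translates directly into a short Python function, with the spectral moments and exposure time in hours as inputs:

```python
import numpy as np

def most_probable_maximum(m0, m2, T_hours):
    """Most probable maximum response over an exposure window, Eqn. (10)."""
    n_cycles = (60.0 ** 2 * T_hours / (2.0 * np.pi)) * np.sqrt(m2 / m0)
    return np.sqrt(m0) * np.sqrt(2.0 * np.log(n_cycles))
```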
The total central processing unit (CPU) and exposure time of the CWG-CFD and CWG-CFD-LSTM methods are shown in Fig. 15 and compared to the required simulation time for Monte Carlo analysis estimated with Eqn. (10). The CPU time corresponds to the total computational cost of the methodologies and is specific to the considered mesh, software, and computing system. Meanwhile, the exposure time is the actual time simulated, which is consistent across systems and would be applicable to model testing as well as a 3-D simulation. The CPU and exposure time required by the CWG implementations also include the random wave simulations considered in the identification of the natural initial conditions and the associated probability distribution of the encounter conditions. The CWG-CFD-LSTM model with 400 training runs results in a two-order-of-magnitude reduction in the total computational cost relative to the CWG-CFD method. Thus, the CWG-CFD-LSTM methodology yields a total estimated reduction of seven orders of magnitude in computational cost to produce a probability of exceedance up to a threshold of 57.5 deg. The cost of the CWG-CFD-LSTM method only approaches that of Monte Carlo when probability of exceedance predictions are needed only up to roll angles of 30 deg.
The current paper constitutes significant progress, not only in the development of a computationally efficient framework for extreme ship response quantification, but also in demonstrating a way of training neural networks to produce real-time observations of extremes. Previous research training neural networks to represent the dynamical response of vessels has focused on random waves and has not explored the application to extremes. Neural networks in general are better at interpolation than extrapolation and perform poorly in predicting extreme events when trained with random data. However, the present neural nets are trained with composite wave trains containing large and rare deterministic wave groups that lead to extremes. Therefore, the composite wave trains can, in principle, be applied to training robust neural network surrogate models that are capable of performing Monte Carlo analysis that recovers the entire PDF of each DoF, rather than simply the probability of exceedance that is calculated within the CWG method. Fig. 16 compares CFD and LSTM predictions of the heave and roll PDF in logarithmic scale with only the general modeling approach, for 100 hours of exposure time. The roll and negative portion of the heave are well represented, but the positive heave is under-predicted. Similar to the probability of exceedance comparisons, the PDF starts to converge around 200 training runs. Although the LSTM model performs well and is able to recover the underlying PDF, the presented CFD results only consider a 100 hour exposure window, which is not enough to induce significant extremes. The exposure time projections predicted with Eqn. (10) and shown in Fig. 15 predict a most probable maximum of 35 deg with a 100 hour exposure window, which qualitatively agrees with the absolute maximum roll angles observed in Fig. 16. To achieve the larger roll angles in Fig. 13, significantly more exposure time is required. Therefore, although the prediction of the PDF with the LSTM neural networks is encouraging, further research is needed to evaluate whether the CWG-CFD-LSTM training methodology is suitable for recovering the entire extreme PDF through a Monte Carlo prediction with the neural network.
Figure 15: Required CPU and exposure time for the CFD, CWG-CFD, and CWG-CFD-LSTM methods.
## Conclusion
A framework is presented that expands upon the research of Silva and Maki (2021) and Xu et al. (2021) to predict extreme response statistics with the CWG method, CFD, and LSTM neural networks. The new CWG-CFD-LSTM framework is demonstrated with a case study of a 2-D midship section of the ONRT in Sea State 7. The results of the framework are compared to the CWG-CFD method with various amounts of training data, with a general approach, where a single neural network model is trained for all composite wave trains, as well as an ensemble approach, where multiple models are trained, each responsible for composite wave trains that contain a wave group with the same \(T_{c}\) and \(j\). Both approaches are able to produce motion responses and probabilistic predictions that are representative of the CWG-CFD method with 200 total training runs, but with two orders of magnitude of computational cost savings. The CWG-CFD-LSTM framework in total results in an estimated reduction of seven orders of magnitude in computational cost compared to a Monte Carlo type approach. The work is an important step forward in developing a generalized framework that renders the CWG method accessible to both CFD and experiments with LSTM as a surrogate to represent the underlying dynamical processes.
Figure 16: Comparison of the PDF in a logarithmic scale for each DoF with models trained with the general approach.
## Acknowledgments
This work is supported by the Department of Defense (DoD) Science, Mathematics, and Research for Transformation (SMART) scholarship, the Naval Surface Warfare Center Carderock Division (NSWCCD) Extended Term Training (ETT), and the NSWCCD Naval Innovative Science and Engineering (NISE) programs. The authors would also like to acknowledge and thank the Office of Naval Research for its support of this research under contracts N00014-20-1-2096, led by the program manager Woei-Min Lin.
|
2301.02791 | Faithful and Consistent Graph Neural Network Explanations with Rationale
Alignment | Uncovering rationales behind predictions of graph neural networks (GNNs) has
received increasing attention over recent years. Instance-level GNN explanation
aims to discover critical input elements, like nodes or edges, that the target
GNN relies upon for making predictions. %These identified sub-structures can
provide interpretations of GNN's behavior. Though various algorithms are
proposed, most of them formalize this task by searching the minimal subgraph
which can preserve original predictions. However, an inductive bias is
deep-rooted in this framework: several subgraphs can result in the same or
similar outputs as the original graphs. Consequently, they have the danger of
providing spurious explanations and failing to provide consistent explanations.
Applying them to explain weakly-performed GNNs would further amplify these
issues. To address this problem, we theoretically examine the predictions of
GNNs from the causality perspective. Two typical reasons for spurious
explanations are identified: confounding effect of latent variables like
distribution shift, and causal factors distinct from the original input.
Observing that both confounding effects and diverse causal rationales are
encoded in internal representations, we propose a new explanation
framework with an auxiliary alignment loss, which is theoretically proven to be
optimizing a more faithful explanation objective intrinsically. Concretely for
this alignment loss, a set of different perspectives are explored: anchor-based
alignment, distributional alignment based on Gaussian mixture models,
mutual-information-based alignment, etc. A comprehensive study is conducted
both on the effectiveness of this new framework in terms of explanation
faithfulness/consistency and on the advantages of these variants. | Tianxiang Zhao, Dongsheng Luo, Xiang Zhang, Suhang Wang | 2023-01-07T06:33:35Z | http://arxiv.org/abs/2301.02791v2 | # Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment
###### Abstract
Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over recent years. Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms are proposed, most of them formalize this task by searching the minimal subgraph which can preserve original predictions. However, an inductive bias is deep-rooted in this framework: several subgraphs can result in the same or similar outputs as the original graphs. Consequently, they have the danger of providing spurious explanations and failing to provide consistent explanations. Applying them to explain weakly-performed GNNs would further amplify these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical reasons for spurious explanations are identified: confounding effect of latent variables like distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a new explanation framework with an auxiliary alignment loss, which is theoretically proven to be optimizing a more faithful explanation objective intrinsically. Concretely for this alignment loss, a set of different perspectives are explored: anchor-based alignment, distributional alignment based on Gaussian mixture models, mutual-information-based alignment, etc. A comprehensive study is conducted both on the effectiveness of this new framework in terms of explanation faithfulness/consistency and on the advantages of these variants. For our code, please refer to the following URL link: [https://github.com/Tianxiang/Zhao/GraphNNExplanation](https://github.com/Tianxiang/Zhao/GraphNNExplanation).
Graph Neural Network, Explainable AI, Machine Learning
## 1 Introduction
Graph-structured data is ubiquitous in the real world, such as social networks [1, 2], molecular structures [3, 4] and knowledge graphs [5]. With the growing interest in learning from graphs, graph neural networks (GNNs) are receiving more and more attention over the years. Generally, GNNs adopt message-passing mechanisms, which recursively propagate and fuse messages from neighbor nodes on the graphs. Hence, the learned node representation captures both node attributes and neighborhood information, which facilitate various downstream tasks such as node classification [6, 7, 8], graph classification [9], and link prediction [10].
Despite the success of GNNs for various domains, as with other neural networks, GNNs lack interpretability. Understanding the inner working of GNNs can bring several benefits. First, it enhances practitioners' trust in the GNN model by enriching their understanding of the model characteristics, such as whether the model is working as desired. Second, it increases the models' transparency to enable trusted applications in decision-critical fields sensitive to fairness, privacy and safety challenges, such as healthcare and drug discovery [11]. Thus, studying the explainability of GNNs is attracting increasing attention and many efforts have been taken [12, 13, 14].
Particularly, we focus on post-hoc instance-level explanations. Given a trained GNN and an input graph, this task seeks to discover the substructures that can explain the prediction behavior of the GNN model. Some solutions have been proposed in existing works [12, 15, 16]. For example, in search of important substructures that predictions rely upon, GNNExplainer learns an importance matrix on node attributes and edges via perturbation [12]. The identified minimal substructures that preserve original predictions are taken as the explanation. Extending this idea, PGExplainer trains a graph generator to utilize global information in explanation and enable faster inference in the inductive setting [13]. SubgraphX constrains explanations to be connected subgraphs and conducts Monte Carlo tree search based on Shapley values [17]. These methods can be summarized in a label-preserving framework, i.e., the candidate explanation is formed as a masked version of the original graph and is identified as the minimal discriminative substructure that preserves the predicted label.
However, due to the complexity of topology and the combinatorial number of candidate substructures, existing label-preserving methods are insufficient for a faithful and consistent explanation of GNNs. They are unstable and are prone to give spurious correlations as explanations. A failure case is shown in Figure 1, where a GNN is trained on Graph-SST5 [18] for sentiment classification. Each node represents a word and each edge denotes syntactic dependency between nodes. Each graph is labeled based on the sentiment of the sentence. In the figure, the sentence "Sweet home alabama isn't going to win any academy awards, but this date-night diversion will definitely win some hearts" is labeled _positive_. In the first run, GNNExplainer [12] identifies the explanation as "definitely win some hearts", and in the second run, it identifies "win academy awards" to be the explanation instead. Different explanations obtained
by GNNExplainer break the criterion of **consistency**, i.e., that the explanation method should be deterministic and give the same explanation for the same input across different runs [19]. Consequently, explanations provided by existing methods may fail to faithfully reflect the decision mechanism of the to-be-explained GNN.
Inspecting the inference process of target GNNs, we find that the inconsistency problem and spurious explanations can be understood from the causality perspective. Specifically, existing explanation methods may lead to spurious explanations either as a result of different causal factors or due to the confounding effect of distribution shifts (identified subgraphs may be out of distribution). These failure cases originate from a particular inductive bias that predicted labels are sufficiently indicative for extracting critical input components. This underlying assumption is rooted in optimization objectives adopted by existing works [12, 13, 17]. However, our analysis demonstrates that the label information is insufficient to filter out spurious explanations, leading to inconsistent and unfaithful explanations.
Considering the inference of GNNs, both confounding effects and distinct causal relationships can be reflected in the internal representation space. With this observation, we propose a novel objective that encourages alignment of embeddings of raw graph and identified subgraph in internal embedding space to obtain more faithful and consistent GNN explanations. Specifically, to evaluate the semantic similarity between two inputs and incorporate the alignment objective into explanation, we design and compare strategies with various design choices to measure similarity in the embedding space. These strategies enable the alignment between candidate explanations and original inputs and are flexible to be incorporated into various existing GNN explanation methods. Particularly, aside from directly using Euclidean distance, we further propose three distribution-aware strategies. The first one identifies a set of anchoring embeddings and utilizes relative distances against them. The second one assumes a Gaussian mixture model and captures the distribution using the probability of falling into each Gaussian center. The third one learns a deep neural network to estimate mutual information between two inputs, which takes a data-driven approach with little reliance upon prior domain knowledge. Further analysis shows that the proposed method is in fact optimizing a new explanation framework, which is more faithful in design. Our main contributions are:
* We point out the faithfulness and consistency issues in rationales identified by existing GNN explanation models. These issues arise due to the inductive bias in their label-preserving framework, which only uses predictions as the guiding information;
* We propose an effective and easy-to-apply countermeasure by aligning intermediate embeddings. We implement a set of variants with different alignment strategies, which are flexible to be incorporated into various GNN explanation models. We further conduct a theoretical analysis to understand and validate the proposed framework.
* Extensive experiments on real-world and synthetic datasets show that our framework benefits various GNN explanation models to achieve more faithful and consistent explanations.
## 2 Related Work
In this section, we review related works, including graph neural networks and interpretability of GNNs.
### _Graph Neural Networks_
Graph neural networks (GNNs) have been developing rapidly in recent years, with the increasing need for learning on relational data structures [1, 20, 21]. Generally, existing GNNs can be categorized into two categories, i.e., spectral-based approaches [6, 22, 23] based on graph signal processing theory, and spatial-based approaches [24, 25, 26] relying upon neighborhood aggregation. Despite their differences, most GNN variants can be summarized within the message-passing framework, which is composed of pattern extraction and interaction modeling within each layer [27]. Specifically, GNNs model messages from node representations. These messages are then propagated with various message-passing mechanisms to refine node representations, which are then utilized for downstream tasks [8, 10, 21]. Explorations have been made by disentangling the propagation process [28, 29, 30] or utilizing external prototypes [31, 32]. Research has also been conducted on the expressive power [33, 34] and potential biases introduced by different kernels [35, 36] for the design of more effective GNNs. Despite their success in network representation learning, GNNs are uninterpretable black-box models. It is challenging to understand their behaviors even if the adopted message-passing mechanism and parameters are given. Besides, unlike traditional deep neural networks where instances are identically and independently distributed, GNNs consider node features and graph topology jointly, making the interpretability problem more challenging to handle.
### _GNN Interpretation Methods_
Recently, some efforts have been taken to interpret GNN models and provide explanations for their predictions [37]. Based on the granularity, existing methods can be generally grouped into two categories: (1) instance-level explanation [12], which provides explanations on the prediction for each instance by identifying important substructures; and (2) model-level explanation [38, 39], which aims to understand global decision rules captured by the target GNN. From the methodology perspective, existing methods can be categorized as (1) self-explainable GNNs [39, 40], where
Fig. 1: Explanation results achieved by a leading baseline GNNExplainer on the same input graph from Graph-SST5. Red edges formulate explanation substructures.
the GNN can simultaneously give prediction and explanations on the prediction; and (2) post-hoc explanations [12, 13, 17], which adopt another model or strategy to provide explanations of a target GNN. As post-hoc explanations are model-agnostic, i.e., can be applied for various GNNs, in this work, we focus on post-hoc instance-level explanations [12], i.e., given a trained GNN model, identifying instance-wise critical substructures for each input to explain the prediction. A comprehensive survey can be found in [41].
A variety of strategies for post-hoc instance-level explanations have been explored in the literature, including utilizing signals from gradients based [38, 42], perturbed predictions based [12, 13, 17, 43], and decomposition based [38, 44]. Among these methods, perturbed prediction-based methods are the most popular. The basic idea is to learn a perturbation mask that filters out non-important connections and identifies dominating substructures preserving the original predictions [18]. The identified important substructure is used as an explanation for the prediction. For example, GNNExplainer [12] employs two soft mask matrices on node attributes and graph structure, respectively, which are learned end-to-end under the maximizing mutual information (MMI) framework. PGExplainer [13] extends it by incorporating a graph generator to utilize global information. It can be applied in the inductive setting and prevent the onerous task of re-learning from scratch for each to-be-explained instance. SubgraphX [17] expects explanations to be in the form of sub-graphs instead of bag-of-edges and employs Monte Carlo Tree Search (MCTS) to find connected subgraphs that preserve predictions measured by the Shapley value. To promote faithfulness in identified explanations, some works introduced terminologies from the causality analysis domain, via estimating the individual causal effect of each edge [45] or designing interventions to prevent the discovery of spurious correlations [46]. Ref. [47] connects the idea of identifying minimally-predictive parts in explanation with the principle of information bottleneck [48] and designs an end-to-end optimization framework for GNN explanation.
Despite the aforementioned progress in interpreting GNNs, most of these methods discover critical substructures merely upon the change of outputs given perturbed inputs. Due to this underlying inductive bias, existing label-preserving methods are heavily affected by spurious correlations caused by confounding factors in the environment. On the other hand, by aligning intermediate embeddings in GNNs, our method alleviates the effects of spurious correlations on interpreting GNNs, leading to faithful and consistent explanations.
## 3 Preliminary
### _Problem Definition_
We use \(\mathcal{G}=\{\mathcal{V},\mathcal{E};\mathbf{F},\mathbf{A}\}\) to denote a graph, where \(\mathcal{V}=\{v_{1},\ldots,v_{n}\}\) is a set of \(n\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges. Nodes are accompanied by an attribute matrix \(\mathbf{F}\in\mathbb{R}^{n\times d}\), and \(\mathbf{F}[i,:]\in\mathbb{R}^{1\times d}\) is the \(d\)-dimensional node attribute vector of node \(v_{i}\). \(\mathcal{E}\) is described by an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\). \(A_{ij}=1\) if there is an edge between node \(v_{i}\) and \(v_{j}\); otherwise, \(A_{ij}=0\). For _graph classification_, each graph \(\mathcal{G}_{i}\) has a label \(Y_{i}\in\mathcal{C}\), and a GNN model \(f\) is trained to map \(\mathcal{G}\) to its class, i.e., \(f:\{\mathbf{F},\mathbf{A}\}\mapsto\{1,2,\ldots,C\}\). Similarly, for _node classification_, each graph \(\mathcal{G}_{i}\) denotes a \(K\)-hop subgraph centered at node \(v_{i}\) and a GNN model \(f\) is trained to predict the label of \(v_{i}\) based on the node representation of \(v_{i}\) learned from \(\mathcal{G}_{i}\). The purpose of explanation is to find a subgraph \(\mathcal{G}^{\prime}\), marked with importance masks \(\mathbf{M}_{A}\in[0,1]^{n\times n}\) on the adjacency matrix and \(\mathbf{M}_{F}\in[0,1]^{n\times d}\) on node attributes, respectively, e.g., \(\mathcal{G}^{\prime}=\{\mathbf{A}\odot\mathbf{M}_{A};\mathbf{F}\odot\mathbf{M}_{F}\}\), where \(\odot\) denotes elementwise multiplication. These two masks highlight components of \(\mathcal{G}\) that are important for \(f\) to predict its label. With the notations, the _post-hoc instance-level_ GNN explanation task is:
_Given a trained GNN model \(f\), for an arbitrary input graph \(\mathcal{G}=\{\mathcal{V},\mathcal{E};\mathbf{F},\mathbf{A}\}\), find a subgraph \(\mathcal{G}^{\prime}\) that can explain the prediction of \(f\) on \(\mathcal{G}\). The obtained explanation \(\mathcal{G}^{\prime}\) is depicted by importance mask \(\mathbf{M}_{F}\) on node attributes and importance mask \(\mathbf{M}_{A}\) on adjacency matrix._
### _MMI-based Explanation Framework_
Many approaches have been designed for post-hoc instance-level GNN explanation. Due to the discreteness of edge existence and non-grid graph structures, most works apply a perturbation-based strategy to search for explanations. Generally, they can be summarized as Maximization of Mutual Information (MMI) between predicted label \(\hat{Y}\) and explanation \(\mathcal{G}^{\prime}\), i.e.,
\[\begin{split}\min_{\mathcal{G}^{\prime}}&-I(\hat{Y},\mathcal{G}^{\prime}),\\ \text{s.t.}&\mathcal{G}^{\prime}\sim\mathcal{P}( \mathcal{G},\mathbf{M}_{A},\mathbf{M}_{F}),\quad\mathcal{R}(\mathbf{M}_{F}, \mathbf{M}_{A})\leq c\end{split} \tag{1}\]
where \(I()\) represents mutual information and \(\mathcal{P}\) denotes the perturbations on the original input with importance masks \(\{\mathbf{M}_{F},\mathbf{M}_{A}\}\). For example, let \(\{\hat{\mathbf{A}},\hat{\mathbf{F}}\}\) represent the perturbed \(\{\mathbf{A},\mathbf{F}\}\). Then, \(\hat{\mathbf{A}}=\mathbf{A}\odot\mathbf{M}_{A}\) and \(\hat{\mathbf{F}}=\mathbf{Z}+(\mathbf{F}-\mathbf{Z})\odot\mathbf{M}_{F}\) in GNNExplainer [12], where \(\mathbf{Z}\) is sampled from the marginal distribution of node attributes \(\mathbf{F}\). \(\mathcal{R}\) denotes regularization terms on the explanation, imposing prior knowledge into the searching process, like constraints on budgets or connectivity distributions [13]. Mutual information \(I(\hat{Y},\mathcal{G}^{\prime})\) quantifies consistency between original predictions \(\hat{Y}=f(\mathcal{G})\) and the prediction of the candidate explanation \(f(\mathcal{G}^{\prime})\), which promotes the positiveness of the found explanation \(\mathcal{G}^{\prime}\). Since mutual information measures the predictive power of \(\mathcal{G}^{\prime}\) on \(Y\), this framework essentially tries to find a subgraph that can best predict the original output \(\hat{Y}\). During training, a relaxed version [12] is often adopted as:
\[\begin{split}\min_{\mathcal{G}^{\prime}}&\ H_{C}(\hat{Y},P(\hat{Y}^{\prime}\mid\mathcal{G}^{\prime})),\\ \text{s.t.}&\mathcal{G}^{\prime}\sim\mathcal{P}( \mathcal{G},\mathbf{M}_{A},\mathbf{M}_{F}),\quad\mathcal{R}(\mathbf{M}_{F}, \mathbf{M}_{A})\leq c\end{split} \tag{2}\]
where \(H_{C}\) denotes cross-entropy. With this same objective, existing methods mainly differ from each other in optimization and searching strategies.
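To make the relaxed objective of Eq. 2 concrete, the following minimal PyTorch sketch performs one optimization step of a GNNExplainer-style mask search. The sigmoid mask parameterization, the mean-based sparsity regularizer standing in for \(\mathcal{R}\), and the `model(features, adjacency)` call signature are illustrative assumptions, not the exact implementation of [12].

```python
import torch
import torch.nn.functional as F

def explain_step(model, A, X, y_hat, M_a_logits, M_f_logits, optimizer, beta=0.1):
    """One step of the relaxed MMI objective (Eq. 2): cross-entropy against the
    original prediction y_hat plus a sparsity regularizer on the soft masks."""
    M_a = torch.sigmoid(M_a_logits)                 # soft edge mask in [0, 1]
    M_f = torch.sigmoid(M_f_logits)                 # soft feature mask in [0, 1]
    logits = model(X * M_f, A * M_a)                # prediction on masked graph G'
    loss = F.cross_entropy(logits, y_hat)           # H_C(Y_hat, P(Y' | G'))
    loss = loss + beta * (M_a.mean() + M_f.mean())  # budget term standing in for R
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `y_hat` is the class index originally predicted by \(f\) on the full graph, so minimizing the loss searches for a sparse \(\mathcal{G}^{\prime}\) that preserves the original output.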
Different aspects regarding the quality of explanations can be evaluated [19]. Among them, the two most important criteria are **faithfulness** and **consistency**. Faithfulness measures the descriptive accuracy of explanations, indicating how truthful they are compared to the behaviors of the target model. Consistency considers explanation invariance: identical inputs should receive identical explanations. However, as shown in Figure 1, the existing MMI-based framework is sub-optimal in terms of these criteria. The cause of this problem is rooted in its learning objective, which uses the prediction alone as guidance in the search for explanations. Due to the complex graph structure, the prediction alone as a guide can result in spurious explanations. A detailed analysis is provided in the next section.
## 4 Analyzing Spurious Explanations
With "spurious explanations", we refer to those explanations lie outside the genuine rationale of prediction on \(\mathcal{G}\), making the usage of \(\mathcal{G}^{\prime}\) as explanations anecdotal. As examples in Figure 1, despite rapid developments in explaining GNNs, the problem w.r.t faithfulness and consistency of detected explanations remains. To get a deeper understanding of reasons behind this problem, we will examine the behavior of target GNN model from the causality perspective. Figure 2(a) shows the Structural Equation Model (SEM), where variable \(C\) denotes discriminative causal factors and variable \(S\) represents confounding environment factors. Two paths between \(\mathcal{G}\) and the predicted label \(\hat{Y}\) can be found.
* \(\mathcal{G}\to C\rightarrow\hat{Y}\): This path represents the inference of the target GNN model, i.e., critical patterns \(C\) that are informative and discriminative for the prediction \(\hat{Y}\) are extracted from the input graph, and the target model depends on them. Causal variables are determined by both the input graph and the knowledge learned by the target GNN model.
* \(\mathcal{G}\gets S\rightarrow\hat{Y}\): We denote by \(S\) the confounding factors, such as those depicting the overall distribution of graphs. \(S\) is causally related to both the appearance of input graphs and the prediction of target GNN models. A masked version of \(\mathcal{G}\) could create out-of-distribution (OOD) examples, resulting in spurious causality to prediction outputs. For example, in the chemical domain, removing edges (bonds) or nodes (atoms) may produce invalid molecular graphs that never appear during training. In the presence of distribution shifts, model predictions are less reliable.
Figure 2(a) provides us with a tool to analyze \(f\)'s behaviors. From the causal structures, we can observe that spurious explanations may arise as a result of a failure to recover the original causal rationale. \(\mathcal{G}^{\prime}\) learned from Equation 1 may preserve the prediction \(\hat{Y}\) due to the confounding effect of distribution shift or due to causal variables \(C\) different from those of the original \(\mathcal{G}\). A weakly-trained GNN \(f(\cdot)\) that is unstable or non-robust towards noise further amplifies this problem, as its prediction is unreliable.
To further understand the issue, we build the correspondence from the SEM in Figure 2(a) to the inference process of the GNN \(f\). Specifically, we first decompose \(f(\cdot)\) into a feature extractor \(f_{ext}(\cdot)\) and a classifier \(f_{cls}(\cdot)\). Then, its inference can be summarized in two steps: (1) the encoding step with \(f_{ext}(\cdot)\), which takes \(\mathcal{G}\) as input and produces its embedding in the representation space \(E_{C}\); (2) the classification step with \(f_{cls}(\cdot)\), which predicts output labels from the input's embedding. Connecting these inference steps to the SEM in Figure 2(a), we can find that:
* The causal path \(\mathcal{G}\to C\rightarrow\hat{Y}\) lies behind the inference process, with the representation space \(E_{C}\) encoding the critical variables \(C\);
* The confounding effect of distribution shift \(S\) works on the inference process by influencing the distribution of graph embeddings in \(E_{C}\). When the masked input \(\mathcal{G}^{\prime}\) is OOD, its embedding fails to reflect its discriminative features and deviates from the real distribution, hence biasing the classification step on it.
To summarize, we can observe that spurious explanations are usually obtained due to the following two reasons:
1. The obtained \(\mathcal{G}^{\prime}\) is an OOD graph. During inference of the target GNN model, the encoded representation of \(\mathcal{G}^{\prime}\) is distant from those seen in the training set, making the prediction unreliable;
2. The encoded discriminative representation does not accord with that of the original graph. Different causal factors (\(C\)) are extracted between \(\mathcal{G}^{\prime}\) and \(\mathcal{G}\), resulting in false explanations.
## 5 Methodology
Based on the discussion above, in this section, we focus on improving the faithfulness and consistency of GNN explanations and correcting the inductive bias caused by relying solely on prediction outputs. We first provide an intuitive introduction to the proposed countermeasure, which takes the internal inference process into account. We then design four concrete algorithms to align \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) in the latent space, to promote that they are perceived and processed in the same manner. Finally, a theoretical analysis is provided to justify our strategies.
### _Alleviate Spurious Explanations_
Instance-level post-hoc explanation is dedicated to finding discriminative substructures that the target model \(f\) depends upon. The traditional objective in Equation 2 can identify minimal predictive parts of the input; however, it is dangerous to directly take them as explanations. Due to the diversity in graph topology and the combinatorial nature of subgraphs, multiple distinct substructures can be identified that lead to the same prediction, as discussed in Section 4.
For an explanation substructure \(\mathcal{G}^{\prime}\) to be faithful, it should follow the same rationale as the original graph \(\mathcal{G}\) inside the internal inference of the to-be-explained model \(f\). To achieve this goal, the explanation \(\mathcal{G}^{\prime}\) should be aligned to \(\mathcal{G}\) w.r.t. the decision mechanism, reflected in Figure 2(a). However, it is non-trivial to extract and compare the critical causal variables \(C\) and confounding variables \(S\) due to the black-box nature of the target GNN model to be explained.
Fig. 2: (a) Prediction rules of \(f\), in the form of the SEM. (b) An example of anchor-based embedding alignment.
Following the causal analysis in Section 4, we propose to take an alternative approach by looking into the internal embeddings learned by \(f\). Causal variables \(C\) are encoded in the representation space extracted by \(f\), and out-of-distribution effects can also be reflected by analyzing embedding distributions. An assumption can be safely made: _if two graphs are mapped to embeddings near each other by a GNN layer, then these graphs are seen as similar by it and will be processed similarly by the following layers_. With this assumption, a proxy task can be designed by aligning the internal graph embeddings of \(\mathcal{G}^{\prime}\) and \(\mathcal{G}\). This new task can be incorporated into the framework of Equation 1 as an auxiliary optimization objective.
Let \(\mathbf{h}_{v}^{l}\) be the representation of node \(v\) at the \(l\)-th GNN layer with \(\mathbf{h}_{v}^{0}=\mathbf{F}[v,:]\). Generally, the inference process inside GNN layers can be summarized as a message-passing framework:
\[\begin{split}\mathbf{m}_{v}^{l+1}&=\sum_{u\in \mathcal{N}(v)}\text{Message}_{l}(\mathbf{h}_{v}^{l},\mathbf{h}_{u}^{l},A_{v, u}),\\ \mathbf{h}_{v}^{l+1}&=\text{Update}_{l}(\mathbf{h}_ {v}^{l},\mathbf{m}_{v}^{l+1}),\end{split} \tag{3}\]
where \(\text{Message}_{l}\) and \(\text{Update}_{l}\) are the message function and update function at \(l\)-th layer, respectively. \(\mathcal{N}(v)\) is the set of node \(v\)'s neighbors. Without loss of generality, the graph pooling layer can also be presented as:
\[\mathbf{h}_{v^{\prime}}^{l+1}=\sum_{v\in\mathcal{V}}P_{v,v^{\prime}}\cdot\mathbf{h}_{v}^{l}, \tag{4}\]
where \(P_{v,v^{\prime}}\) denotes the mapping weight from node \(v\) in layer \(l\) to node \(v^{\prime}\) in layer \(l+1\) inside GNNs for graph classification. We propose to align the embeddings \(\mathbf{h}_{v}^{l+1}\) at each layer, which contain both node and local neighborhood information.
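As a concrete illustration of Eqs. (3)-(4), the minimal sketch below implements one sum-aggregation message-passing layer and a mean graph-pooling step, and collects the per-layer embeddings \(\mathbf{h}_{v}^{l}\) that the alignment strategies operate on. The particular Message/Update parameterization and the uniform pooling weights are assumptions for illustration, not the target model's actual architecture.

```python
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """One message-passing layer following Eq. (3) with sum aggregation."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # Message_l(h_v, h_u)
        self.upd = nn.Linear(2 * dim, dim)   # Update_l(h_v, m_v)

    def forward(self, H, A):
        n = H.size(0)
        # m_v = sum_{u in N(v)} Message(h_v, h_u); dense adjacency for clarity
        pair = torch.cat([H.unsqueeze(1).expand(n, n, -1),
                          H.unsqueeze(0).expand(n, n, -1)], dim=-1)
        M = (A.unsqueeze(-1) * self.msg(pair)).sum(dim=1)
        return torch.relu(self.upd(torch.cat([H, M], dim=-1)))

def forward_with_embeddings(layers, X, A):
    """Run the GNN and collect the per-layer node embeddings h_v^l."""
    H, embeddings = X, []
    for layer in layers:
        H = layer(H, A)
        embeddings.append(H)
    graph_emb = H.mean(dim=0)                # Eq. (4) with uniform weights P
    return graph_emb, embeddings
```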
### _Distribution-Aware Alignment_
Achieving alignment in the embedding space is not straightforward. It has several distinct difficulties. (1) It is difficult to evaluate the distance between \(\mathcal{G}\) and \(\mathcal{G}^{\prime}\) in this embedding space. Different dimensions can encode different features and carry different importance. Furthermore, \(\mathcal{G}^{\prime}\) is a substructure of the original \(\mathcal{G}\), so a shift on unimportant dimensions naturally exists. (2) Due to the complexity of graph/node distributions, it is non-trivial to design a measurement of alignment that is both computationally efficient and correlates well with distance on the distribution manifold.
To address these challenges, we design a strategy to identify explanatory substructures and preserve their alignment with original graphs in a distribution-aware manner. The basic idea is to utilize other graphs to obtain a global view of the distribution density of embeddings, providing a better measurement of alignment. Concretely, we obtain representative node/graph embeddings as anchors and use distances to these anchors as the distribution-wise representation of graphs. Alignment is conducted on obtained representation of graph pairs. Next, we go into details of this strategy.
* First, using graphs \(\{\mathcal{G}_{i}\}_{i=1}^{m}\) from the same dataset, a set of node embeddings can be obtained as \(\{\{\mathbf{h}_{v,i}^{l}\}_{v\in\mathcal{V}^{\prime}_{i}}\}_{i=1}^{m}\) for each layer \(l\), where \(\mathbf{h}_{v,i}^{l}\) denotes the embedding of node \(v\) in graph \(\mathcal{G}_{i}\). For node-level tasks, we set \(\mathcal{V}^{\prime}_{i}\) to contain only the center node of graph \(\mathcal{G}_{i}\). For graph-level tasks, \(\mathcal{V}^{\prime}_{i}\) contains the node set after the graph pooling layer, and we process them following \(\{\sum_{v\in\mathcal{V}^{\prime}_{i}}\mathbf{h}_{v,i}^{l+1}/|\mathcal{V}^{\prime}_{i}|\}_{i=1}^{m}\) to get a global graph representation.
* Then, a clustering algorithm is applied to the obtained embedding set to get \(K\) groups. The clustering centers of these groups are set to be anchors, annotated as \(\{\mathbf{h}^{l+1,k}\}_{k=1}^{K}\). In experiments, we select DBSCAN [10] as the clustering algorithm, and tune its hyper-parameters to get around \(20\) groups.
* At the \(l\)-th layer, \(\mathbf{h}_{v}^{l+1}\) is represented in terms of relative distances to those \(K\) anchors, as \(\mathbf{s}_{v}^{l+1}\in\mathbb{R}^{1\times K}\) with the \(k\)-th element calculated as \(s_{v}^{l+1,k}=\|\mathbf{h}_{v}^{l+1}-\mathbf{h}^{l+1,k}\|_{2}\).
Alignment between \(\mathcal{G}^{\prime}\) and \(\mathcal{G}\) can be achieved by comparing their representations at each layer. The alignment loss is computed as:
\[\mathcal{L}_{align}(f(\mathcal{G}),f(\mathcal{G}^{\prime}))=\sum_{l}\sum_{v\in\mathcal{V}^{\prime}}\|\mathbf{s}_{v}^{l}-\mathbf{s}_{v}^{\prime l}\|_{2}^{2}. \tag{5}\]
This metric provides a lightweight strategy for evaluating alignment in the embedding distribution manifold, by comparing relative positions w.r.t. representative clustering centers. This strategy naturally encodes the varying importance of each dimension. Fig. 2(b) gives an example, where \(\mathcal{G}\) is the graph to be explained and the red stars are anchors. \(\mathcal{G}^{\prime}_{1}\) and \(\mathcal{G}^{\prime}_{2}\) are both similar to \(\mathcal{G}\) w.r.t. absolute distances, while it is easy to see that \(\mathcal{G}^{\prime}_{1}\) is more similar to \(\mathcal{G}\) w.r.t. the anchors. In other words, the anchors can better measure the alignment and filter out spurious explanations.
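A minimal sketch of the anchor-based representation and the alignment loss of Eq. (5) is given below, assuming per-layer node embeddings have already been collected from a set of reference graphs; the DBSCAN hyper-parameters are placeholders to be tuned toward roughly 20 clusters.

```python
import numpy as np
import torch
from sklearn.cluster import DBSCAN

def compute_anchors(embeddings, eps=0.5, min_samples=5):
    """Cluster reference embeddings (numpy array [N, d]) with DBSCAN and
    return the cluster centers as anchors, one row per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    anchors = [embeddings[labels == k].mean(axis=0)
               for k in set(labels) if k != -1]   # label -1 marks noise points
    return torch.tensor(np.stack(anchors), dtype=torch.float32)

def anchor_repr(H, anchors):
    """s_v^{l,k} = ||h_v^l - anchor_k||_2 for every node and anchor."""
    return torch.cdist(H, anchors)                # shape [n, K]

def align_loss(embs, embs_prime, anchors_per_layer):
    """Eq. (5): squared distance between anchor representations, per layer."""
    loss = 0.0
    for H, Hp, anchors in zip(embs, embs_prime, anchors_per_layer):
        loss = loss + ((anchor_repr(H, anchors)
                        - anchor_repr(Hp, anchors)) ** 2).sum()
    return loss
```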
This alignment loss is used as an auxiliary task incorporated into MMI-based framework in Equation 2 to get faithful explanation as:
\[\begin{split}\underset{\mathcal{G}^{\prime}}{\text{min}}H_{C} \big{(}\hat{Y},P(\hat{Y}^{\prime}\mid\mathcal{G}^{\prime})\big{)}+\lambda \cdot\mathcal{L}_{align},\\ \text{s.t.}\quad\mathcal{G}^{\prime}\sim\mathcal{P}(\mathcal{G}, \mathbf{M}_{A},\mathbf{M}_{F}),\quad\mathcal{R}(\mathbf{M}_{F},\mathbf{M}_{A}) \leq c\end{split} \tag{6}\]
where \(\lambda\) controls the balance between prediction preservation and embedding alignment. \(\mathcal{L}_{align}\) is flexible to be incorporated into various existing explanation methods.
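Putting the pieces together, one evaluation of the objective in Eq. (6) could look as follows; `forward_with_embeddings` and `classify` are an assumed interface of the target model, and `align_fn` is any of the alignment losses in this paper.

```python
import torch.nn.functional as F

def total_loss(model, A, X, y_hat, M_a, M_f, align_fn, lam=1.0):
    """Eq. (6): prediction preservation plus lambda-weighted embedding alignment."""
    _, embs = model.forward_with_embeddings(X, A)                      # original G
    g_emb_p, embs_p = model.forward_with_embeddings(X * M_f, A * M_a)  # candidate G'
    pred_loss = F.cross_entropy(model.classify(g_emb_p.unsqueeze(0)), y_hat)
    return pred_loss + lam * align_fn(embs, embs_p)
```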
### _Direct Alignment_
As a simpler and more direct implementation, we also design a variant based on absolute distance. For layers without graph pooling, the objective can be written as \(\sum_{l}\sum_{v\in\mathcal{V}}\|\mathbf{h}_{v}^{l}-\mathbf{h}_{v}^{\prime l}\|_{2}^{2}\). For layers with graph pooling, as the structure could be different, we conduct alignment on the global representation \(\sum_{v\in\mathcal{V}^{\prime}}\mathbf{h}_{v}^{l+1}/|\mathcal{V}^{\prime}|\), where \(\mathcal{V}^{\prime}\) denotes the node set after pooling.
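Under the same assumptions as the earlier sketches, this direct variant reduces to a few lines:

```python
def direct_align_loss(embs, embs_prime):
    """Direct variant: sum_l sum_v ||h_v^l - h'_v^l||_2^2. For layers after
    graph pooling, pass the mean-pooled global representations instead."""
    return sum(((H - Hp) ** 2).sum() for H, Hp in zip(embs, embs_prime))
```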
## 6 Extended Methodology
In this section, we further examine more design choices for the strategy of alignment to obtain faithful and consistent explanations. Instead of using heuristic approaches, we explore two new directions: (1) statistically sound distance measurements based on the Gaussian mixture model, (2) fully utilizing the power of deep neural networks to capture distributions in the latent embedding space. Details of these two alignment strategies will be introduced below.
### _Gaussian-Mixture-based Alignment_
In this strategy, we model the latent embeddings of nodes (or graphs) using a mixture of Gaussian distributions, with representative node/graph embeddings as anchors (Gaussian centers). The produced embedding of each input can be compared with those prototypical anchors, and the semantic information of inputs taken by the target model would be encoded by relative distances from them.
Concretely, we first obtain prototypical representations, annotated as \(\{\mathbf{h}^{l,k}\}_{k=1}^{K}\), by running the clustering algorithm on collected embeddings \(\{\{\mathbf{h}^{l}_{v,i}\}_{v\in\mathcal{V}^{\prime}_{i}}\}_{i=1}^{m}\) from graphs \(\{\mathcal{G}_{i}\}_{i=1}^{m}\), with the same strategy as introduced in Sec. 5.2. The clustering algorithm DBSCAN [49] is adopted, and we tune its hyper-parameters to get around \(20\) groups.
Next, the probability of an encoded representation \(\mathbf{h}^{l}_{v}\) falling into each of the prototypical Gaussian centers \(\{\mathbf{h}^{l,k}\}_{k=1}^{K}\) can be computed as:
\[p_{v}^{l,k}=\frac{\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k}\|_{2}^{2}/2 \sigma^{2})}{\sum_{j=1}^{K}\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k}\|_{2}^{ 2}/2\sigma^{2})} \tag{7}\]
This probability distribution serves as a natural tool for depicting the semantics of the input graph as learned by the GNN model. Consequently, the distance between \(\mathbf{h}^{\prime l}_{v}\) and \(\mathbf{h}^{l}_{v}\) can be directly measured as the KL-divergence between the corresponding probability vectors:
\[d(\mathbf{p}^{{}^{\prime}l}_{v},\mathbf{p}^{l}_{v})=\sum_{k\in[1,\dots,K]}p^{{ }^{\prime}l,k}_{v}\cdot\log(\frac{p^{{}^{\prime}l,k}_{v}}{p^{l,k}_{v}}), \tag{8}\]
where \(\mathbf{p}^{{}^{\prime}l}_{v}\in\mathbb{R}^{K}\) denotes the distribution probability of candidate explanation embedding, \(\mathbf{h}^{{}^{\prime}l}_{v}\). Using this strategy, the alignment loss between original graph and the candidate explanation is computed as:
\[\mathcal{L}_{align}\big{(}f(\mathcal{G}),f(\mathcal{G}^{\prime})\big{)}= \sum_{l}\sum_{v\in\mathcal{V}^{\prime}}d(\mathbf{p}^{{}^{\prime}l}_{v}, \mathbf{p}^{l}_{v}), \tag{9}\]
which can be incorporated into the revised explanation framework proposed in Eq. 6.
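A minimal sketch of Eqs. (7)-(9), reusing the anchors computed earlier and treating \(\sigma\) as a tunable assumption:

```python
import torch
import torch.nn.functional as F

def gaussian_probs(H, anchors, sigma=1.0):
    """Eq. (7): softmax over negative squared distances to Gaussian centers."""
    d2 = torch.cdist(H, anchors) ** 2                   # [n, K]
    return F.softmax(-d2 / (2 * sigma ** 2), dim=-1)

def gaussian_align_loss(embs, embs_prime, anchors_per_layer, sigma=1.0):
    """Eqs. (8)-(9): KL(p' || p) between anchor-probability vectors, summed
    over layers and nodes; p' belongs to the candidate explanation."""
    loss = 0.0
    for H, Hp, anchors in zip(embs, embs_prime, anchors_per_layer):
        p = gaussian_probs(H, anchors, sigma)
        pp = gaussian_probs(Hp, anchors, sigma)
        loss = loss + (pp * (pp / p.clamp_min(1e-12)).log()).sum()
    return loss
```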
**Comparison** Compared with the alignment loss based on relative distances against anchors in Eq. 5, this new objective offers a better strategy in taking distribution into consideration. Specifically, we can show the following two advantages:
* In obtaining the distribution-aware representation of each instance, this variant uses a Gaussian distance kernel (Eq. 7) while the other one uses Euclidean distance as in Sec. 5.2, which may amplify the influence of distant anchors. We can show this by examining the gradient of the representation w.r.t. the GNN embedding \(\mathbf{h}^{l}_{v}\). In the \(l\)-th layer at dimension \(k\), the gradient of the previous variant can be computed as: \[\frac{\partial s^{l,k}_{v}}{\partial\mathbf{h}^{l}_{v}}=2\cdot(\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k})\] (10) On the other hand, the gradient of this variant is: \[\frac{\partial p^{l,k}_{v}}{\partial\mathbf{h}^{l}_{v}}\approx-\frac{\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k}\|_{2}^{2}/2\sigma^{2})\cdot(\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k})}{\sigma^{2}\cdot\sum_{j=1}^{K}\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,j}\|_{2}^{2}/2\sigma^{2})}\] (11) It is easy to see that for the previous variant, the magnitude of the gradient grows linearly with the distance to the corresponding anchor. For this variant, on the other hand, the coefficient \(\frac{\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,k}\|_{2}^{2}/2\sigma^{2})}{\sigma^{2}\cdot\sum_{j=1}^{K}\exp(-\|\mathbf{h}^{l}_{v}-\mathbf{h}^{l,j}\|_{2}^{2}/2\sigma^{2})}\) down-weights the importance of distant anchors while up-weighting the importance of similar anchors, which is more desirable for obtaining distribution-aware representations.
* In computing the distance between the representations of two inputs, this variant adopts the KL divergence as in Eq. 8, which is scale-agnostic, compared to the other one directly using Euclidean distance as in Eq. 5. Again, we can show the gradient of the alignment loss w.r.t. the obtained representations that encode distribution information. For the previous variant: \[\frac{\partial d(\mathbf{s}^{l}_{v},\mathbf{s}^{\prime l}_{v})}{\partial s^{\prime l,k}_{v}}=2\cdot(s^{\prime l,k}_{v}-s^{l,k}_{v})\] (12) For this variant based on the Gaussian mixture model, the gradient can be computed as: \[\frac{\partial d(\mathbf{p}^{\prime l}_{v},\mathbf{p}^{l}_{v})}{\partial p^{\prime l,k}_{v}}=1+\log\Big(\frac{p^{\prime l,k}_{v}}{p^{l,k}_{v}}\Big)\] (13) It can be observed that the previous strategy focuses on representation dimensions with a large absolute difference, and is therefore sensitive to the scale of each dimension. This strategy, on the other hand, uses the logarithm of the relative ratio plus a constant, which is scale-agnostic in each dimension.
### _MI-based Alignment_
In this strategy, we further consider the utilization of deep models to capture the distribution and estimate the semantic similarity of two inputs, and incorporate it into the alignment loss for discovering faithful and consistent explanations. Specifically, we train a deep model to estimate the mutual information (MI) between two input graphs, and use its prediction as a measurement of alignment between original graph and its candidate explanation. This strategy circumvents the reliance on heuristic strategies and is purely data-driven, which can be learned in an end-to-end manner.
To learn the mutual information estimator, we adopt a neural network and train it to be a Jensen-Shannon MI estimator [50]. Concretely, we train this JSD-based estimator on top of intermediate embeddings with the learning objective as follows, which offers better stability in optimization:
\[\begin{split}\min_{g_{mi}}\mathcal{L}_{mi}=&\,\mathbb{E}_{\mathcal{G}\in\{\mathcal{G}_{i}\}_{i=1}^{m}}\mathbb{E}_{v\in\mathcal{G}}\mathbb{E}_{l}\big[\mathbb{E}_{\mathbf{h}^{l,+}_{v}}sp(-T^{l}(\mathbf{h}^{l}_{v},\mathbf{h}^{l,+}_{v}))\\ &+\mathbb{E}_{\mathbf{h}^{l,-}_{v}}sp(T^{l}(\mathbf{h}^{l}_{v},\mathbf{h}^{l,-}_{v}))\big],\end{split} \tag{14}\]
where \(\mathbb{E}\) denotes expectation. In this equation, \(T^{l}(\cdot)\) is a compatibility estimation function in the \(l\)-th layer, and we denote \(\{T^{l}(\cdot)\}_{l}\) as the MI estimator \(g_{mi}\). The activation function \(sp(\cdot)\) is the _softplus_ function, and \(\mathbf{h}^{l,+}_{v}\) represents the embedding of an augmented node \(v\) that forms a **positive pair** with \(v\) in the original graph. On the contrary, \(\mathbf{h}^{l,-}_{v}\) denotes the embedding of an augmented node that forms a **negative pair** with the original input. A positive pair is obtained by randomly dropping intermediate neurons, corresponding to masking out a ratio of the original input, and a negative pair is obtained
as embeddings of different nodes. This objective can guide \(g_{mi}\) to capture the correlation or similarity between two input graphs encoded by the target model. With this MI estimator learned, alignment loss between \(\mathcal{G}\) and \(\mathcal{G}^{{}^{\prime}}\) can be readily computed:
\[\mathcal{L}_{align}\big{(}f(\mathcal{G}),f(\mathcal{G}^{\prime})\big{)}=\sum_{l }\sum_{v\in\mathcal{V}^{\prime}}sp(-T^{l}(\mathbf{h}_{v}^{{}^{\prime}l}, \mathbf{h}_{v}^{l})), \tag{15}\]
which can be incorporated into the revised explanation framework proposed in Eq. 6.
In this strategy, we design a data-driven approach by capturing the mutual information between two inputs, which circumvents the potential biases of using human-crafted heuristics.
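For concreteness, the sketch below instantiates the per-layer compatibility function \(T^{l}\) as a bilinear score and evaluates the JSD-style objective of Eq. (14) and the alignment loss of Eq. (15); the bilinear form and the way positive/negative embeddings are supplied are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Compat(nn.Module):
    """Compatibility function T^l: a bilinear score on a pair of embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h, h_other):
        return self.bilinear(h, h_other).squeeze(-1)

def jsd_mi_loss(T, h, h_pos, h_neg):
    """Eq. (14) for one layer: softplus losses on positive/negative pairs."""
    return (F.softplus(-T(h, h_pos)) + F.softplus(T(h, h_neg))).mean()

def mi_align_loss(Ts, embs, embs_prime):
    """Eq. (15): the learned estimator scores the alignment of G' against G."""
    return sum(F.softplus(-T(Hp, H)).sum()
               for T, H, Hp in zip(Ts, embs, embs_prime))
```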
## 7 Theoretical Analysis
With these alignment strategies and our new explanation framework introduced, we next look deeper and provide theoretical justifications for the proposed new loss function in Eq. 6. In this section, we first propose a new explanation objective to prevent spurious explanations, based on our analysis in Sec. 4. Then, we theoretically show that it coincides with our proposed loss function under mild relaxations.
### _New Explanation Objective_
From previous discussions, it has been shown that \(\mathcal{G}^{\prime}\) obtained via Equation 1 cannot be safely used as an explanation. One main drawback of existing GNN explanation methods lies in the inductive bias that the same outcome implies the same cause, leaving existing approaches vulnerable to spurious explanations. An illustration is given in Figure 3. The objective proposed in Equation 1 optimizes the mutual information between explanation candidate \(\mathcal{G}^{\prime}\) and \(\hat{Y}\), corresponding to maximizing the overlap between \(H(\mathcal{G}^{\prime})\) and \(H(\hat{Y})\) in Figure 3(a), i.e., region \(S_{1}\cup S_{2}\) in Figure 3(b). Here, \(H\) denotes information entropy. However, this learning target cannot prevent the danger of generating spurious explanations: the identified \(\mathcal{G}^{\prime}\) may fall into region \(S_{2}\), which cannot faithfully represent graph \(\mathcal{G}\). Instead, a more sensible objective is to maximize region \(S_{1}\) in Figure 3(b). The intuition is that, when searching the input space for subgraphs that cause the same outcome, the identified \(\mathcal{G}^{\prime}\) should account for both representative and discriminative parts of the original \(\mathcal{G}\), to prevent spurious explanations that produce the same outcome due to different causes. Concretely, finding \(\mathcal{G}^{\prime}\) that maximizes \(S_{1}\) can be formalized as:
\[\min_{\mathcal{G}^{\prime}}-I(\mathcal{G},\mathcal{G}^{\prime}, \hat{Y}),\] (16) s.t. \[\mathcal{G}^{\prime}\sim \mathcal{P}(\mathcal{G},\mathbf{M}_{A},\mathbf{M}_{F})\quad \mathcal{R}(\mathbf{M}_{F},\mathbf{M}_{A})\leq c\]
### _Connecting to Our Method_
\(I(\mathcal{G},\mathcal{G}^{\prime},\hat{Y})\) is intractable as the latent generation mechanism of \(\mathcal{G}\) is unknown. In this part, we expand this objective, connect it to Equation 6, and construct its proxy optimizable form as:
\[\begin{split} I(\mathcal{G},\mathcal{G}^{\prime},\hat{Y})&=\sum_{y\sim\mathcal{Y}}\sum_{\mathcal{G}}\sum_{\mathcal{G}^{\prime}}P(\mathcal{G},\mathcal{G}^{\prime},y)\cdot\log\frac{P(\mathcal{G}^{\prime},y)P(\mathcal{G},\mathcal{G}^{\prime})P(\mathcal{G},y)}{P(\mathcal{G},\mathcal{G}^{\prime},y)P(\mathcal{G})P(\mathcal{G}^{\prime})P(y)}\\ &=\sum_{y\sim\mathcal{Y}}\sum_{\mathcal{G}^{\prime}}P(\mathcal{G}^{\prime},y)\cdot\log\frac{P(\mathcal{G}^{\prime},y)}{P(\mathcal{G}^{\prime})P(y)}+\sum_{\mathcal{G}}\sum_{\mathcal{G}^{\prime}}P(\mathcal{G},\mathcal{G}^{\prime})\cdot\log\frac{P(\mathcal{G},\mathcal{G}^{\prime})}{P(\mathcal{G})P(\mathcal{G}^{\prime})}\\ &\qquad-\sum_{y\sim\mathcal{Y}}\sum_{\mathcal{G}}P(\mathcal{G},y)\sum_{\mathcal{G}^{\prime}}P(\mathcal{G}^{\prime}\mid\mathcal{G},y)\cdot\log P(\mathcal{G}^{\prime}\mid\mathcal{G},y)+\sum_{y\sim\mathcal{Y}}\sum_{\mathcal{G}}\sum_{\mathcal{G}^{\prime}}P(\mathcal{G},y,\mathcal{G}^{\prime})\cdot\log P(\mathcal{G}^{\prime})\\ &=I(\mathcal{G}^{\prime},\hat{Y})+I(\mathcal{G},\mathcal{G}^{\prime})+H(\mathcal{G}^{\prime}\mid\mathcal{G},\hat{Y})-H(\mathcal{G}^{\prime}).\end{split}\]
Since both \(H(\mathcal{G}^{\prime}|\mathcal{G},\hat{Y})\) and \(H(\mathcal{G}^{\prime})\) depict the entropy of the explanation \(\mathcal{G}^{\prime}\) and are closely related to the perturbation budget, we can neglect these two terms and obtain a surrogate optimization objective for \(\max_{\mathcal{G}^{\prime}}I(\mathcal{G},\mathcal{G}^{\prime},\hat{Y})\) as \(\max_{\mathcal{G}^{\prime}}I(\hat{Y},\mathcal{G}^{\prime})+I(\mathcal{G}^{\prime},\mathcal{G})\).
In \(\max_{\mathcal{G}^{\prime}}I(\hat{Y},\mathcal{G}^{\prime})+I(\mathcal{G}^{\prime},\mathcal{G})\), the first term \(\max_{\mathcal{G}^{\prime}}I(\hat{Y},\mathcal{G}^{\prime})\) is the same as Eq. (1). Following [12], we relax it as \(\min_{\mathcal{G}^{\prime}}H_{C}(\hat{Y},\hat{Y}^{\prime}|\mathcal{G}^{\prime})\), optimizing \(\mathcal{G}^{\prime}\) to preserve the original prediction outputs. The second term, \(\max_{\mathcal{G}^{\prime}}I(\mathcal{G}^{\prime},\mathcal{G})\), corresponds to maximizing the consistency between \(\mathcal{G}^{\prime}\) and \(\mathcal{G}\). Although the graph generation process is latent, under the safe assumption that the embedding \(\mathbf{E}_{\mathcal{G}}\) extracted by \(f\) is representative of \(\mathcal{G}\), we can construct a proxy objective \(\max_{\mathcal{G}^{\prime}}I(\mathbf{E}_{\mathcal{G}^{\prime}},\mathbf{E}_{\mathcal{G}})\), improving consistency in the embedding space. In this work, we optimize this objective by aligning the representations, either through a simplified distance metric or through distribution-aware alignment.
## 8 Experiment
Fig. 3: Illustration of our proposed new objective.
In this section, we conduct a set of experiments to evaluate the benefits of the proposed auxiliary task in providing instance-level post-hoc explanations. Experiments are conducted on \(5\) datasets, and the obtained explanations are evaluated with respect to both faithfulness and consistency. Particularly, we aim to answer the following questions:
* **RQ1** Can the proposed framework perform strongly in identifying explanatory sub-structures for interpreting GNNs?
* **RQ2** Is the consistency problem severe in existing GNN explanation methods? Could the proposed embedding alignment improve GNN explainers over this criterion?
* **RQ3** Can our proposed strategies prevent spurious explanations and be more faithful to the target GNN model?
### _Experiment Settings_
#### 8.1.1 Datasets
We conduct experiments on five publicly available benchmark datasets for explainability of GNNs. The key statistics of the datasets are summarized in Table I.
* BA-Shapes [12]: A node classification dataset with a Barabasi-Albert (BA) graph of \(300\) nodes as the base structure. \(80\) "house" motifs are randomly attached to the base graph. Nodes in the base graph are labeled as \(0\) and those in the motifs are labeled based on positions. Explanations are conducted on those attached nodes, with edges inside the corresponding motif as ground-truth.
* Tree-Grid [12]: A node classification dataset created by attaching \(80\) grid motifs to a single \(8\)-layer balanced binary tree. Nodes in the base graph are labeled as \(0\) and those in the motifs are labeled as \(1\). Edges inside the same motif are used as ground-truth explanations for nodes from class 1.
* Infection [51]: A single network initialized with an ER random graph. \(5\%\) of nodes are labeled as infected, and other nodes are labeled based on their shortest distances to those infected ones. Labels larger than 4 are clipped. Following [51], infected nodes and nodes with multiple shortest paths are neglected. For each node, its shortest path is used as the ground-truth explanation.
* Mutag [12]: A graph classification dataset. Each graph corresponds to a molecule with nodes for atoms and edges for chemical bonds. Molecules are labeled with consideration of their chemical properties, and discriminative chemical groups are identified using prior domain knowledge. Following PGExplainer [13], chemical groups \(NH_{2}\) and \(NO_{2}\) are used as ground-truth explanations.
* Graph-SST5 [18]: A graph classification dataset constructed from text, with labels from sentiment analysis. Each node represents a word and edges denote word dependencies. In this dataset, there is no ground-truth explanation provided, and heuristic metrics are usually adopted for evaluation.
#### 8.1.2 Baselines
To evaluate the effectiveness of the proposed framework, we select a group of representative and state-of-the-art instance-level post-hoc GNN explanation methods as baselines. The details are given as follows:
* GRAD [13]: A gradient-based method, which assigns importance weights to edges by computing gradients of GNN's prediction w.r.t the adjacency matrix.
* ATT [13]: It utilizes average attention weights inside self-attention layers to distinguish important edges.
* GNNExplainer [12]: A perturbation-based method which learns an importance matrix separately for every instance.
* PGExplainer [13]: A parameterized explainer that learns a GNN to predict important edges for each graph, and is trained via testing different perturbations;
* Gem [45]: Similar to PGExplainer but from the causal view, based on the estimated individual causal effect.
* RG-Explainer [43]: A reinforcement learning (RL) enhanced explainer for GNN, which constructs \(\mathcal{G}^{\prime}\) by sequentially adding nodes with an RL agent.
Our proposed alignment algorithms from Sections 5 and 6 are implemented and incorporated into two representative GNN explanation frameworks, i.e., GNNExplainer [12] and PGExplainer [13].
#### 8.1.3 Configurations
Following existing work [13], a three-layer GCN [6] is trained on each dataset as the target model, with a train/validation/test split of 8:1:1. For graph classification, we concatenate the outputs of global max pooling and global mean pooling as the graph representation. All explainers are trained using the ADAM optimizer with weight decay set to \(5e\)-\(4\). For GNNExplainer, the learning rate is initialized to \(0.01\) and the number of training epochs is \(100\). For PGExplainer, the learning rate is initialized to \(0.003\) and the number of training epochs is set to \(30\). The hyper-parameter \(\lambda\), which controls the weight of \(\mathcal{L}_{align}\), is tuned via grid search. Explanations are tested on all instances.
#### 8.1.4 Evaluation Metrics
To evaluate _faithfulness_ of different methods, following [18], we adopt two metrics: (1) AUROC score on edge importance and (2) Fidelity of explanations. On benchmarks with oracle explanations available, we can compute the AUROC score on identified edges as the well-trained target GNN should follow those predefined explanations. On datasets without ground-truth explanations, we evaluate explanation quality with fidelity measurement following [18]. Concretely, we observe prediction changes by sequentially removing edges following assigned importance weight, and a faster performance drop represents stronger fidelity.
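A possible implementation of the fidelity measurement, under the assumption of dense adjacency/importance matrices and the `model(features, adjacency)` interface used in the earlier sketches:

```python
import torch

def fidelity_curve(model, A, X, y_hat, edge_importance, max_k=10):
    """Remove the top-k most important edges one at a time and record whether
    the model still outputs the original label y_hat (1 = unchanged)."""
    A = A.clone()
    order = torch.argsort(edge_importance.flatten(), descending=True)
    curve = []
    for k in range(max_k):
        i, j = divmod(order[k].item(), A.size(1))
        A[i, j] = A[j, i] = 0.0          # undirected: drop both directions
        pred = model(X, A).argmax(dim=-1)
        curve.append(int(pred.item() == y_hat))
    return curve   # averaged over instances, a faster drop = stronger fidelity
```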
To evaluate _consistency_ of explanations, we randomly run each method \(5\) times, and report average structural hamming distance (SHD) [52] among obtained explanations. A smaller SHD score indicates stronger consistency.
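Assuming each run's explanation is binarized (e.g., by keeping the top-K edges), the consistency score can be computed as:

```python
import itertools

def shd(mask_a, mask_b):
    """Structural hamming distance: number of edge slots where two binary
    explanation masks disagree."""
    return (mask_a != mask_b).sum().item()

def mean_pairwise_shd(masks):
    """Average SHD over all pairs of explanations from repeated runs."""
    pairs = list(itertools.combinations(masks, 2))
    return sum(shd(a, b) for a, b in pairs) / len(pairs)
```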
### _Explanation Faithfulness_
To answer **RQ1**, we compare explanation methods in terms of AUROC score and explanation fidelity.
#### 8.2.1 AUROC on Edges
In this subsection, AUROC scores of different methods are reported by comparing assigned edge importance weights with ground-truth explanations.
\begin{table}
\begin{tabular}{l|c c c c c} \hline & BA- & Tree- & Infection & Mutag & SST-5 \\ & Shapes & Grid & & & \\ \hline Level & Node & Node & Node & Graph & Graph \\ \hline Graphs & \(1\) & \(1\) & \(1\) & \(4,337\) & \(11,855\) \\ Avg.Node & \(700\) & \(1,231\) & \(1,000\) & \(30.3\) & \(19.8\) \\ Avg.Edge & \(4,110\) & \(3,410\) & \(4,001\) & \(61.5\) & \(18.8\) \\ \hline Classes & \(4\) & \(2\) & \(5\) & \(2\) & \(5\) \\ \hline \end{tabular}
\end{table} TABLE I: Statistics of datasets
For the baseline methods GRAD, ATT, Gem, and RG-Explainer, the performances reported in their original papers are presented. GNNExplainer and PGExplainer are re-implemented, upon which the four alignment strategies are instantiated and tested. Each experiment is conducted \(5\) times, and we summarize the average performance in Table II. A higher AUROC score indicates more accurate explanations. From the results, we can make the following observations:
* Across all four datasets, with both GNNExplainer or PGExplainer as the base method, incorporating embedding alignment can improve the quality of obtained explanations;
* Among proposed alignment strategies, those distribution-aware approaches, particularly the variant based on Gaussian mixture models, achieve the best performance. In most cases, the variant utilizing latent Gaussian distribution demonstrates stronger improvements, showing the best results on \(3\) out of \(4\) datasets;
* On more complex datasets like Mutag, the benefit of introducing embedding alignment is more significant, e.g., the performance of PGExplainer improves from 83.7% to 95.9% with Align_Gaus. This result also indicates that spurious explanations become more severe as dataset complexity increases.
#### 8.2.2 Explanation Fidelity
In addition to comparing to ground-truth explanations, we also evaluate the obtained explanations in terms of fidelity. Specifically, we sequentially remove edges from the graph by following the importance weights learned by the explanation model and test the classification performance. Generally, the removal of truly important edges significantly degrades the classification performance; thus, a faster performance drop represents stronger fidelity. We conduct experiments on Tree-Grid and Graph-SST5. Each experiment is conducted \(3\) times, and we report results averaged across all instances of each dataset. PGExplainer and GNNExplainer are used as the backbone methods. We plot the curves of prediction accuracy against the number of removed edges in Fig. 4. From the figure, we can observe that when the proposed embedding alignment is incorporated, the classification accuracy drops much faster under edge removal, which shows that the proposed embedding alignment helps to identify the edges that are truly important for the GNN's classification, hence providing better explanations. Furthermore, distribution-aware alignment strategies like the variant based on Gaussian mixture models demonstrate stronger fidelity in most cases. Besides, it can be noted that on Tree-Grid, the fidelity of mutual-information-based alignment depends on the number of removed edges, achieving better results with the edge number within \([8,15]\).
From these two experiments, we can observe that embedding alignment can obtain explanations of better faithfulness and is flexible to be incorporated into various models such as GNNExplainer and PGExplainer, which answers RQ1.
PGExplainer. These results validate the effectiveness of our proposal in obtaining consistent explanations.
### _Ability in Avoiding Spurious Explanations_
Existing graph explanation benchmarks are usually designed to be less ambiguous, containing only one oracle cause of the labels, and identified explanatory substructures are evaluated by comparison with the ground-truth explanation. However, this result can be misleading, as the faithfulness of explanations in more complex scenarios is left untested. Real-world datasets are usually rich in spurious patterns, and a trained GNN can contain diverse biases, which sets a tighter requirement for explanation methods. Thus, to evaluate whether our framework can alleviate the spurious explanation issue and answer **RQ3**, we create a new graph-classification dataset: MixMotif, which enables us to train a biased GNN model and test whether explanation methods can successfully expose this bias.
Specifically, inspired by [14], we design three types of base graphs, i.e., Tree, Ladder, and Wheel, and three types of motifs, i.e., Cycle, House, and Grid. With a mix ratio \(\gamma\), motifs are preferably attached to base graphs. For example, Cycle is attached to Tree with probability \(\frac{2}{3}\gamma+\frac{1}{3}\), and to others with probability \(\frac{1-\gamma}{3}\). So are the cases for House to Ladder and Grid to Wheel. Labels of obtained graphs are set as type of the motif. When \(\gamma\) is set to \(0\), each motif has the same probability of being attached to the three base graphs. In other words, there's no bias on which type of base graph to attach for each type of motif. Thus, we consider the dataset with \(\gamma=0\) as clean or bias-free. We would expect GNN trained on data with \(\gamma=0\) to focus on the motif structure for motif classification. However, when \(\gamma\) becomes larger, the spurious correlation between base graph and the label would exist, i.e., a GNN might utilize the base graph structure for motif classification instead of relying on the motif structure. For each setting, the created dataset contains \(3,000\) graphs, and train:evaluation:test are split as \(5:2:3\).
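The attachment rule can be written down directly; the sketch below reproduces the sampling probabilities stated above (graph construction itself is omitted):

```python
import random

BASES  = ["Tree", "Ladder", "Wheel"]
MOTIFS = ["Cycle", "House", "Grid"]          # the label is the motif type
PREFERRED = dict(zip(MOTIFS, BASES))         # Cycle->Tree, House->Ladder, Grid->Wheel

def sample_base(motif, gamma):
    """Preferred base with prob (2/3)*gamma + 1/3; each other base with prob
    (1 - gamma)/3, so the three probabilities sum to one."""
    p_pref = (2.0 / 3.0) * gamma + 1.0 / 3.0
    others = [b for b in BASES if b != PREFERRED[motif]]
    return PREFERRED[motif] if random.random() < p_pref else random.choice(others)

def make_dataset(n_graphs, gamma):
    """MixMotif recipe: pair each sampled motif with a sampled base type."""
    data = []
    for _ in range(n_graphs):
        motif = random.choice(MOTIFS)
        data.append((sample_base(motif, gamma), motif))   # (base type, label)
    return data
```

With \(\gamma=0\) every motif is attached uniformly at random, while with \(\gamma=0.7\) the base graph becomes a spurious predictor of the label.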
In this experiment, we set \(\gamma\) to \(0\) and \(0.7\) separately, and train GNN \(f_{0}\) and \(f_{0.7}\) for each setting. Two models are tested in graph classification performance. Then, explanation methods are applied to and fine-tuned on \(f_{0}\). Following that, these explanation methods are applied to explain \(f_{0.7}\) using found hyper-parameters. Results are summarized in Table V.
From Table V, we can observe that: (1) \(f_{0}\) achieves almost perfect graph classification performance during testing. This high accuracy indicates that it captures the genuine pattern, relying on motifs to make predictions. Looking at the explanation results, our proposal offers more faithful explanations, achieving a higher AUROC on motifs. (2) \(f_{0.7}\) fails to predict well when tested with \(\gamma=0\), showing that it contains biases and no longer depends solely on the motif structure for prediction. Although ground-truth explanations are unknown in this case, a successful explanation should expose this bias. However, PGExplainer produces explanations similar to those for the clean model, still highly in accord with the motif structures. Instead, for explanations produced by embedding alignment, the AUROC score drops from \(0.795\) to \(0.266\), exposing the change in prediction rationale and hence the bias. (3) In summary, our proposal provides more faithful explanations in both the clean and mixed settings, while PGExplainer suffers from spurious explanations and fails to faithfully explain the GNN's predictions, especially in the presence of biases.
### _Hyperparameter Sensitivity Analysis_
In this part, we vary the hyper-parameter \(\lambda\) to test the sensitivity of the proposed framework to its value. \(\lambda\) controls the weight of our proposed embedding alignment task. To keep it simple, all other configurations are kept unchanged, and \(\lambda\) is varied within \(\{1e{-}3,1e{-}2,1e{-}1,1,10,1e2,1e3\}\). PGExplainer is adopted as the base method. Experiments are repeated \(3\) times on the Tree-Grid and Mutag datasets. Averaged results are visualized in Figure 5. From the figure, we can make the following observations:
* For all four variants, increasing \(\lambda\) has a positive effect at first, and the further increase would result in a performance drop. For example on the Tree-Grid dataset, best results of variants based on anchors, latent Gaussian mixture models and mutual information scores are all obtained with \(\lambda\) around \(1\). When \(\lambda\) is small, the explanation alignment regularization in Eq. 6 will be underweighted.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline & \multicolumn{6}{c}{Top-K Edges} \\ \hline Methods & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline GNNExplainer & 1.12 & 1.74 & 2.65 & 3.40 & 4.05 & 4.78 \\ +Align\_Emb & 1.05 & 1.61 & 2.33 & 3.15 & 3.77 & 4.12 \\ +Align\_Anchor & 1.06 & 1.59 & 2.17 & 3.06 & 3.54 & 3.95 \\ \hline +Align\_MI & 1.11 & 1.68 & 2.42 & 3.23 & 3.96 & 4.37 \\ +Align\_Gaus & 1.03 & 1.51 & 2.19 & 3.02 & 3.38 & 3.85 \\ \hline PGExplainer & 0.91 & 1.53 & 2.10 & 2.57 & 3.05 & 3.42 \\ +Align\_Emb & 0.55 & 0.96 & 1.13 & 1.31 & 1.79 & 2.04 \\ +Align\_Anchor & **0.51** & **0.90** & **1.05** & 1.27 & 1.62 & 1.86 \\ \hline +Align\_MI & 0.95 & 1.21 & 1.73 & 2.25 & 2.67 & 2.23 \\ +Align\_Gaus & 0.59 & 1.34 & 1.13 & **0.84** & **1.25** & **1.15** \\ \hline \end{tabular}
\end{table} TABLE IV: Consistency of explanation in terms of average SHD distance across \(5\) rounds of random running on Mutag.
\begin{table}
\begin{tabular}{l|c c c c c c} \hline & \multicolumn{6}{c}{Top-K Edges} \\ \hline Methods & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline GNNExplainer & 0.86 & 1.85 & 2.48 & 3.14 & 3.77 & 4.39 \\ +Align\_Emb & 0.77 & 1.23 & 1.28 & 0.96 & 1.81 & 2.72 \\ +Align\_Anchor & 0.72 & 1.06 & 0.99 & 0.53 & 1.52 & 2.21 \\ \hline +Align\_MI & 0.74 & 1.11 & 1.08 & 1.32 & 1.69 & 2.27 \\ +Align\_Gaus & 0.68 & 1.16 & 1.13 & 0.72 & 1.39 & 2.13 \\ \hline PGExplainer & 0.74 & 1.23 & 0.76 & 0.46 & 0.78 & 1.38 \\ +Align\_Emb & 0.11 & 0.15 & 0.13 & **0.11** & 0.24 & 0.19 \\ +Align\_Anchor & 0.07 & 0.12 & 0.13 & 0.16 & 0.21 & **0.13** \\ \hline +Align\_MI & 0.28 & 0.19 & 0.27 & 0.15 & 0.20 & 0.16 \\ +Align\_Gaus & **0.05** & **0.08** & **0.10** & 0.12 & **0.19** & **0.13** \\ \hline \end{tabular}
\end{table} TABLE III: Consistency of explanation in terms of average SHD across \(5\) rounds of random running on Tree-Grid.
\begin{table}
\begin{tabular}{l|c c|c c} \hline & \multicolumn{4}{c}{\(\gamma\) in Training} \\ & \multicolumn{2}{c|}{\(0\)} & \multicolumn{2}{c}{\(0.7\)} \\ \hline Classification, \(\gamma=0\) in test & \multicolumn{2}{c|}{\(0.982\)} & \multicolumn{2}{c}{\(0.765\)} \\ Classification, \(\gamma=0.7\) in test & \multicolumn{2}{c|}{\(0.78\)} & \multicolumn{2}{c}{\(0.994\)} \\ \hline Explanation & PGExplainer & +Align & PGExplainer & +Align \\ AUROC on Motif & \(0.711\) & \(\mathbf{0.795}\) & \(0.748\) & \(\mathbf{0.266}\) \\ & \multicolumn{2}{c|}{(Higher is better)} & \multicolumn{2}{c}{(Lower is better)} \\ \hline \end{tabular}
\end{table} TABLE V: Performance on MixMotif. Two GNNs are trained with different \(\gamma\). We check their performance in graph classification, then compare obtained explanations with the motif.
On the other hand, a too-large \(\lambda\) may underweight the MMI-based explanation framework, which preserves the predictive power of obtained explanations.
* Among these variants, the strategy based on latent Gaussian mixture models shows the strongest performance in most cases. For example, for both datasets Tree-Grid and Mutag, this variant achieves the highest AUROC scores on identified explanatory edges. On the other hand, the variant directly using Euclidean distances shows inferior performances in most cases. We attribute this to their different ability in modeling the distribution and conducting alignment.
## 9 Conclusion
In this work, we study the novel problem of obtaining faithful and consistent explanations for GNNs, which is largely neglected by the existing MMI-based explanation framework. Through a close analysis of the inference process of GNNs, we propose a simple yet effective approach that aligns internal embeddings. Theoretical analysis shows that it is more faithful by design, optimizing an objective that encourages high mutual information between the original graph, the GNN output, and the identified explanation. Four different strategies are designed: directly adopting Euclidean distance, using anchors, using the KL divergence with Gaussian mixture models, and using estimated MI scores. All of these algorithms can be incorporated into existing methods with little effort. Experiments validate their effectiveness in promoting the faithfulness and consistency of explanations.
In the future, we will seek more robust explanations. Increased robustness indicates stronger generality and could provide better class-level interpretation at the same time. Besides, the evaluation of explanation methods also needs further study. Existing benchmarks are usually clear and unambiguous, failing to simulate complex real-world scenarios.
## Acknowledgments
This material is based upon work supported by, or in part by, the National Science Foundation under grants number IIS-1707548 and IIS-1909702, the Army Research Office under grant number W911NF21-1-0198, and DHS CINA under grant number E205949D. The findings and conclusions in this paper do not necessarily reflect the view of the funding agency.
|
2302.09394 | Deep Neural Networks based Meta-Learning for Network Intrusion Detection | The digitization of different components of industry and inter-connectivity
among indigenous networks have increased the risk of network attacks. Designing
an intrusion detection system to ensure security of the industrial ecosystem is
difficult as network traffic encompasses various attack types, including new
and evolving ones with minor changes. The data used to construct a predictive
model for computer networks has a skewed class distribution and limited
representation of attack types, which differ from real network traffic. These
limitations result in dataset shift, negatively impacting the machine learning
models' predictive abilities and reducing the detection rate against novel
attacks. To address the challenges, we propose a novel deep neural network
based Meta-Learning framework; INformation FUsion and Stacking Ensemble
(INFUSE) for network intrusion detection. First, a hybrid feature space is
created by integrating decision and feature spaces. Five different classifiers
are utilized to generate a pool of decision spaces. The feature space is then
enriched through a deep sparse autoencoder that learns the semantic
relationships between attacks. Finally, the deep Meta-Learner acts as an
ensemble combiner to analyze the hybrid feature space and make a final
decision. Our evaluation on stringent benchmark datasets and comparison to
existing techniques showed the effectiveness of INFUSE with an F-Score of 0.91,
Accuracy of 91.6%, and Recall of 0.94 on the Test+ dataset, and an F-Score of
0.91, Accuracy of 85.6%, and Recall of 0.87 on the stringent Test-21 dataset.
These promising results indicate the strong generalization capability and the
potential to detect network attacks. | Anabia Sohail, Bibi Ayisha, Irfan Hameed, Muhammad Mohsin Zafar, Hani Alquhayz, Asifullah Khan | 2023-02-18T18:00:05Z | http://arxiv.org/abs/2302.09394v2 | # Deep Neural Networks based Meta-Learning for Network Intrusion Detection
###### Abstract
Designing an intrusion detection system is difficult as network traffic encompasses various attack types, including new and evolving ones with minor changes. The data used to construct a predictive model has a skewed class distribution and limited representation of attack types, which differ from real network traffic. These limitations result in dataset shift, negatively impacting the machine learning models' predictive abilities and reducing the detection rate against novel attacks. To address the challenge of dataset shift, we introduce the INformation FUsion and Stacking Ensemble (INFUSE) for network intrusion detection. This approach further improves its predictive power by employing a deep neural network-based Meta-Learner on top of INFUSE. First, a hybrid feature space is created by integrating decision and feature spaces. Five different classifiers are utilized to generate a pool of decision spaces. The feature space is then enriched through a deep sparse autoencoder that learns the semantic relationships between attacks. Finally, the deep Meta-Learner acts as an ensemble combiner to analyze the hybrid feature space and make a final decision. Our evaluation on stringent benchmark datasets and comparison to existing techniques showed the effectiveness of INFUSE with an F-Score of 0.91, Accuracy of 91.6%, and Recall of 0.94 on the Test\({}^{+}\) dataset, and an F-Score of 0.91, Accuracy of 85.6%, and Recall of 0.87 on the stringent Test-21 dataset. These promising results indicate the proposed technique has strong generalization capability and the potential to detect network attacks.
Intrusion Detection, Deep Neural Networks, Autoencoder, Deep Stacking Ensemble, Information Fusion, Deep Meta-Learner.
## 1 Introduction
The widespread use of various technologies such as 4G/5G networks, the Internet of Things, smart devices, and cloud-based services has significantly increased the size of global networks [1]. This expansion, together with prevailing vulnerabilities in networks, has made network attacks more sophisticated and increased the risk of network-based attacks [2, 3, 4]. The emergence of COVID-19 in late 2019 completely transformed the perception of internet-based activities among the general population, shifting trade and education to online mode [5, 6]. This transition has greatly expanded the number of network users, adding pace to the already-growing global network: internet usage was projected to reach 62.5% of the world population in 2022, up from 56.7% [7, 8].
The large population relying on the global network makes network security a primary area of focus, as a considerable amount of privacy-sensitive data is being generated and distributed across multiple network nodes. Network traffic is targeted by a diverse range of attacks, including Probing, Denial of Service, User to Root, SQL injections, Cross-site Scripting, Web attacks, and many others [9]. In November 2021, a DDoS attack three times larger than previous records was initiated by approximately 15,000 bots, with a peak throughput of 3.45 Tbps. Recently, during the Russia-Ukraine war, the digital infrastructure of Ukraine [10] was crippled by cyber attacks [11].
The growing need for network security and the continuous evolution of intrusion methods demand active research in the development of intrusion detection systems (IDS). Machine learning (ML) techniques are seen as a valuable tool for developing anomaly-based IDS because of their ability to model anomalous behaviour [12, 13]. However, the learning paradigm of ML techniques assumes that the probability distribution of the test set is the same as that of the training set [14]. In real-world applications, especially in network traffic, the data distribution often shifts between training and testing data. The continuous emergence of new and variant attack types poses a challenge of dataset shift in intrusion detection systems [15]. This raises concerns about poor performance of ML models on the test set due to the underlying inductive bias of the model.
Dataset shift has received relatively little attention in the context of IDS, despite the fact that new attacks are emerging every day. Therefore, there is a pressing need to develop a technique that can effectively address the non-stationary nature of attacks. It is challenging to choose a model with the right bias and optimal hypothesis space that can accurately generalize the deviation in
attack distribution. Thus, it is necessary to develop a technique that can model the deviation pattern of attacks, rather than focusing solely on specific attack types.
This work proposes a novel Information Fusion and Stacking Heterogeneous Ensemble-based Model for Network Intrusion Detection (INFUSE) by performing both decision- and feature-level information fusion to address the dataset shift problem. The performance of the proposed INFUSE is assessed on the benchmarked network traffic dataset. The distinct contributions of this work are:
* Diverse feature spaces were explored to model the data distribution, which can address unseen emerging attack variations. The strong representational learning ability of weight regularized deep sparse autoencoders is utilized to encode the semantic relevance of normal and abrupt traffic.
* A strong decision space was produced by utilizing multiple base-classifiers with varying hypothesis spaces to reduce the inductive bias of each hypothesis space.
* A stacking-based deep heterogeneous ensemble was developed that uses a deep neural network as a meta-learner to systematically evaluate a hybrid space of different decision spaces along with multiple feature spaces (a simplified sketch is given after this list).
* The proposed INFUSE has been analysed by evaluating several existing techniques and different ensembles, and the performance comparison shows a significant increase in detection rate and accuracy.
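To make the proposed pipeline concrete, the following heavily simplified sketch illustrates the stacking idea: a sparse autoencoder supplies an enriched feature space, base-classifier probabilities supply the decision space, and their fusion forms the hybrid input of the deep Meta-Learner. All layer sizes, the sparsity mechanism, and the choice of base classifiers are illustrative assumptions, not the exact INFUSE configuration described in Section 4.

```python
import numpy as np
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Autoencoder whose code layer is pushed toward sparsity during training
    (e.g., an L1 penalty on the code added to the reconstruction loss)."""
    def __init__(self, d_in, d_code=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, d_code), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(d_code, 64), nn.ReLU(),
                                 nn.Linear(64, d_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z               # reconstruction and sparse code

def hybrid_space(X, base_models, ae):
    """Fuse decision scores of fitted base classifiers (any estimator exposing
    predict_proba) with autoencoder features; the fused matrix is the input
    of the deep neural-network meta-learner."""
    decisions = np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])
    with torch.no_grad():
        _, z = ae(torch.tensor(X, dtype=torch.float32))
    return np.hstack([decisions, z.numpy()])
```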
The paper is structured as follows: Section 2 analyses the relevant literature on intrusion detection. The dataset is described in Section 3, and the methodology of the proposed ensemble is presented in Section 4. Section 5 discusses the results in detail, and Section 6 concludes the study.
## 2 Background Literature
In the past, several techniques have been developed for the analysis of network traffic, which involve either binary classification of intrusions in network traffic or multiclass segregation of attacks into specific categories. To achieve this, different approaches have been explored, including classical ML, deep learning, and ensemble learning techniques, utilizing publicly available datasets [16][17, 18, 19]. A brief overview of the existing techniques is provided below.
Initially, classical ML techniques were used to detect attacked samples in network traffic by focusing on the extraction or selection of informative representations from the data. Aslahi-Shahri et al. [20] developed a hybrid model that used genetic algorithms to select an optimal feature subset from the NSL-KDD dataset, followed by SVM classification. The technique achieved a 0.97 F-score by selecting 10 features out of 45. In another study, Liu et al. [21] used both unsupervised and supervised learning to address network intrusion. They first identified similar groups within the dataset using k-means clustering, and then used a Random Forest (RF) algorithm to classify these clusters as normal or attacked traffic. They followed this up with a multi-class classification of attacked samples using a deep CNN and LSTM. This approach achieved accuracies of 85% and 99% for the NSL-KDD and CIC-IDS2017 datasets, respectively. Another study [22] used the Naive Bayes algorithm to extract new feature embeddings from the NSL-KDD dataset and assigned the transformed feature space to an SVM for maximum separability. Finally, a study by [23] proposed an evolutionary neural network for intrusion detection, using a multiverse optimizer to select optimal parameters during training. One major limitation of classical ML techniques is their reliance on separate feature extraction techniques to deal with dataset variance effectively, whereas a highly efficient intrusion detection system requires immediate detection of any attack.
Many researchers have utilized deep neural networks for network traffic analysis due to their strong representation mapping capacity. Neural network-based intrusion detection systems typically consist of two stages: in the first stage, deep neural networks are used for feature space encoding, and in the second stage, classification of attacked samples is performed [24]. Qureshi et al. [25] utilized the idea of self-taught learning and autoencoders to classify records as normal or malicious. In this study, a deep sparse autoencoder was used for feature extraction and classification phases. Firstly, a pre-trained autoencoder for the regression problem was used to derive a new feature set which was then concatenated with the original feature set. After this, an autoencoder without a decoder was trained end-to-end on the augmented feature space for classification. Al-Qataf et al. [26] proposed a hybrid learning scheme that utilized the idea of self-taught learning for pretraining of an autoencoder. They extracted a new low-dimensional feature space from the NSL-KDD dataset and assigned it to an SVM for classification.
In another study, the performance was improved by combining four different CNN architectures [27]. The feature space of the NSL-KDD dataset was transformed into an image to take advantage of the learning capacity of CNNs. The four CNN architectures were merged before the fully connected layer and trained with a single loss function. The final decision was based on a 256-dimensional feature space, but the use of multiple CNNs increased the complexity. Naseer et al. [28] investigated various techniques, including deep CNN, LSTM, and AE, to differentiate between attacked samples and normal data flow. However, due to the significant difference between the attack samples in the training and testing sets, these techniques exhibit a low detection rate for attacked samples.
Ensemble learning has shown promise in improving individual classifiers with respect to noise sensitivity, domain shift, scalability, and the inability to detect diverse attacks. Gao et al. developed an adaptive learning-based ensemble to overcome the complexity of intrusion datasets [29]. Five different classifiers, including decision tree, RF, kNN, and DNN, were used as base learners in the ensemble. Majority voting was used to make a decision, with a weight assigned to each classifier's vote. This ensemble technique was used for intrusion detection on NSL-KDD Test\({}^{+}\). Similarly, another study suggested an ensemble of \(n\) modified AdaBoost algorithms, in which cost-sensitive base-classifiers were developed by optimizing the AdaBoost algorithm using the area under the curve to address the class imbalance problem. Salo et al. selected significant features by applying PCA and Information Gain [30]. They enhanced the learning capacity by developing an ensemble of SVM, instance-based learning algorithms, and a multilayer perceptron.
Zhang et al. [31] proposed using multiple feature fusion and homogeneous stacking ensemble mechanisms to detect irregularities in network traffic. A diverse set of features was generated to train \(n\) homogeneous base classifiers. The predictions of the base classifiers were combined using RF as a meta-classifier to draw a final decision. The idea of heterogeneous ensembles has been proposed to enhance learning capacity by addressing the shortcomings of homogeneous ensembles. Zhou et al. developed a voting-based heterogeneous ensemble. Initially, they used the hierarchical feature extraction algorithm CFS-BA to boost the feature representation at the preprocessing step. The proposed approach exploited shallow algorithms such as Forest Penalizing Attributes, C4.5, and RF on the extracted representation, and an average voting strategy was used to combine the base classifiers' probabilities [32].
The main issue with the existing approaches is that they ignore the non-stationary nature of attacks and use accuracy as the performance metric, which can underestimate the detection rate of the minority class. In real network traffic, a wide range of attacks exists, and the proportion of attacks is often imbalanced with respect to normal traffic. Therefore, it is essential to take into account the issue of dataset shift and prioritize the detection ability of the ML model for highly imbalanced malicious attack data.
## 3 Dataset Details
The NSL-KDD dataset was utilized in this study for the analysis of network traffic intrusion. It is considered a benchmark dataset for the development of Network Intrusion Detection Systems [33]. The dataset's high level of difficulty, class imbalance, and presence of unique attacks in the test set make it suitable for analyzing the effectiveness of intrusion detection systems against the dataset shift problem. The presence of unique attacks in the test set motivates the development of a model that can handle emerging attacks and the challenge of distribution shift. The dataset's records are classified as normal or attacked, with the attacked samples falling into five main categories commonly found in network traffic. Table 1 describes the dataset's feature categories, while Table 2 lists the various types of attacks reported in the dataset. The training set includes normal data and 22 attack types, while the test dataset contains seven new attack types in addition to those present in the training data.
The distinct features of NSL-KDD dataset are as follows:
* There is no repetition of records in the train and test sets, contrary to other datasets. Sample recurrence in the test set makes the performance evaluation criterion unrealistic; in NSL-KDD, each sample contributes equally, which removes this bias.
* A difficulty level was assigned to both the train and test sets. The test set was made stringent by adding samples from 17 unique attack types to Test\({}^{+}\) and Test-21, beyond the attacks reported in Train\({}^{+}\). The additional, more stringent Test-21 set was generated from the test examples that were not correctly classified by all of the 21 reference classifiers.
* The train and test sets are highly imbalanced, which increases the difficulty level. The frequency of normal and attacked traffic in the Train, Test\({}^{+}\), and Test-21 sets is shown in Figure 1.
### Statistical Evaluation of Dataset Shift
In this work, the Kolmogorov-Smirnov test is applied to analyse the dataset shift between the train and test sets. It is a non-parametric test that assesses whether two datasets are drawn from the same probability distribution [34]. The hypothesis test comparing the test distribution with the train distribution is formulated as:
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Category** & **Description** & **Data type** \\ \hline
Basic Features & Features representing a TCP/IP connection, without considering the payload & Symbolic \& Continuous \\ \hline
Content Features & Features required to access the payload of a TCP packet and detect suspicious behaviour within the payload & - \\ \hline
Time-based Traffic Features & Network features analysed within a 2s temporal window, providing statistical information & - \\ \hline
Host-based Traffic Features & Features used to analyse attacks within intervals longer than 2s & - \\ \hline
\end{tabular}
\end{table}
Table 1: Feature representation of the NSL-KDD dataset.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Category** & **Attacks** \\ \hline
**Probing** & IP-sweep, Port-sweep, NMAP, Satan \\ \hline
**Root to Local** & FTP-Write, Guess-password, IMAP, Multi-HOP, Phf, SPY, Warezclient, Warezmaster \\ \hline
**User to Root** & Buffer Overflow, Load-module, Perl, Root-kit \\ \hline
**Denial of Service** & Back, Land, Neptune, Ping-of-Death, Smurf, Teardrop \\ \hline
**Unique attacks in Test\({}^{+}\) and Test-21** & e.g., Mscan, Processtable, Snmpguess, Saint, Apache2, Httptunnel, Mailbomb \\ \hline
\end{tabular}
\end{table}
Table 2: Types of attacks in the Train, Test\({}^{+}\), and Test-21 sets.
Figure 1: The proportion of attacked samples in the train and test sets (Test\({}^{+}\) and Test-21).
H\({}_{0}\): the train and test sets are sampled from the same data distribution
H\({}_{1}\): the train and test sets have different data distributions
The Kolmogorov-Smirnov test computes the test statistic D as the maximum absolute difference between the cumulative distribution function of the reference dataset (train data) and the empirical distribution function of the test set:

\[D_{n,m}=\max_{x}\mid F(x)-E(x)\mid \tag{1}\]

In Eq. (1), \(n\) is the sample size of the cumulative distribution function \(F(x)\) of the train set, whereas \(m\) is the sample size of the empirical distribution function \(E(x)\) of the test set.
The test statistic D = 0.0128 indicates a significant difference between the two distributions (p-value = 6.82e-280 \(<\) 0.05).
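For reproducibility, the following minimal sketch shows how such a two-sample Kolmogorov-Smirnov test can be computed with SciPy; the synthetic arrays merely stand in for an NSL-KDD feature column, so the reported values will differ from the figures above.

```python
# Minimal sketch: two-sample KS test on a stand-in feature column.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
x_train = rng.normal(loc=0.0, scale=1.0, size=5000)  # stand-in train feature
x_test = rng.normal(loc=0.3, scale=1.0, size=5000)   # shifted stand-in test feature

stat, p_value = ks_2samp(x_train, x_test)  # D = max |F(x) - E(x)|, Eq. (1)
print(f"D = {stat:.4f}, p = {p_value:.3e}")
if p_value < 0.05:
    print("Reject H0: the samples appear to come from different distributions")
```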
### Feature space based Evaluation of Dataset Shift
The feature spaces of the train set \(p_{train}(X)\) and the test set \(p_{test}(X)\) are visualized using t-SNE (Figure 2) to qualitatively analyse the shift in distribution. Figure 2 shows that the distribution of the test set not only differs from that of the train set, \(p_{train}(x\mid y)\neq p_{test}(x\mid y)\), but also that the test set contains unique attacks, \(p_{train}(y\mid x)\neq p_{test}(y\mid x)\), that are not part of the training set.
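A joint t-SNE projection of this kind can be obtained as in the sketch below; the stand-in data and the t-SNE settings (perplexity, seed) are assumptions, not the exact configuration used for Figure 2.

```python
# Minimal sketch: project train and test features jointly with t-SNE.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 121))          # stand-in preprocessed train features
X_test = rng.normal(loc=0.5, size=(150, 121))  # shifted stand-in test features

X_all = np.vstack([X_train, X_test])
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_all)
emb_train, emb_test = emb[: len(X_train)], emb[len(X_train):]
# Scatter-plotting emb_train and emb_test in two colors reveals the shift.
```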
### Dataset Representation and Preprocessing
The NSL-KDD dataset comprises 41 network features that are measured on continuous and ordinal scales. The feature categories are listed in Table 1, and the symbolic features are represented by indicator variables in this study, which increases the feature dimension from 41 to 121.
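The indicator-variable expansion can be done, for example, with pandas; in the sketch below, protocol_type, service, and flag are the symbolic NSL-KDD features, and the toy values are illustrative only.

```python
# Minimal sketch: indicator encoding of the symbolic NSL-KDD features.
import pandas as pd

df = pd.DataFrame({
    "protocol_type": ["tcp", "udp", "icmp"],
    "service": ["http", "ftp", "smtp"],
    "flag": ["SF", "S0", "REJ"],
    "src_bytes": [181, 239, 235],  # one of the continuous features
})
encoded = pd.get_dummies(df, columns=["protocol_type", "service", "flag"])
print(encoded.shape)  # on the full dataset, 41 columns expand to 121
```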
Figure 2: Visualization of the first two components of the feature space projected via t-SNE.

The NSL-KDD dataset has a highly skewed feature distribution, leading to biased variable contributions and reduced classifier performance. To address this issue, we applied data normalization to scale the Train, Test\({}^{+}\), and Test-21 sets while preserving their original distributions.
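The sketch below illustrates one plausible realization of this step; the specific scaler (min-max) is an assumption, and fitting on the train set only avoids information leakage into the test sets.

```python
# Minimal sketch: fit the scaler on the train set, reuse it on the test sets.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X_train = rng.exponential(scale=2.0, size=(100, 5))  # skewed stand-in features
X_test_plus = rng.exponential(scale=2.0, size=(40, 5))

scaler = MinMaxScaler().fit(X_train)       # statistics come from the train set only
X_train_s = scaler.transform(X_train)      # scaled; distribution shape is preserved
X_test_plus_s = scaler.transform(X_test_plus)
```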
### Dataset Division
In this study, we used a stratified hold-out strategy to randomly split the training dataset into 60% and 40% partitions, for training the base-classifiers and the Meta-Learner, respectively. We selected an optimal set of hyperparameters for the base-classifiers by performing 5-fold cross-validation on the 60% training partition. The Meta-Learner is trained on 80% of the 40% split, while the remaining 20% is utilized for validation and hyperparameter selection. Finally, we assess the generalization performance of the proposed ensemble on the separately provided NSL-KDD test sets (Test\({}^{+}\) and Test-21).
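A minimal sketch of this splitting scheme, with synthetic stand-ins for the NSL-KDD training data, is given below.

```python
# Minimal sketch: stratified 60/40 split, then 80/20 within the 40% partition.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 121))       # stand-in training features
y = rng.integers(0, 2, size=1000)      # stand-in normal/attack labels

X_base, X_meta, y_base, y_meta = train_test_split(
    X, y, test_size=0.40, stratify=y, random_state=0)                 # 60% / 40%
X_meta_tr, X_meta_val, y_meta_tr, y_meta_val = train_test_split(
    X_meta, y_meta, test_size=0.20, stratify=y_meta, random_state=0)  # 80% / 20%
```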
## 4 The Proposed INformation FUsion and Stacking Ensemble for Network Intrusion Detection (INFUSE)
The network data is heterogeneous in nature and vulnerable to various types of attacks, posing a challenge to developing a robust NIDS that can handle dataset shift problems. To address this challenge, we propose an INformation FUsion and Stacking Ensemble called "INFUSE". The proposed ensemble works sequentially in two phases: first, developing an information-rich feature space, and second, developing a meta-learner to draw a final decision from the hybrid feature space. Figure 3 provides a detailed workflow of the proposed INFUSE ensemble.

Figure 3: A detailed layout of the proposed INFUSE.
### Motivation
The concept of information fusion is utilized at both the feature and decision levels to tackle the dataset shift problem. Additionally, the idea of a stacking ensemble is used to enhance the generalization of the NIDS. A machine learning model has an inherent bias towards a particular data distribution, which can affect its performance when dealing with dataset shift. Ensemble-based classifiers address this limitation by introducing diversity in the sample space, feature space, or decision space, which can overcome the bias and variance associated with individual learners [35]. Therefore, the use of ensembles in this study is expected to improve accuracy and robustness against distribution shift in the test set compared to a single learner.
The learning potential of an ensemble is primarily defined by diversity and the combination rule [36]. In this study, the proposed ensemble "INFUSE" induces diversity by combining the predictive power of multiple classifiers and incorporating diversity in feature representation to handle diverse attacks. Meanwhile, an effective combination rule is generated via a deep neural network. The proposed technique works based on the following ideas:
* The idea of information fusion is exploited to improve the diversity and develop an information-rich feature space that can address the challenge of distribution shift.
* A set of supervised base-classifiers varying in their hypothesis spaces is used. Thus, the instances that cannot be tackled by a single learner can be corrected by other classifiers.
* The representational learning capacity of artificial NN is exploited using unsupervised weight regularized deep sparse autoencoders to generate effective feature representations. In this regard, a new attack instance that lies outside the distribution can be modeled based on semantic relevance.
* A stacking based heterogeneous ensemble is developed to integrate multiple information spaces. A deep artificial NN based meta-learner is used that makes a decision by analysing the information-rich feature-space collectively in an intelligent way.
### Diversity Improvement using Decision Spaces
In this work, we have used five heterogeneous base classifiers \(H=\{H_{1},H_{2},H_{3},H_{4},H_{5}\}\) and combined their decision scores \(Z=\{Z_{1},Z_{2},Z_{3},Z_{4},Z_{5}\}\) to generate a pool of diverse hypotheses varying in their learning biases. The diversity in the hypothesis spaces of the base classifiers is discussed below, and a combined sketch is given at the end of this section. The parameters of the base classifiers are given in Table 3.
#### 4.2.1 Base Classifier 1
The first base classifier was selected based on the property of structural risk minimization. In this regard, SVM is used to define an optimal decision boundary [37] by incorporating a geometric interpretation of the hyper-plane during training. The mathematical representation of SVM is shown in Eqs. (2) and (3).
\[\mathbf{w}^{T}x+b=0 \tag{2}\]
\[\min_{\mathbf{w},\zeta_{i}}\;C\sum_{i=1}^{N}\zeta_{i}+\frac{1}{2}\left\|\mathbf{w}\right\|^{2} \tag{3}\]
In Eq. (2), an input instance is represented by \(x\), whereas \(\mathbf{w}^{T}\) is a weight vector and \(b\) is a bias. Margin violations made by the learner are represented by the slack variables \(\zeta_{i}\), and \(C\) is a hyper-parameter that establishes a tradeoff between generalization and the empirical error on the training set.
#### 4.2.2 Base Classifier 2
The second base classifier, kNN, is nonparametric in nature and performs instance-wise classification. It locally approximates the target function by simply storing the training instances instead of learning an explicit definition of the target function. This is advantageous for classifying new attacks, as it assigns a class to a new instance based on the smallest distance to its \(k\) nearest neighbours, as shown in Eq. (4) [38]. Eq. (4) shows the Minkowski distance (Euclidean for \(p=2\)) computed between a training instance \(X\) and a test instance \(j\).
\[D(X,j)=\left(\sum_{i=1}^{n}\left|x_{i}-j_{i}\right|^{p}\right)^{1/p} \tag{4}\]
#### 4.2.3 Base Classifier 3
A rule-based approach is used in the third base classifier [29] to appropriately deal with categorical and nominal data. In intrusion detection, most features are binary in nature and can be well classified using a rule-based approach. For this, a decision tree is implemented using entropy as the splitting criterion, as mathematically represented in Eq. (5).
\[H(L)=-\sum_{i=1}^{k}p(L=l_{i})\log_{2}p(L=l_{i}) \tag{5}\]
#### 4.2.4 Base Classifier 4
Furthermore, the idea of ensemble learning is employed in base classifier 4 using Random Forest (RF). RF improves robustness towards attacks by manipulating both the data distribution and the features during training. RF draws the final decision by taking an average of all the predictions made by the forest of trees, as mathematically expressed in Eq. (6). RF reduces the variance and sensitivity associated with individual trees [39].
\[\hat{R}=\frac{1}{B}\sum_{b=1}^{B}\hat{f}_{b}(x^{*}) \tag{6}\]
In Eq. (6), \(B\) is the number of decision trees that are considered during prediction, and \(b\) indexes the current decision tree. The data subset is denoted by \(x^{*}\), \(\hat{f}_{b}(\cdot)\) is the function that fits the \(b\)-th decision tree on a feature subset, and the final output is denoted by \(\hat{R}\).
#### 4.2.5 Base Classifier 5
AdaBoost reduces the bias associated with each classifier using a boosting strategy [40]. It performs serial training of a set of weak base-learners over \(n\) iterations, with weighted sampling performed during training. Thus, each classifier focuses on correctly classifying the samples misclassified in the previous iteration. The final decision is drawn by collecting the decisions of all the learners and combining them through a majority vote rule.
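The following sketch assembles the five base classifiers with scikit-learn and stacks their decision scores; all hyperparameters shown are assumptions for illustration, not the tuned values of Table 3.

```python
# Minimal sketch: the five heterogeneous base classifiers H1..H5 and their
# stacked decision scores Z, computed on the held-out meta split.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

rng = np.random.default_rng(0)
X_base, y_base = rng.normal(size=(400, 121)), rng.integers(0, 2, size=400)
X_meta = rng.normal(size=(200, 121))  # the 40% split scored by the base learners

classifiers = [
    SVC(C=1.0, probability=True),                 # H1: structural risk minimization
    KNeighborsClassifier(n_neighbors=5),          # H2: instance-based, Eq. (4)
    DecisionTreeClassifier(criterion="entropy"),  # H3: rule-based, Eq. (5)
    RandomForestClassifier(n_estimators=100),     # H4: bagging, Eq. (6)
    AdaBoostClassifier(n_estimators=50),          # H5: boosting
]
Z = np.column_stack(
    [clf.fit(X_base, y_base).predict_proba(X_meta)[:, 1] for clf in classifiers])
print(Z.shape)  # (200, 5): one decision score per base classifier
```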
### Diversity Improvement using Feature Space
In this work, we improved the feature space by training two AEs in an unsupervised manner to learn the semantics of the data and generate a new representation reflecting the underlying distribution of network traffic. The newly generated latent representation \(L=\{L_{1},L_{2}\}\) is combined with the original feature space \(X\) to provide more discriminative features \(F=f_{c}(L_{1}\|\,L_{2}\,\|\,X)\).
#### 4.3.1 Weight Regularized Deep Sparse AE
We exploited two weight-regularized deep sparse AEs, with depths of 8 and 10 layers, to capture the latent representation of attacks. The description of the proposed AEs is given in Table 4 and Figure 4. The AE is an effective representation-learning algorithm based on unsupervised NNs. The learning principle of the AE is based on the identity function and follows the encoder-decoder paradigm [41]. The mathematical representation of the AE is shown in Eqs. (7) and (8).
\[\hat{L}=f_{enc}(X)=\sigma(\mathbf{w}_{n}(\mathbf{w}_{n-1}\ldots(\mathbf{w}_{1}X+\mathbf{b}_{1})+\mathbf{b}_{n-1})+\mathbf{b}_{n}) \tag{7}\]
\[X^{\prime}=f_{dec}(\hat{L})=\sigma(\mathbf{w}^{\prime}_{n}(\mathbf{w}^{\prime}_{n-1}\ldots(\mathbf{w}^{\prime}_{1}\hat{L}+\mathbf{b}^{\prime}_{1})+\mathbf{b}^{\prime}_{n-1})+\mathbf{b}^{\prime}_{n}) \tag{8}\]
In Eq. (7), \(X\) is the original input, \(\mathbf{W}=\{\mathbf{w}_{1},\ldots,\mathbf{w}_{n}\}\) represents the weights, \(\mathbf{B}=\{\mathbf{b}_{1},\ldots,\mathbf{b}_{n}\}\) the biases, and \(\sigma\) the activation function. The AE maps \(X\) to a latent representation \(\hat{L}\) using the encoding function \(f_{enc}(\cdot)\). In Eq. (8), the reconstructed input is represented by \(X^{\prime}\), which is generated from the latent representation \(\hat{L}\) using the decoding function \(f_{dec}(\cdot)\).
In this work, we enhance the representation-learning ability of the AE by making it deep and adding a sparsity penalty. The deep AE maps the original input to a useful representation by hierarchically learning multi-level feature representations, where each hierarchy level corresponds to a different level of abstraction. L2 weight regularization was applied as a sparsity penalty in the mean-squared-error loss function, which drives the AE to learn an effective low-dimensional feature representation of the NSL-KDD dataset.
\[\mathcal{L}_{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left\|X_{i}-X^{\prime}_{i}\right\|^{2}+\lambda\left\|\mathbf{w}\right\|^{2} \tag{9}\]
The regularized loss of the AE is shown in Eq. (9). L2 weight regularization penalizes the model in proportion to the magnitude of its weights. Thus, it pushes the weights towards zero and forces the AE to learn a reduced feature space by discovering the structure hidden in the high-dimensional feature space. A latent representation is generated by learning the semantic relevance between features corresponding to the same group.
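A minimal PyTorch sketch of such a weight-regularized deep sparse AE is shown below; the layer widths are assumptions, while the optimizer, learning rate, and weight decay follow Table 4, with the L2 penalty of Eq. (9) realized through the optimizer's weight_decay.

```python
# Minimal sketch: deep sparse AE with L2 weight regularization (Eq. 9).
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, d_in=121, widths=(96, 64, 32)):
        super().__init__()
        enc, last = [], d_in
        for w in widths:                       # encoder: 121 -> 96 -> 64 -> 32
            enc += [nn.Linear(last, w), nn.ReLU()]
            last = w
        dec = []
        for w in (64, 96, d_in):               # decoder mirrors the encoder
            dec += [nn.Linear(last, w), nn.ReLU()]
            last = w
        dec[-1] = nn.Identity()                # linear reconstruction at the output
        self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

    def forward(self, x):
        latent = self.encoder(x)               # L-hat, the latent code (Eq. 7)
        return self.decoder(latent), latent    # X', the reconstruction (Eq. 8)

ae = SparseAE()
opt = torch.optim.Adam(ae.parameters(), lr=8e-5, weight_decay=1e-5)  # Table 4
x = torch.randn(64, 121)                       # stand-in mini-batch
recon, latent = ae(x)
loss = nn.functional.mse_loss(recon, x)        # MSE term; weight_decay adds ||w||^2
opt.zero_grad(); loss.backward(); opt.step()
```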
### The proposed Meta-Learner for Information Fusion Analysis
A hybrid feature space is developed by fusing multiple feature representations with the decision spaces to generate a strong representative context. A 6-layer deep, fully connected NN is used as a Meta-Learner (Figure 5) to intelligently approximate the target function from the hybrid, information-rich feature space. Multiple levels of nonlinearity make deep NNs efficient at learning complex non-linear representations.
Figure 4: Architectural overview of the proposed weight regularized AE.
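The sketch below illustrates the fusion and the Meta-Learner; the hidden-layer widths are assumptions, while the depth (6 fully connected layers) and the BCE output follow Table 4.

```python
# Minimal sketch: hybrid space F = [L1 || L2 || X || Z] and the Meta-Learner.
import torch
import torch.nn as nn

L1, L2 = torch.randn(64, 32), torch.randn(64, 32)  # latent codes of the two AEs
X, Z = torch.randn(64, 121), torch.randn(64, 5)    # raw features + decision scores
F = torch.cat([L1, L2, X, Z], dim=1)               # information-rich hybrid space

meta = nn.Sequential(                              # 6 fully connected layers
    nn.Linear(F.shape[1], 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1), nn.Sigmoid(),                 # BCELoss target, as in Table 4
)
p_attack = meta(F)                                 # probability of the attack class
```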
### Parameter Setting
The parameter setting of the Base and Meta-Learners is performed on the validation dataset that is separated from the training set. Tables 3 and 4 show the parameters for which the proposed models give optimal performance.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Model** & **Layers** & **Learning rate** & **Optimizer** & **Loss** & **Weight decay** \\ \hline
INFUSE (Meta-Learner) & 6 layers & 0.00008 & SGD & BCELoss & - \\ \hline
Autoencoder 1 & 10 layers & 0.00008 & Adam & MSELoss & 1e-5 \\ \hline
Autoencoder 2 & 8 layers & 0.0001 & Adam & MSELoss & 1e-5 \\ \hline
\end{tabular}
\end{table}
Table 4: Parameter settings of the proposed weight-regularized autoencoders and Meta-Learner.
Figure 5: Architectural overview of the proposed deep NN based Meta-Learner.
## 5 Results
The significance of the proposed ensemble INFUSE against dataset shift has been analyzed on the Test\({}^{+}\) and Test-21 sets of the NSL-KDD dataset. The performance has been evaluated using several performance measures and compared with other techniques.
### Evaluation Measures
Multiple evaluation metrics, including accuracy (Acc.), F-score, detection rate (Recall), false negative rate (FNR), ROC, and PR curves, are considered to estimate the performance of the proposed ensemble INFUSE against dataset shift. Detection rate is used to evaluate the model's capacity to recognize attacks, and Accuracy and F-score are also reported. The statistical significance of the proposed ensemble is assessed via the McNemar test, and the standard error is computed at a 95% confidence interval using z-statistics. True Positives (TPs) are the samples correctly recognized as attacks, and True Negatives (TNs) are the number of normal instances correctly classified. Accuracy defines the percentage of correct predictions regardless of the normal and attacked class. The performance measures are defined as follows.
\[\textit{Detection Rate}\ (\textit{Recall})=\frac{TP}{TP+FN} \tag{10}\]

\[\textit{Specificity}=\frac{TN}{TN+FP} \tag{11}\]

\[\textit{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN} \tag{12}\]

\[F\textit{-Score}=2\cdot\frac{\textit{Precision}\times\textit{Recall}}{\textit{Precision}+\textit{Recall}} \tag{13}\]

\[\textit{FNR}=\frac{FN}{TP+FN} \tag{14}\]
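A minimal sketch computing these measures from confusion-matrix counts is given below; the counts are illustrative only.

```python
# Minimal sketch: the measures of Eqs. (10)-(14) from confusion-matrix counts.
def ids_metrics(tp, tn, fp, fn):
    recall = tp / (tp + fn)                       # detection rate, Eq. (10)
    specificity = tn / (tn + fp)                  # Eq. (11)
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # Eq. (12)
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)  # Eq. (13)
    fnr = fn / (tp + fn)                          # Eq. (14)
    return recall, specificity, accuracy, f_score, fnr

print(ids_metrics(tp=900, tn=850, fp=50, fn=100))  # illustrative counts only
```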
### Performance Estimation of the proposed Ensemble (INFUSE)
Initially, the performance of the proposed INFUSE was analyzed by comparing it to standard baseline models, namely an MLP (2 layers deep) and a deep fully connected neural network (6 layers deep). Table 5 shows the F-score, accuracy, and recall, indicating that the proposed ensemble INFUSE outperforms the baseline models on both datasets. The remarkable performance of INFUSE on the Test-21 dataset, which has a high distribution shift, suggests that the proposed technique is effective in addressing the dataset shift problem.
### Performance Comparison of INFUSE with the Base-Classifiers
The performance of the base-classifiers is presented in Table 6, where SVM has the highest accuracy, 82.80% and 67.41% on Test\({}^{+}\) and Test-21, respectively. The F-scores of the base-classifiers range from 0.69 to 0.83 on Test\({}^{+}\) and from 0.55 to 0.76 on Test-21. While the base-classifiers show reasonable performance on KDD Test\({}^{+}\), their performance deteriorates on KDD Test-21, a stringent dataset with a high class imbalance of unseen attacks that are difficult to classify. The empirical analysis in Table 6 suggests that the proposed ensemble INFUSE outperforms the base-classifiers and significantly addresses the problem of dataset shift.
### Statistical Analysis
McNemar's test is applied to statistically evaluate the significance of the proposed ensemble INFUSE [42]. The hypothesis test is formulated as:
H\({}_{0}\): the proposed ensemble is equivalent to the base-classifier in performance
H\({}_{1}\): the proposed ensemble performs better than the base-classifier
The performance is assessed by comparison with the best-performing base-classifier using Eq. (15).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
 & \multicolumn{3}{c|}{**NSL-KDD Test\({}^{+}\)**} & \multicolumn{3}{c|}{**NSL-KDD Test-21**} \\ \cline{2-7}
 & **F-Score\(\pm\)S.E** & **Acc. (\%)** & **Recall** & **F-Score\(\pm\)S.E** & **Acc. (\%)** & **Recall** \\ \hline
**Proposed INFUSE** & **0.91\(\pm\)0.003** & **91.64** & **0.94** & **0.91\(\pm\)0.005** & **85.6** & **0.87** \\ \hline
Base classifier-1 (SVM) & 0.83\(\pm\)0.005 & 82.80 & 0.73 & 0.76\(\pm\)0.0076 & 67.41 & 0.64 \\ \hline
Base classifier-2 (kNN) & 0.76\(\pm\)0.0055 & 76.76 & 0.64 & 0.66\(\pm\)0.0085 & 55.83 & 0.53 \\ \hline
Base classifier-3 (Decision Tree) & 0.77\(\pm\)0.0055 & 78.48 & 0.64 & 0.68\(\pm\)0.0083 & 59.16 & 0.53 \\ \hline
Base classifier-4 (Random Forest) & 0.69\(\pm\)0.006 & 72.65 & 0.53 & 0.55\(\pm\)0.009 & 48.30 & 0.39 \\ \hline
Base classifier-5 (AdaBoost) & 0.76\(\pm\)0.0055 & 76.56 & 0.64 & 0.66\(\pm\)0.0085 & 55.49 & 0.53 \\ \hline
\end{tabular}
\end{table}
Table 6: Performance comparison of the proposed INFUSE with the base classifiers.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
\multirow{2}{*}{**Technique**} & \multicolumn{3}{c|}{**NSL-KDD Test\({}^{+}\)**} & \multicolumn{3}{c|}{**NSL-KDD Test-21**} \\ \cline{2-7}
 & **F-Score\(\pm\)S.E** & **Acc. (\%)** & **Recall** & **F-Score\(\pm\)S.E** & **Acc. (\%)** & **Recall** \\ \hline
**Proposed INFUSE** & **0.91\(\pm\)0.003** & **91.64** & **0.94** & **0.91\(\pm\)0.005** & **85.6** & **0.87** \\ \hline
MLP & 0.84\(\pm\)0.005 & 83.05 & 0.77 & 0.79\(\pm\)0.007 & 69.18 & 0.69 \\ \hline
Deep NN & 0.87\(\pm\)0.004 & 84.70 & 0.89 & 0.86\(\pm\)0.006 & 76.90 & 0.84 \\ \hline
\end{tabular}
\end{table}
Table 5: Performance evaluation of the proposed ensemble INFUSE and baseline classifiers.
\[\chi^{2}=\frac{\left(b-c\right)^{2}}{b+c} \tag{15}\]
In the above equation, \(b\) is the number of samples misclassified by the base-classifier but correctly classified by the ensemble, whereas \(c\) is the number of samples misclassified by the ensemble but correctly classified by the base-classifier.
The test statistics suggest a statistically significant difference in the performance of the proposed ensemble compared to the best-performing base-classifier, with statistic \(=566.7\), p-value \(=2.88\)e-125 \(<0.005\) for Test\({}^{+}\), and statistic \(=753.5\), p-value \(=6.803\)e-166 \(<0.005\) for Test-21.
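A minimal sketch of this test is shown below; b and c are illustrative discordant counts, and the p-value is obtained from a chi-squared distribution with one degree of freedom.

```python
# Minimal sketch: the McNemar statistic of Eq. (15) with a chi-squared p-value.
from scipy.stats import chi2

def mcnemar_stat(b, c):
    return (b - c) ** 2 / (b + c)

stat = mcnemar_stat(b=1200, c=300)   # illustrative discordant counts
p_value = chi2.sf(stat, df=1)        # survival function of chi^2 with 1 dof
print(f"chi2 = {stat:.1f}, p = {p_value:.3e}")
```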
### ROC and PR Curve based Analysis
The ROC curve graphically illustrates the trade-off between the true-positive rate and the false-positive rate at multiple decision thresholds, while the PR curve, suited to imbalanced datasets, shows the proportion of correctly classified examples among the positively predicted samples at several thresholds. As shown in Figure 6, the performance of the proposed ensemble improves on unseen data. Figure 6 also provides a graphical illustration of the AUC-ROC and AUC-PR on Test\({}^{+}\) and Test-21.

Figure 6: PR and ROC curve based analysis of the proposed INFUSE and base classifiers.
### Ablation Study
We also conducted a thorough analysis to assess the importance of information fusion in the proposed ensemble by excluding each of the feature spaces one by one. The results are presented in Tables 7 and 8, which indicate that the proposed INFUSE shows better performance on both datasets than any individual feature space. The results show that the proposed technique gains a considerable increase in F-score and accuracy on Test-21 and achieves a good trade-off between recall and specificity.
### Significance Analysis of the Proposed Ensemble INFUSE
The significance of the proposed ensemble against dataset shift was evaluated by assigning the information-fused hybrid feature space to SVM and XGBoost as Meta-Learners. Additionally, a performance comparison was made with different classical ensemble techniques, and the results are shown in Table 9. The empirical evaluation demonstrated that the proposed deep learning-based Meta-Learner significantly outperformed other Meta-Learners as well as max-weighted, average, and majority voting-based ensemble techniques.
### Detection Rate Analysis and Feature space Visualization
In IDS, detecting each type of attack is crucial for system security. Figure 7 shows the detection rate and FNR for the proposed ensemble and the other techniques. As the type and nature of attacks change every day, an IDS should be robust against new attack types. Therefore, the detection rate of the proposed ensemble INFUSE was analyzed for all the unique attack profiles included in the test sets. In the test set, there are 7 attack profiles - mscan, processtable, snmpguess, saint, apache2, httptunnel, and mailbomb - that are unique attack variants not seen by the classifier during training. The results, shown in Figure 8, indicate that the proposed ensemble INFUSE achieves a considerable detection rate for these new attacks.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline
\multirow{2}{*}{**Technique**} & \multirow{2}{*}{**Type**} & \multicolumn{2}{c|}{**NSL-KDD Test\({}^{+}\)**} & \multicolumn{2}{c|}{**NSL-KDD Test-21**} \\ \cline{3-6}
 & & **F-Score\(\pm\)S.E** & **Acc. (\%)** & **F-Score\(\pm\)S.E** & **Acc. (\%)** \\ \hline
**Proposed INFUSE** & & **0.91\(\pm\)0.003** & **91.64** & **0.91\(\pm\)0.005** & **85.6** \\ \hline
Proposed hybrid space with SVM Meta-Learner & Stacking ensemble & 0.78\(\pm\)0.0055 & 78.80 & 0.69\(\pm\)0.0083 & 59.76 \\ \hline
Proposed hybrid space with XGBoost Meta-Learner & Stacking ensemble & 0.76\(\pm\)0.0055 & 77.69 & 0.66\(\pm\)0.0085 & 57.59 \\ \hline
Max-weighted voting & Classical ensemble & 0.79\(\pm\)0.0053 & 80.00 & 0.72\(\pm\)0.0080 & 62.05 \\ \hline
Average voting & Classical ensemble & 0.768\(\pm\)0.0055 & 78.00 & 0.67\(\pm\)0.0084 & 58.40 \\ \hline
Majority voting & Classical ensemble & 0.77\(\pm\)0.0055 & 78.38 & 0.67\(\pm\)0.0084 & 58.89 \\ \hline
\end{tabular}
\end{table}
Table 9: Performance comparison of the proposed ensemble with different ensemble learning strategies on Test\({}^{+}\) and Test-21.
Feature space visualization using t-SNE shows that the proposed ensemble INFUSE significantly separates the attacked samples from normal traffic samples (Figure 9).
### Performance Comparison with Existing Techniques
The proposed ensemble INFUSE has been evaluated through a performance comparison with state-of-the-art studies that employed ML, deep learning, and ensemble learning techniques. The results, shown in Table 10, indicate that the proposed ensemble INFUSE significantly outperforms the other classifiers on both Test\({}^{+}\) and Test-21.
Figure 9: Feature space visualized via t-SNE. Panels (a) and (b) show the original feature space and the feature space of the last layer of the proposed INFUSE, respectively.
## 6 Conclusion
Efficient detection of various network traffic attacks is crucial for ensuring a smooth flow of network traffic without interruption. In this study, we developed a robust stacking heterogeneous ensemble using the idea of information fusion to address the problem of dataset shift. Our proposed approach, INFUSE, improves the detection rate by utilizing weight-regularized deep sparse autoencoders, while achieving specificity towards attacks through the exploitation of multiple decision spaces. In the decision stage, a deep neural network-based Meta-Learner intelligently draws a final decision from the hybrid feature space, establishing a good trade-off between specificity and recall. Our performance analysis of the proposed technique on the NSL-KDD dataset in terms of accuracy (Test\({}^{+}\): 91.6%, Test-21: 85.6%) and detection rate (Test\({}^{+}\): 0.94, Test-21: 0.87) demonstrates its effectiveness. Comparison with other ensemble learning techniques and existing techniques highlights the strong detection ability of INFUSE towards unseen attacks.
\begin{table}
\begin{tabular}{|l|l|c|c|} \hline
\multirow{2}{*}{**Reference**} & \multirow{2}{*}{**Technique**} & **NSL-KDD Test\({}^{+}\)** & **NSL-KDD Test-21** \\ \cline{3-4}
 & & **Acc. (\%)** & **Acc. (\%)** \\ \hline
**Proposed INFUSE** & **Information fusion and deep Meta-Learner** & **91.6** & **85.6** \\ \hline
Mushtaq et al. [43] & LSTM\(+\)AE & 89.84 & - \\ \hline
Vinet et al. [44] & Clustering \(+\) SVM & 91.3 & - \\ \hline
Zhang et al. [31] & Stacking ensemble & 84 & - \\ \hline
Sham et al. [45] & Deep CNN & 81.00 & - \\ \hline
Zhou et al. [32] & Voting based ensemble & 87.37 & 73.57 \\ \hline
Tama et al. [46] & Two-stage ensemble & 85.8 & 72.52 \\ \hline
Qureshi et al. [25] & Autoencoder & 84.60 & 79.90 \\ \hline
Chohan et al. [47] & Deep CNN & 89.41 & 80.36 \\ \hline
Singh et al. [48] & RNN & 84.03 & 69.75 \\ \hline
Li et al. [27] & Multi-CNN fusion & 86.95 & 76.67 \\ \hline
Gao et al. [49] & Ensemble & 84.54 & 71.29 \\ \hline
Naseer et al. [28] & Deep CNN & 85.00 & 70.00 \\ \hline
Qatf et al. [26] & Sparse autoencoder \& SVM & 84.96 & 79.42 \\ \hline
Ashfaq et al. [50] & Fuzzy based semi-supervised learning & 84.12 & 68.82 \\ \hline
Yin et al. [51] & RNN & 81.29 & 64.67 \\ \hline
\end{tabular}
\end{table}
Table 10: Performance comparison with existing state-of-the-art techniques.
In the future, this approach may prove useful for zero-day attack detection and other malware analysis problems, enabling improved generalization.
**CRediT authorship contribution statement**
**Anabia Sohail:** Methodology, Software, Supervision, Writing - original draft, Writing - review & editing, **Ayisha Abdullah:** Methodology, Software, Visualization, **Irfan Hameed:** Validation, **Muhammad Mohsin Zafar**: Writing - original draft & review, **Asifullah Khan:** Supervision, Methodology, Writing - review & editing, Project administration, Resources.
**Declaration of Competing Interest**
The authors declare no competing interest.
**Acknowledgments**
The authors thank Pattern Recognition lab at DCIS, PIEAS, for providing computational facilities.
|
2305.18558 | DelBugV: Delta-Debugging Neural Network Verifiers | Deep neural networks (DNNs) are becoming a key component in diverse systems
across the board. However, despite their success, they often err miserably; and
this has triggered significant interest in formally verifying them.
Unfortunately, DNN verifiers are intricate tools, and are themselves
susceptible to soundness bugs. Due to the complexity of DNN verifiers, as well
as the sizes of the DNNs being verified, debugging such errors is a daunting
task. Here, we present a novel tool, named DelBugV, that uses automated delta
debugging techniques on DNN verifiers. Given a malfunctioning DNN verifier and
a correct verifier as a point of reference (or, in some cases, just a single,
malfunctioning verifier), DelBugV can produce much simpler DNN verification
instances that still trigger undesired behavior -- greatly facilitating the
task of debugging the faulty verifier. Our tool is modular and extensible, and
can easily be enhanced with additional network simplification methods and
strategies. For evaluation purposes, we ran DelBugV on 4 DNN verification
engines, which were observed to produce incorrect results at the 2021 neural
network verification competition (VNN-COMP'21). We were able to simplify many
of the verification queries that trigger these faulty behaviors, by as much as
99%. We regard our work as a step towards the ultimate goal of producing
reliable and trustworthy DNN-based software. | Raya Elsaleh, Guy Katz | 2023-05-29T18:42:03Z | http://arxiv.org/abs/2305.18558v1 | # DelBugV: Delta-Debugging Neural Network Verifiers
###### Abstract
Deep neural networks (DNNs) are becoming a key component in diverse systems across the board. However, despite their success, they often err miserably; and this has triggered significant interest in formally verifying them. Unfortunately, DNN verifiers are intricate tools, and are themselves susceptible to soundness bugs. Due to the complexity of DNN verifiers, as well as the sizes of the DNNs being verified, debugging such errors is a daunting task. Here, we present a novel tool, named DelBugV, that uses automated _delta debugging_ techniques on DNN verifiers. Given a malfunctioning DNN verifier and a correct verifier as a point of reference (or, in some cases, just a single, malfunctioning verifier), DelBugV can produce much simpler DNN verification instances that still trigger undesired behavior -- greatly facilitating the task of debugging the faulty verifier. Our tool is modular and extensible, and can easily be enhanced with additional network simplification methods and strategies. For evaluation purposes, we ran DelBugV on 4 DNN verification engines, which were observed to produce incorrect results at the 2021 neural network verification competition (VNN-COMP'21). We were able to simplify many of the verification queries that trigger these faulty behaviors, by as much as 99%. We regard our work as a step towards the ultimate goal of producing reliable and trustworthy DNN-based software.
## I Introduction
Deep neural networks (DNNs) [22] are software artifacts that are generated automatically, through the generalization of a finite set of examples. These artifacts have been shown to outdo manually crafted software in a variety of key domains, such as natural language processing [20, 26, 38], image recognition [26, 62], protein folding [27, 42], and many others. However, this impressive success comes at a price: unlike traditional software, DNNs are opaque artifacts, and are incomprehensible to humans. This poses a serious challenge when it comes to certifying, modifying, extending, repairing or reasoning about them [23, 28, 33].
In an effort to address these issues, the formal methods community has taken up an interest in _DNN verification_[28, 31, 47]: automated techniques that can determine whether a DNN satisfies a prescribed specification, and provide a counter-example if it does not. DNN verification technology has been making great strides, and its applicability has been demonstrated in various domains [2, 3, 4, 19, 31, 34]. In fact, this technology has progressed to a point where DNN verifiers themselves have become quite complex, and consequently error-prone; especially as they often perform delicate arithmetic operations, which can also introduce bugs into the verification process [31]. Thus, it is not surprising that various bugs have been observed in these tools [30]. For example, in the VNN-COMP'21 competition [10], various verifiers have been shown to disagree on the result of multiple verification queries (each query is comprised of a neural network and a property to be checked), or produce incorrect counter-examples, indicating the existence of bugs. Moreover, many of these verifiers are still being developed, with new and experimental features being introduced -- potentially introducing new bugs as well. An inability to trust the results of DNN verifiers could undermine the benefits of DNN verification technology, and clearly needs to be addressed.
Here, we propose to mitigate this issue by adopting known techniques from related fields (e.g., SMT solving [13]) -- specifically, that of _delta debugging_. The idea is to leverage the fact that DNN verification is at a point where many verification tools are available, and to allow engineers to readily compare the results produced by their verification tool to those produced by others, in order to identify and correct bugs. When a verification query that triggers some bug in a verifier is detected, we can initiate an automated process that repeatedly and incrementally _simplifies_ the verification query. After each simplification step, we can check that the verifier in question still disagrees with the remaining, _oracle_ verifiers, until reaching the simplest verification query that we can find. If this final query is much simpler than the original, it will be that much easier for engineers to debug their tools, eventually improving their overall soundness.
We present a new tool, DelBugV (**Del**ta de**Bug**ging neural network **V**erifiers), that takes as input a verification query, a malfunctioning DNN verifier that errs on the given verification query, and an oracle DNN verifier. Within DelBugV, we implement a set of operations for simplifying the neural network of the given verification query into a network with fewer layers and fewer neurons. We empirically design a strategy that applies these operations sequentially in an order that produces much simpler verification queries. In some cases, when the malfunctioning DNN verifier produces a faulty counter-example, DelBugV can run in _single solver_ mode -- without an oracle verifier -- where the query is repeatedly simplified as long as the malfunctioning DNN verifier continues to produce incorrect counter-examples.
For evaluation, we tested DelBugV on 4 DNN verifiers "suspected" of errors, per the results of VNN-COMP'21 [10]: Marabou [33, 43, 61], NNV [52, 53, 54, 55, 63], NeuralVerification.jl (NV.jl) [37], and nnenum [8, 9, 52, 53]. We ran DelBugV on queries where pairs of these verifiers disagreed. Our evaluation demonstrates that DelBugV could reduce the size of the error-triggering queries by an average of \(96.8\%\), and
by as much as \(99\%\) in some cases, resulting in very simple neural networks. We believe that these results highlight the significant potential of our tool and approach.
The rest of the paper is organized as follows. In Sec. II we provide the necessary background on DNNs and their verification. Next, in Sec. III we describe the design of DelBugV, focusing on its algorithm, its network simplification methods, and the strategy we use to apply those methods. The implementation and evaluation of DelBugV are discussed in Sec. IV. This is followed by a discussion of related work in Sec. V, and we conclude in Sec. VI.
## II Background
**Neural Networks.** A _neural network_ is a directed acyclic graph in which the nodes, called neurons, are organized in layers \(l^{0},l^{1},\ldots,l^{n}\); \(l^{0}\) is called the input layer, \(l^{n}\) the output layer, and layers \(l^{1},\ldots,l^{n-1}\) are called hidden layers. Each hidden layer has an associated non-linear _activation function_. In feed-forward networks, which are our subject matter here, neurons in layer \(l^{i}\) have edges connecting them only to neurons in the next layer, layer \(l^{i+1}\).
Each neuron in the network (except the ones in the input layer) has a bias value, and each edge has a weight. The biases and weights belonging to neurons in layer \(l^{i}\) are organized into a vector \(B^{i}\) and a matrix \(W^{i}\), respectively. The \(j,j^{\prime}\)-th entry of \(W^{i}\) is the weight assigned to the edge out-going from the \(j^{\prime}\)-th neuron in layer \(l^{i-1}\) and entering the \(j\)-th neuron in layer \(l^{i}\). For a _fully connected_ layer, \(W^{i}\) is a full matrix; whereas for a _convolutional_ layer, \(W^{i}\) is very sparse, and has a specific structure (discussed later).
An input to neural network \(\mathcal{N}\) is a vector \(I\) of values of the neurons in the input layer, and it produces an output vector \(\mathcal{N}(I)\) which is the values of the neurons in the output layer. We denote the values of neurons in layer \(l^{i}\), prior to applying the activation function, by \(\mathcal{N}^{l^{i}}(I)\); and the values after applying the activation function by \(\mathcal{N}^{a^{i}}(I)\). The values of the neurons are evaluated according to the rules:
\[\mathcal{N}^{l^{0}}(I)=I,\qquad\mathcal{N}^{l^{i}}(I)=W^{i} \mathcal{N}^{a^{i-1}}(I)+B^{i},\] \[\mathcal{N}^{a^{i}}(I)=Act^{i}(\mathcal{N}^{l^{i}}(I))\]
where \(Act^{i}\) is the activation function associated with layer \(l^{i}\).
We define the size of a neural network to be the total number of neurons in the graph (including the neurons in the input and output layers) and denote it by \(|\mathcal{N}|\). The automated training (i.e., selection of weights and biases) of neural networks is beyond our scope here; see, e.g., [22].
Fig. 1 depicts a neural network, \(\mathcal{N}_{e}\), with a single input, a single output, and 2 hidden layers with 3 neurons in each. It uses the ReLU activation function, \(ReLU(x)=\max(0,x)\). The bias of each neuron is listed above it, and weights are listed over the edges (zero values are omitted). In matrix representation, the weights and biases are:
\[W^{1}=\begin{bmatrix}-5\\ -0.5\\ -1\end{bmatrix},B^{1}=\begin{bmatrix}10\\ -2.5\\ 7\end{bmatrix},W^{2}=\begin{bmatrix}0.8&-1&-2\\ 0&0.5&0\\ 2&0.5&-1\end{bmatrix},\]
\[B^{2}=\begin{bmatrix}8\\ 2\\ 0\end{bmatrix},W^{3}=\begin{bmatrix}0.25\\ 2\\ 0.5\end{bmatrix}^{T},B^{3}=\begin{bmatrix}0\end{bmatrix}\]
\(\mathcal{N}_{e}\) is of size 8 (every \(l_{j}^{i}\) and \(r_{j}^{i}\) pair in the figure are counted as one neuron; we split them only for visualization purposes), and has 4 layers. The figure also demonstrates an evaluation of the network, for the input \(x=5\). The assignment of each node is listed below it; and we can see that the produced output in this case is \(y=5\).
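The evaluation rules above can be checked directly; the following minimal NumPy sketch evaluates \(\mathcal{N}_{e}\) on the input \(x=5\) and reproduces the output \(y=5\).

```python
# Minimal sketch: forward evaluation of the running example N_e on I = (5).
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
W = [np.array([[-5.0], [-0.5], [-1.0]]),                               # W^1
     np.array([[0.8, -1.0, -2.0], [0.0, 0.5, 0.0], [2.0, 0.5, -1.0]]), # W^2
     np.array([[0.25, 2.0, 0.5]])]                                     # W^3
B = [np.array([10.0, -2.5, 7.0]), np.array([8.0, 2.0, 0.0]), np.array([0.0])]

a = np.array([5.0])                       # input layer l^0
for i, (w, b) in enumerate(zip(W, B)):
    z = w @ a + b                         # N^{l^i}(I), pre-activation values
    a = relu(z) if i < len(W) - 1 else z  # activation only on hidden layers
print(a)                                  # [5.]
```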
**Convolutional Neural Networks.** A _convolutional neural network_ is a neural network with one or more convolutional layers (typically, these are the first layers of the network). The parameters of a convolutional layer include the height \(h\) and width \(w\) of images in the input; the kernel size \(k\); the stride size \(s\); the padding size \(p\); the input channels \(c_{i}\); the output channels \(c_{o}\); the kernel weights \(W\), given as a tensor of dimensions \((c_{o}\times c_{i}\times k\times k)\); and the biases, \(B\), organized in an array of length \(c_{o}\). We assume for simplicity that the kernel size, padding size, and stride size are equal along all axes, although this is not a limitation of our approach.
The convolutional layer filters its input, which is a \((c_{i}\times h\times w)\)-dimensional tensor, using the above parameters, and outputs a multidimensional tensor representing feature maps. For additional information on how a convolutional layer computes its output, see [22]. Note that convolutional layers are comprised strictly of linear operations.
**Neural Network Verification.** A _property_\(\mathcal{P}\) is a set of constraints on the inputs and outputs of the neural network. These constraints give rise to an input region \(I(\mathcal{P})\) and an output region \(O(\mathcal{P})\). Verifying \(\mathcal{P}\), with respect to some neural
Fig. 1: \(\mathcal{N}_{e}\) An example of a neural network with ReLU activation functions.
network, entails determining whether there exists an input in \(I(\mathcal{P})\) that the neural network maps to an output in \(O(\mathcal{P})\) (the SAT case), or not (the UNSAT case). Typically, \(\mathcal{P}\) is specified so that \(O(\mathcal{P})\) represents _undesirable_ behavior, and so an UNSAT result indicates that the system is correct. \(\mathcal{P}_{e}=(5\leq x\leq 10)\land(5\leq y\leq 10)\) is an example of a property of \(\mathcal{N}_{e}\) in Fig. 1.
A _neural network verifier_ takes in a verification query (a neural network and a property) and attempts to automatically verify it. When successful, it returns a SAT or UNSAT answer; otherwise, it can return ERROR, or TIMEOUT. When a neural network verifier returns SAT, it also returns an input that proves the satisfiability of the query. Given a verifier \(\mathcal{V}\) and a verification query \(Q=(\mathcal{N},\mathcal{P})\), we denote by \(\mathcal{V}(Q)\in\{\texttt{SAT},\texttt{UNSAT},\texttt{ERROR},\texttt{TIMEOUT}\}\) the answer of \(\mathcal{V}\) on \(Q\). If \(\mathcal{V}(Q)=\texttt{SAT}\), we denote by \(\mathcal{V}_{w}(Q)\in I(\mathcal{P})\) the satisfying assignment (the witness) returned by the verifier.
Continuing with our running example, given a sound neural network verifier \(\mathcal{V}_{e}\) and the verification query \(Q_{e}=(\mathcal{N}_{e},\mathcal{P}_{e})\), \(\mathcal{V}_{e}(Q_{e})=\texttt{SAT}\) and a valid witness is \(\left(\mathcal{V}_{e}\right)_{w}(Q_{e})=(5)\), since \(\mathcal{N}_{e}((5))=(5)\in O(\mathcal{P}_{e})\).
Neural network verification is complex, both theoretically and practically [31]; and modern tools apply sophisticated techniques to verify large networks [1]. These techniques are typically theoretically sound, but implementation bugs can cause verifiers to produce incorrect results. These bugs are easier to track and correct if the problem manifests for queries with small networks.
In a situation where two verifiers disagree on the satisfiability of a given query, at least one of them must answer SAT and provide a satisfying assignment. We evaluate the neural network on that assignment, and determine whether it indeed satisfies the property at hand. If so, we conclude that the other verifier, which returned UNSAT, is faulty; otherwise, if the satisfying assignment is incorrect, we determine that the verifier that answered SAT is faulty. The remaining verifier then takes the role of the oracle verifier.
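The following minimal sketch captures this resolution rule; the verifier and property interfaces are assumptions made for illustration.

```python
# Minimal sketch: decide which verifier is faulty by validating the witness.
def find_faulty(network, in_region, out_region, witness):
    """network: callable I -> O; in_region/out_region encode I(P) and O(P)."""
    if in_region(witness) and out_region(network(witness)):
        return "the verifier that answered UNSAT is faulty"  # witness is valid
    return "the verifier that answered SAT is faulty"        # witness is invalid

# Toy usage with the running example: N_e maps the witness 5 to 5 in O(P_e).
print(find_faulty(lambda x: x,                    # stand-in for N_e on this input
                  in_region=lambda x: 5 <= x <= 10,
                  out_region=lambda y: 5 <= y <= 10,
                  witness=5))
```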
## III DelBugV: Delta-Debugging Verification Queries
### _General Flow_
Applying _delta-debugging_ techniques means automatically simplifying an input \(x\) that triggers a bug in the system into a simpler input, \(x^{\prime}\), that also triggers a bug [41]. \(x^{\prime}\) can often trigger the bug faster, thus reducing overall debugging time, and it also exercises fewer code lines that are unrelated to the bug, allowing engineers to more easily identify its root cause. In our setting, given a verification query \(Q=(\mathcal{N},\mathcal{P})\) that triggers a bug in a neural network verifier, we seek to generate another query \(Q^{\prime}=(\mathcal{N}^{\prime},\mathcal{P})\), with a much smaller (simplified) neural network: \(|\mathcal{N}^{\prime}|<|\mathcal{N}|\). The motivation for focusing on the neural network, and not on the verification conditions, is that common verification conditions are typically already quite simple [58], whereas neural network sizes have a crucial effect on verifier performance [31].
The general delta debugging framework that our tool follows appears as Alg. 1. The inputs to the process are a faulty verifier \(\mathcal{V}\), an oracle verifier \(\mathcal{V}_{O}\), and a verification query \(Q=(\mathcal{N},\mathcal{P})\). The algorithm maintains a candidate result neural network \(\mathcal{N}_{r}\) that triggers a bug in \(\mathcal{V}\) and make it produce an incorrect answer, and whose size is iteratively decreased. In each iteration, the algorithm invokes Alg. 2 to attempt simplifying \(\mathcal{N}_{r}\). The process terminates when Alg. 2 states that it cannot simplify \(\mathcal{N}_{r}\) any further, or when a timeout limit is exceeded. Finally, it returns the verification query with the smallest \(\mathcal{N}_{r}\) it achieved.
```
1:\(\mathcal{V}\), \(\mathcal{V}_{O}\), \(Q=(\mathcal{N},\mathcal{P})\) // Faulty Verifier, Oracle Verifier, Verification query
2:\(\mathcal{N}_{r}\leftarrow\mathcal{N}\)
3:progressMade \(\leftarrow\) True
4:while noTimeout() \(\land\) progressMade do
5:\(\mathcal{N}_{r}\leftarrow\mathcal{N}\)
6: progressMade, \(\mathcal{N}\leftarrow\) Simplify(\(\mathcal{V},\mathcal{V}_{O},Q\))
7:return\((\mathcal{N}_{r},\mathcal{P})\)
```
**Algorithm 1**_Reduce Verification Query_
Alg. 2 takes in the same arguments as Alg. 1, and its goal is to perform one successful simplification step on \(\mathcal{N}\), from a pool of potential steps. The algorithm heuristically chooses a sequence of simplification steps to attempt (Line 1), and then performs them, one by one, until one is successful. We propose several simplification steps in Sec. III-B. Specifying the order according to which theses simplification steps are attempted (Line 1) is key, and different strategies may result in different simplified networks -- we propose one such strategy in Sec. III-B.
```
1:\(\mathcal{V}\), \(\mathcal{V}_{O}\), \(Q=(\mathcal{N},\mathcal{P})\) // Faulty Verifier, Oracle Verifier, Verification query
2:True/False, \(Q_{r}\) // Whether the query was simplified, and the simplified query
3:Attempts = \((M_{0},M_{1},\ldots)\leftarrow\) attemptsBySimplificationStrategy(\(\mathcal{N}\))
4:while Attempts \(\neq\emptyset\)do
5:\(M_{i}\leftarrow\)Attempts.\(pop()\)
6:\(\mathcal{N}_{r}\gets M_{i}(\mathcal{N})\)
7:if successSimplification(\(\mathcal{V},\mathcal{V}_{O},(\mathcal{N}_{r},\mathcal{P})\)) then
8:return True, \(\mathcal{N}_{r}\)
9:return False, \(\mathcal{N}\)
```
**Algorithm 2**_Simplify_
Line 5 of Alg. 2 invokes Alg. 3 to check whether the simplification step attempted succeeded or not. To do so, Alg. 3 first checks whether \(\mathcal{V}\) answers SAT, but returns an incorrect counter-example. If so, this candidate should clearly be kept. Otherwise, the algorithm checks whether \(\mathcal{V}\) and \(\mathcal{V}_{O}\) both answer UNSAT or SAT, but disagree; if so, it returns
True. In all other cases, i.e. where one of the verifiers times out, or when there is no basis for comparison (one of the verifiers returned an error), the algorithm returns False, and an alternative simplification step in Alg. 2 is attempted.
```
1:\(\mathcal{V}\), \(\mathcal{V}_{O}\), \(Q=(\mathcal{N},\mathcal{P})\)// Faulty Verifier, Oracle Verifier, Verification query
2:True/False // Was the query successfully simplified?
3:if\(\mathcal{V}(\mathcal{N},\mathcal{P})=\textsc{SAT}\wedge\mathcal{V}_{W}(Q) \notin I(\mathcal{P})\)then
4:return True
5:if\(\mathcal{V}(\mathcal{N},\mathcal{P})=\textsc{SAT}\wedge\mathcal{N}(\mathcal{V}_{W} (Q))\notin O(\mathcal{P})\)then
6:return True
7:if\(\mathcal{V}(\mathcal{N},\mathcal{P}),\mathcal{V}_{O}(\mathcal{N},\mathcal{P}) \in\{\textsc{SAT},\textsc{UNSAT}\}\)\(\wedge\)\(\mathcal{V}(\mathcal{N},\mathcal{P})\neq\mathcal{V}_{O}(\mathcal{N},\mathcal{P})\)then
8:return True
9:return False
```
**Algorithm 3**_successSimplification_
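A compact Python rendering of Algorithms 1-3 is sketched below; the verifier interface (each verifier returns an answer and a witness) and the property interface are assumptions.

```python
# Minimal sketch: the delta-debugging loop of Algorithms 1-3.
def success(v, v_o, net, prop):                        # Algorithm 3
    res, witness = v(net, prop)
    if res == "SAT" and not (prop.in_region(witness)
                             and prop.out_region(net(witness))):
        return True                                    # incorrect counter-example
    res_o, _ = v_o(net, prop)
    return (res in ("SAT", "UNSAT") and res_o in ("SAT", "UNSAT")
            and res != res_o)                          # the solvers disagree

def reduce_query(v, v_o, net, prop, attempts, budget=100):
    for _ in range(budget):                            # Algorithm 1 main loop
        for step in attempts(net):                     # Algorithm 2, strategy order
            candidate = step(net)
            if success(v, v_o, candidate, prop):
                net = candidate                        # keep the smaller network
                break
        else:
            break                                      # no attempt succeeded: done
    return net, prop
```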
One possible risk when using Alg. 1 is a "flip" between the two verifiers. This can happen when initially, \(\mathcal{V}_{O}\) produces a correct answer and \(\mathcal{V}\) does not; but after a simplification step, \(\mathcal{V}\) starts producing the correct answer and \(\mathcal{V}_{O}\) starts producing an incorrect answer. This situation is unlikely: the simplification steps we propose later make local modifications to the network, and are consequently far more likely to continue to trigger the same bug in \(\mathcal{V}\) than to trigger a new one in \(\mathcal{V}_{O}\). Still, this concern can be mitigated even further by using multiple oracle verifiers, and ensuring that they all agree amongst themselves while \(\mathcal{V}\) dissents.
**Single Verifier Mode.** Our approach could also be applied to delta-debug a single verifier that returns incorrect satisfying assignments, without using an oracle. As we explain in Sec. III-B, the simplification methods we apply require the returned satisfying assignment from either the faulty or the oracle verifier, thus, if the faulty verifier returns an incorrect satisfying assignment for the query at hand, we can drop the oracle verifier. This is achieved by removing the last "if" condition from Alg. 3 and removing the oracle verifier \(\mathcal{V}_{O}\) from the inputs.
### _Simplification Methods_
A core component of Alg. 1 is the selection of simplification strategy to apply (Line 1 in Alg. 2). We now describe our pool of neural network simplification methods, and the strategy that we suggest for selecting among them. The goal of all the simplification methods we propose here is to reduce neural network sizes, while keeping the network's behavior (i.e., its outputs) similar to that of the original; especially on the counter-example provided by either the faulty verifier or the oracle verifier. Note that a single simplification method can often be applied multiple times, in different ways, using different input parameters.
**Method 1: linearizing piecewise-linear activation functions between fully-connected layers.** In general, the presence of activation functions is a major source of complexity in the verification process of neural networks: they render the problem NP-complete, require complex mechanisms for linearly approximating them, and often entail case-splitting that slows down the verifiers [31, 40, 59]. Thus, in order to simplify the neural network, we propose to eliminate such activation functions, by _fixing them to a single linear segment_, effectively replacing them with linear constraints. This procedure is performed on an entire layer at a time; which, in turn, creates a sequence of consecutive purely linear layers that can then be merged into a single linear layer, reducing the overall number of layers and neurons in the network.
In choosing the linear segment to which each function is fixed, we propose to use the counter-example \(I\) provided by either the faulty verifier or the oracle verifier. The output of the new linear segment we choose, with respect to \(I\), will match the output of the activation function on \(I\).
For simplicity, we focus here on the ReLU activation function (\(ReLU(x)=\max\left(x,0\right)\)), although the technique is applicable to any piecewise-linear function. Intuitively, in such cases we propose to replace _active_ ReLUs (\(x\geq 0\)) by the identity function, and _inactive_ ReLUs (\(x<0\)) by zero. More formally, observe two consecutive layers, \(l^{t}\) and \(l^{t+1}\), in the neural network \(\mathcal{N}\), where layer \(l^{t}\) has a ReLU activation function. We construct an alternative layer, \(l^{a}\), to replace both \(l^{t}\) and \(l^{t+1}\). \(l^{a}\) inherits the activation function of \(l^{t+1}\). The weights \(W^{a}\) and the biases \(B^{a}\) of \(l^{a}\) are calculated as:
\[W^{a} =W^{t+1}W^{\prime}W^{t}\] \[B^{a} =W^{t+1}W^{\prime}B^{t}+B^{t+1}\]
where
\[W^{\prime}_{i,j}=\begin{cases}1&i=j\wedge\left(\mathcal{N}^{l^{t}}(I)\right)_{i}\geq 0\\ 0&\text{otherwise}\end{cases}\]
Here \(W^{\prime}\) is the new linear segment replacing the activation function ReLU. Finally, the obtained simplified network \(\mathcal{N}_{r}\) is the network \(\mathcal{N}\) where layers \(l^{t}\) and \(l^{t+1}\) are deleted and replaced with \(l^{a}\).
Fig. 2 depicts the result of applying this method on layers \(l^{2}\) and \(l^{3}\) from Fig. 1, using the assignment \(I_{e}=(5)\). Fig. 2(a) depicts the layers selected for merging, and Fig. 2(b) depicts the resulting neural network. Notice that \(\mathcal{N}_{e}^{l^{2}}(I_{e})=(4,2,-2)\), meaning that only the ReLUs in neurons \(l^{2}_{0}\) and \(l^{2}_{1}\) are active. Thus, these ReLUs are replaced by the identity function, whereas the inactive ReLU of \(l^{2}_{2}\) is replaced by \(0\). After this step, layers \(l^{2}\) and \(l^{3}\) perform only linear operations, and are merged into a single layer.
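For illustration, a minimal NumPy sketch of this merging step appears below. It assumes fully connected layers stored as weight matrices and bias vectors acting on column vectors, with `pre_act` holding the pre-activation values of layer \(l^{t}\) on the counter-example \(I\); the function name and interface are illustrative, not DelBugV's actual API:

```python
import numpy as np

def merge_relu_layers(W_t, B_t, W_t1, B_t1, pre_act):
    """Linearize the ReLU of layer l^t on the counter-example and merge
    l^t and l^t+1 into a single layer l^a."""
    # W' fixes each ReLU to the linear segment it uses on I:
    # identity for active ReLUs (pre-activation >= 0), zero otherwise.
    W_prime = np.diag((pre_act >= 0).astype(float))
    W_a = W_t1 @ W_prime @ W_t          # W^a = W^{t+1} W' W^t
    B_a = W_t1 @ W_prime @ B_t + B_t1   # B^a = W^{t+1} W' B^t + B^{t+1}
    return W_a, B_a
```

On the running example, `pre_act = (4, 2, -2)` yields `W_prime = diag(1, 1, 0)`, reproducing the merge of \(l^{2}\) and \(l^{3}\) shown in Fig. 2.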
**Method 2: linearizing piecewise-linear activation functions between convolutional layers.** In this method, a convolutional layer is combined with the layer following it (either a fully connected layer or a convolutional one), and replaced by a single, fully connected layer.
For simplicity, we focus here on the case where the second layer is fully connected. More formally, observe two consecutive layers \(l^{t}\) and \(l^{t+1}\) in \(\mathcal{N}\), where \(l^{t}\) is a convolutional layer and \(l^{t+1}\) is a fully connected layer. Our goal is to construct an alternative layer, \(l^{a}\), that will replace \(l^{t}\) and \(l^{t+1}\). Since a convolutional layer is a particular case of fully connected layer, we construct \(l^{a}\) by first converting the convolutional layer \(l^{t}\) into a fully connected one, denoted \(l^{c}\); then linearizing the activation functions, as in _Method 1_; and finally, combining the two layers into one.
Denote by \(W^{t}\) and \(W^{t+1}\) the matrices representing the weights of layers \(l^{t}\) and \(l^{t+1}\) respectively, and by \(B^{t}\) and \(B^{t+1}\) the vectors representing their respective biases. To transform a convolutional layer into a fully connected one, we calculate the weights \(W^{c}\) and the biases \(B^{c}\) of the fully connected layer replacing the convolutional one, according to the convolutional layer's parameters. First, we turn its input and output from multidimensional tensors into 1-dimensional vectors. The height and width (dimensions) of the feature maps in the convolutional layer's output are \(h_{o},w_{o}\), where
\[h_{o}=\left\lfloor\frac{h+2p-k}{s}\right\rfloor+1,\quad w_{o}=\left\lfloor \frac{w+2p-k}{s}\right\rfloor+1.\]
The convolutional layer's output contains \(c_{o}\) feature maps, i.e., the dimensions of the output are \((c_{o}\times h_{o}\times w_{o})\). Thus, the dimensions of \(W^{c}\) are \((c_{o}h_{o}w_{o}\times c_{i}hw)\). \(W^{c}\) is a sparse matrix. To calculate the value of the \(i,j\)-th entry in \(W^{c}\), we first compute the following values:
\[c_{i}^{\prime} =\left\lfloor\frac{j}{hw}\right\rfloor,\quad c_{o}^{\prime}=\left\lfloor\frac{i}{h_{o}w_{o}}\right\rfloor,\] \[i^{\prime} =\left\lfloor\frac{j-c_{i}^{\prime}hw}{w}\right\rfloor-\left(\left\lfloor\frac{i-c_{o}^{\prime}h_{o}w_{o}}{w_{o}}\right\rfloor\cdot s-p\right)\] \[j^{\prime} =\left((j-c_{i}^{\prime}hw)\bmod w\right)-\left(((i-c_{o}^{\prime}h_{o}w_{o})\bmod w_{o})\cdot s-p\right)\]
\(c_{i}^{\prime}\) and \(c_{o}^{\prime}\) are the input and output channels that the \(i,j\)-th entry should be associated with. \(i^{\prime}\) and \(j^{\prime}\) are the indices in the kernel that correspond to the \(i,j\)-th entry. The weight matrix \(W^{c}\) is given by:
\[W^{c}_{i,j}=\begin{cases}W^{t}_{c_{o}^{\prime},c_{i}^{\prime},i^{\prime},j^{\prime}}&0\leq i^{\prime}<k\wedge 0\leq j^{\prime}<k\\ 0&\text{otherwise}\end{cases}\]
Finally,
\[B^{c}_{i}=B^{t}_{\left\lfloor\frac{i}{h_{o}w_{o}}\right\rfloor}\]
By this construction, \(W^{c}\) and \(B^{c}\) have the same functionality as the convolutional operation they replace. This step may temporarily increase the number of edges in the network (but not the number of neurons); this is required to prepare for the minimization step.
The next step is to linearize the ReLU. This is done in a similar manner to the linearization in the previous method, from which we get \(W^{\prime}\). Next, we construct the weights \(W^{a}\) and the biases \(B^{a}\) of the alternative layer \(l^{a}\):
\[W^{a}= W^{t+1}W^{\prime}W^{c}\] \[B^{a}= W^{t+1}W^{\prime}B^{c}+B^{t+1}\]
And the activation function assigned to the new layer \(l^{a}\) is the same as the one assigned to layer \(l^{t+1}\). Finally, the simplified neural network \(\mathcal{N}_{r}\) is the network \(\mathcal{N}\), where layers \(l^{t}\) and \(l^{t+1}\) are deleted and replaced with \(l^{a}\).
In case \(l^{t+1}\) is also a convolutional layer, we convert it to a fully connected layer, as we did with \(l^{t}\); and the remainder of the process is unchanged.
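The closed-form index construction above can equivalently be realized by probing: since a convolutional layer without its bias is a linear map, each column of \(W^{c}\) is the flattened response of the layer to a one-hot input. The following NumPy sketch uses this slower but easy-to-verify construction; all names are illustrative:

```python
import numpy as np

def conv_to_fc(kernels, biases, c_in, h, w, s=1, p=0):
    """Build (W^c, B^c) of the fully connected layer equivalent to a
    convolutional layer with kernels of shape (c_out, c_in, k, k)."""
    c_out, _, k, _ = kernels.shape
    h_o = (h + 2 * p - k) // s + 1
    w_o = (w + 2 * p - k) // s + 1

    def conv(x):  # plain cross-correlation with stride s and padding p, no bias
        xp = np.pad(x, ((0, 0), (p, p), (p, p)))
        out = np.zeros((c_out, h_o, w_o))
        for co in range(c_out):
            for i in range(h_o):
                for j in range(w_o):
                    patch = xp[:, i * s:i * s + k, j * s:j * s + k]
                    out[co, i, j] = np.sum(kernels[co] * patch)
        return out

    W_c = np.zeros((c_out * h_o * w_o, c_in * h * w))
    for col in range(c_in * h * w):            # one column per input pixel
        one_hot = np.zeros(c_in * h * w)
        one_hot[col] = 1.0
        W_c[:, col] = conv(one_hot.reshape(c_in, h, w)).ravel()
    B_c = np.repeat(biases, h_o * w_o)         # B^c_i = B^t_{floor(i/(h_o w_o))}
    return W_c, B_c
```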
**Method 3: merging neurons.** In this method, we seek to merge a pair of neurons in the same layer into a single neuron, thus decreasing the neural network size by one neuron. Of course, this entails selecting the weights of this new neuron's incoming and outgoing edges, as well as its bias. Our motivation is to cause the merged neuron to produce values close to those of the original neurons, and consequently cause little change in the neural network's eventual output. We present first the technical process of merging neurons, and later discuss _which_ pairs of neurons should be merged.
Fig. 2: \(\mathcal{N}_{e}\) with layers \(l^{2}\) and \(l^{3}\) selected in orange (a), and then merged (b).

We focus again on the case where the activation function is ReLU. We first use the counter-example \(I\) (returned by either the faulty verifier or the oracle verifier) to check whether the activation functions of the neurons being merged have the same phase -- i.e., if they are both active, or both inactive. If they have the same phase, we compute the merged neuron's weights and biases using the original neurons' weights and biases. Specifically, the weight of each edge incoming to the merged neuron is the mean of the original incoming edge weights, and the neuron's bias is the mean of the original neurons' biases; whereas the weights of its outgoing edges are the weighted sum, according to \(I\), of the original outgoing edge weights (a weighted sum is needed, instead of a simple sum, to ensure that the neurons in the following layer obtain values similar to their original ones with respect to \(I\)). In case one of the neurons is active and the other is inactive, we simply delete the inactive one, since it does not contribute to the following layer's neuron values (with respect to \(I\)).
Formally, given a neural network \(\mathcal{N}\), two successive layers in it, \(l^{t}\) and \(l^{t+1}\), and two neuron indices \(b<c\), we construct two alternative layers \(l^{a}\) and \(l^{a+1}\) that will replace \(l^{t}\) and \(l^{t+1}\) respectively. Additionally, \(l^{a}\) and \(l^{a+1}\) inherit the activation functions of \(l^{t}\) and \(l^{t+1}\) respectively. If the ReLUs of neurons \(b\) and \(c\) in layer \(l^{t}\) have the same phase, i.e., \(\left(\mathcal{N}^{l^{t}}(I)\right)_{b},\left(\mathcal{N}^{l^{t}}(I)\right)_{c}>0\) or \(\left(\mathcal{N}^{l^{t}}(I)\right)_{b},\left(\mathcal{N}^{l^{t}}(I)\right)_{c}<0\), the weights and the biases \(W^{a},W^{a+1},B^{a},B^{a+1}\) of the alternative layers are calculated as follows:
\[B^{a}_{i} =\begin{cases}B^{t}_{i}&i<b\lor b<i<c\\ \frac{B^{t}_{b}+B^{t}_{c}}{2}&i=b\\ B^{t}_{i+1}&c\leq i\end{cases}\] \[B^{a+1} =B^{t+1}\] \[W^{a}_{i,j} =\begin{cases}W^{t}_{i,j}&i<b\lor b<i<c\\ \frac{W^{t}_{b,j}+W^{t}_{c,j}}{2}&i=b\\ W^{t}_{i+1,j}&c\leq i\end{cases}\] \[W^{a+1}_{i,j} =\begin{cases}W^{t+1}_{i,j}&j<b\lor b<j<c\\ \frac{2\cdot\left(W^{t+1}_{i,b}\left(\mathcal{N}^{l^{t}}(I)\right)_{b}+W^{t+1}_{i,c}\left(\mathcal{N}^{l^{t}}(I)\right)_{c}\right)}{\left(\mathcal{N}^{l^{t}}(I)\right)_{b}+\left(\mathcal{N}^{l^{t}}(I)\right)_{c}}&j=b\\ W^{t+1}_{i,j+1}&c\leq j\end{cases}\]
Otherwise, if the ReLUs of the neurons \(b\) and \(c\) in layer \(l^{t}\) have different phases: \(\left(\mathcal{N}^{l^{t}}(I)\right)_{b}>0\wedge\left(\mathcal{N}^{l^{t}}(I) \right)_{c}<0\) (assume w.l.o.g. that the \(c\)-th neuron is the inactive one), the weights and biases \(W^{a},W^{a+1},B^{a},B^{a+1}\) of the alternative layers are calculated as follows:
\[B^{a}_{i} =\begin{cases}B^{t}_{i}&i<c\\ B^{t}_{i+1}&c\leq i\end{cases},\qquad B^{a+1}=B^{t+1}\] \[W^{a}_{i,j} =\begin{cases}W^{t}_{i,j}&i<c\\ W^{t}_{i+1,j}&c\leq i\end{cases},\quad W^{a+1}_{i,j}=\begin{cases}W^{t+1}_{i,j} &j<c\\ W^{t+1}_{i,j+1}&c\leq j\end{cases}\]
Finally, the obtained simplified neural network \(\mathcal{N}_{r}\), is the network \(\mathcal{N}\) where layers \(l^{t}\) and \(l^{t+1}\) are replaced with \(l^{a}\) and \(l^{a+1}\) respectively. This method can be applied repeatedly, to reduce the network size even further.
An example of applying this method to the pair of neurons \(l^{2}_{0}\) and \(l^{2}_{1}\) in \(\mathcal{N}_{e}\) from Fig. 1, using the assignment \(I_{e}=(5)\), appears in Fig. 3. Fig. 3(a) shows the neurons selected for merging, and Fig. 3(b) shows the result of the merge.
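A compact NumPy sketch of the merging computation is given below; the interface is illustrative, and it assumes layer weights stored as matrices acting on column vectors, with `pre_act` the pre-activation values of layer \(l^{t}\) on \(I\):

```python
import numpy as np

def merge_neuron_pair(W_t, B_t, W_t1, b, c, pre_act):
    """Merge neurons b < c of ReLU layer l^t into one neuron (kept in
    slot b), adapting the following layer's incoming weights W_t1."""
    vb, vc = pre_act[b], pre_act[c]
    W_a, B_a, W_a1 = W_t.copy(), B_t.copy(), W_t1.copy()
    if (vb >= 0) == (vc >= 0):        # same phase (assumes vb + vc != 0)
        W_a[b] = (W_t[b] + W_t[c]) / 2.0            # mean incoming weights
        B_a[b] = (B_t[b] + B_t[c]) / 2.0            # mean bias
        # Weighted sum of outgoing weights, so the next layer sees
        # (approximately) the same values on the counter-example I.
        W_a1[:, b] = 2.0 * (W_t1[:, b] * vb + W_t1[:, c] * vc) / (vb + vc)
        drop = c
    else:                             # different phases: drop the inactive one
        drop = b if vb < 0 else c
    return (np.delete(W_a, drop, axis=0),
            np.delete(B_a, drop),
            np.delete(W_a1, drop, axis=1))
```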
Choosing which pair of neurons to merge is crucial for the success of this method. Every two neurons in the same layer are valid candidates; however, some pairs are more likely to succeed than others by resulting in a simplified neural network that behaves similarly to the original. We consider the following possible approaches for prioritizing between the pairs: (1) an arbitrary ordering; (2) prioritizing pairs with neurons that are assigned similar values (prior to the activation function), when the network is evaluated on assignment \(I\). The motivation is that merging such pairs is expected to have smaller effect on the overall functionality of the neural network; (3) prioritizing pairs of neurons whose ReLUs are inactive when evaluated on \(I\). The motivation is that inactive neurons may have little effect on the bug at hand. This approach can be combined with Approach 2 to prioritize pairs with similar values after categorizing them by the status of the ReLUs; (4) prioritizing pairs of neurons with positive values with respect to \(I\). This approach, too, can be combined with Approach 2; and (5) prioritizing pairs of neurons with negative values, and then pairs with positive values, with respect to \(I\). This approach is a combination of Approaches 3 and 4, and again uses Approach 2 for internal prioritization within each category.
Fig. 3: \(\mathcal{N}_{e}\) with neurons \(l^{2}_{0}\) and \(l^{2}_{1}\) selected in orange (a), and then merged (b).

**Strategy for applying the simplification rules.** Within Alg. 1, the simplification steps mentioned above can be invoked in any order. We propose to attempt methods that significantly reduce the neural network size first, in order to reduce verification times. We empirically observed that this is achieved by the following strategy: first, attempt to linearize and merge convolutional layers (_Method 2_). Second, attempt to linearize and merge fully connected layers (_Method 1_) -- starting with the output layer, and working backwards towards the input layer. Finally, merge neurons (_Method 3_) according to Approach 5. However, our implementation is highly customizable, and users can configure it to use any other strategy, according to the task at hand.
To illustrate, applying our proposed strategy to \(\mathcal{N}_{e}\) from Fig. 1, with respect to the assignment \(I_{e}=(5)\) in which \(\mathcal{N}_{e}^{l^{1}}(I_{e})=(-15,-5,2)\) and \(\mathcal{N}_{e}^{l^{2}}(I_{e})=(4,2,-2)\), would result in attempting the simplification methods in the following order: (1) merge the layers \(l^{2}\) and \(l^{3}\); (2) merge the layers \(l^{1}\) and \(l^{2}\); (3) merge the pair of neurons \(l^{1}_{0},l^{1}_{1}\); (4) merge the pair of neurons \(l^{2}_{1},l^{2}_{2}\); (5) merge the pair of neurons \(l^{2}_{0},l^{2}_{2}\); (6) merge the pair of neurons \(l^{1}_{1},l^{1}_{2}\); and then, (7) merge the pair of neurons \(l^{1}_{0},l^{1}_{2}\). These steps are attempted, in order, until one succeeds, after which the strategy is reapplied to the simplified network, and so on.
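A sketch of this greedy loop, with `candidate_steps` as a hypothetical generator that yields simplification attempts in the priority order just described, might look as follows:

```python
def simplify(network, query, discrepancy_holds):
    """Greedy strategy: try candidate steps in priority order, keep the
    first one that preserves the discrepancy, then restart from scratch."""
    progress = True
    while progress:
        progress = False
        for step in candidate_steps(network):   # Method 2, then 1, then 3
            candidate = step.apply(network)
            if discrepancy_holds(candidate, query):
                network = candidate             # accept and reapply strategy
                progress = True
                break
    return network
```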
## IV Implementation and Evaluation
We designed our tool, DelBugV, to be compatible with the standard input format used in the VNN-COMP competition [10], in which verification queries are encoded using the _VNN-LIB_ format [12]; and which, in turn, relies on the _Open Neural Network Exchange_ (_ONNX_) format. This facilitated integrating DelBugV with the various verifiers. DelBugV is implemented in Python, and contains classes that wrap objects of these formats. The tool has a modular design that allows applying our proposed minimization methods in any order desired.
VNN-COMP'21 included 12 participating neural network verifiers, and these were tested on a set of verification queries. We began by extracting from the VNN-COMP'21 results pairs of dissenting verifiers, and the verification queries that triggered these discrepancies. Each such triple (two verifiers and a query) constitutes an input to DelBugV. This extraction led us to target the following verifiers: (1) Marabou [33]; (2) NNV [52, 53, 54, 55, 63]; (3) NeuralVerification.jl (NVjl) [37]; and (4) nnenum [8, 9, 52, 53]. In the experiments described next, we used the same versions of these verifiers that were used in VNN-COMP'21.
**Neuron Merging and Prioritization Approaches.** For our first experiment, we set out to determine which of the neuron-pair prioritization schemes described as part of _Method 3_ in Sec. III-B is the most successful. We measured success along two parameters: the size of the simplified network obtained, and the percentage of successful merging steps along the way. We tested our algorithm on 5 input triples, involving networks of size 310 each. Using only _Method 3_, we ran DelBugV with each of the prioritization schemes, and counted, for each, the number of merging steps performed and the number of steps that succeeded. Table I shows the results of this comparison: the second column indicates, for every approach, the percentage of successful steps out of all steps tried, aggregated over all 5 benchmarks.
Looking at the average reduction sizes, the results indicate that all 5 approaches were able to achieve a similar reduction in size, with a slight advantage to approaches 1, 3 and 5. However, the number of successful merges varied significantly -- from Approach 1, in which only 37.2% of the merge steps were successful, and up to 75.9% for Approach 5 (in bold). These results thus indicate that Approach 5 is the most efficient of the 5, and so we used it as our default strategy for Method 3 in the subsequent experiments.
**Linearizing ReLU Activations.** In _Method 1_ and _Method 2_ in Sec. III-B, we proposed to linearize activation functions, and then merge them with the previous and following layers. These methods can be applied to any piecewise-linear activation function in the network. The order in which they are applied is customizable. In this experiment, we set out to compare linearizing ReLUs in ascending order (from input layer towards output layer), and in descending order (from output towards input). Table II shows the results of this experiment.
Every row in the table corresponds to an input triple to DelBugV (two disagreeing verifiers and a verification query that they disagreed on), and the two simplification approaches that were attempted. For each such experiment, the second column indicates the number of simplification steps tried, until DelBugV reached saturation (there were no additional steps to try). The third column indicates the number of successful steps out of all the steps. Column four shows the percentage of successful steps out of all steps, and the final column shows the reduction percentage in the neural network size. When one of the approaches was clearly superior, the entry appears in bold.
To analyze the results, observe, e.g., the 5th experiment in Table II. The results imply that when using the ascending approach, 12 linearizing and merging steps were made, until the network could not be simplified further with either _Method 1_ or _Method 2_. Of these 12 steps, 5 were successful -- and consequently, the simplified network has 5 fewer layers than the original. In contrast, with the descending approach only 9 steps were made until the network could not be simplified further, 6 of which were successful. Consequently, the simplified network in this case has 6 fewer layers compared to the original.
The results indicate that linearizing in descending order slightly outperforms linearizing in ascending order, although the gap is not very significant. The neural network in the last row included a convolutional layer, and, according to the results, linearizing it in ascending order performed better.
|            | Successful merges (%) | Average reduction (%) |
| ---------- | --------------------- | --------------------- |
| Approach 1 | 37.2%                 | 96.0%                 |
| Approach 2 | 68.4%                 | 95.9%                 |
| Approach 3 | 71.6%                 | 96.0%                 |
| Approach 4 | 62.9%                 | 95.8%                 |
| Approach 5 | **75.9%**             | 96.0%                 |

TABLE I: Comparing neuron-merging approaches (Method 3) by size reduction and successful merges.
After investigating this query further, we noticed that in the ascending order approach, the convolutional layer was merged into a fully connected one; whereas the descending approach did not succeed in removing or merging any convolutional layers. We thus conclude that, for a convolutional network, it is advisable to apply _Method 2_ before applying _Method 1_.
**Delta Debugging Discrepancies from VNN-COMP'21.** For our final experiment, we considered 13 triples of verifiers, oracle verifiers, and verification queries. Of these triples, 11 contained DNNs from the ACAS-Xu family [31], 1 contained a DNN from MNIST [36], and 1 contained a DNN from the Oval21 benchmark [10]. Using the optimal configuration of our tool as previously discussed, we applied the full-blown delta-debugging algorithm to all 13 benchmarks. The results appear in Table III. Every row in the table represents a triple; the first two columns indicate the number of neurons in the original network, and the number of remaining neurons after delta debugging was applied. The next two columns indicate the number of layers in the original and reduced networks, and the final column indicates the percentage of neurons that were removed.
Overall, the algorithm performed exceedingly well, reducing the network sizes by an average of 96.8% (!); and, in some cases, causing a size decrease of 99%, from a neural network with 1306 neurons and 4 layers to just 11 neurons and 2 layers (an input layer and an output layer, without any activation functions). The minimal decrease observed was 95%, from 310 neurons to 13. We regard these results as a very strong indication of the usefulness of delta debugging in the context of DNN verification. Further analyzing the results, we observe that the ReLU linearization simplification rule was responsible for an average of 66% of the size reduction, whereas the remaining two rules were responsible for an average of 34% -- indicating that the ReLU linearization simplification rule is the main workhorse of our approach at its current configuration.
## V Related Work
With the increasing pervasiveness of DNNs, the verification community has been devoting growing efforts to verifying them. Numerous approaches have been proposed, including SMT-based approaches [24, 31, 32, 33, 50, 60], approaches based on LP or MILP solvers [15, 17, 51], reachability-based approaches [39, 63], abstraction and abstract-interpretation based approaches [6, 19, 25, 28, 40, 46, 48, 59], synthesis-based approaches [34, 44], run-time optimization [5, 7], quantitative verification [11], verification of recurrent networks [29, 65], and many others. These approaches, in turn, have been used in numerous application domains [16, 18, 21, 49, 56, 57, 64]. Given the scope of these efforts, and the number of available tools, it is not surprising that bugs are abundant, and that engineers are in need of efficient debugging tools.
To the best of our knowledge, no previous work has applied delta debugging in the context of DNN verification, although similar approaches have been shown successful in the related domains of SMT [41, 13] and SAT [14] solving. Related efforts have attempted to reduce DNN sizes, with the purpose of producing smaller-but-equivalent networks, or networks smaller with a respect to a particular verification property of interest [6, 35, 45, 46]. In the future, principles from these approaches could be integrated as simplification strategies within our delta-debugging approach.
## VI Conclusion
In this paper, we presented the DelBugV tool for automatically reducing the size of a verification query with respect to an erroneous neural network verifier. We focused on delta-debugging techniques, and proposed multiple minimization methods for reducing neural network sizes. These techniques attempt to simplify the neural network in question, while modifying it as little as possible. We also suggested a strategy for the order in which to apply those methods. We demonstrated the effectiveness of DelBugV on actual benchmarks from the VNN-COMP'21 competition, and were able to significantly simplify them. We regard this work as another step towards more sound tools for DNN verification.
**Acknowledgements.** This work was partially supported by the Israel Science Foundation (grant number 683/18).
| # | Linearizing approach | No. of steps | No. of successful steps | Successful steps (%) | Neuron reduction (%) |
| --- | ---------- | --- | --- | ---------- | --------- |
| 1 | Ascending  | 6   | 6   | 100.0%     | 96.7%     |
|   | Descending | 6   | 6   | 100.0%     | 96.7%     |
| 2 | Ascending  | 6   | 6   | 100.0%     | 96.7%     |
|   | Descending | 6   | 6   | 100.0%     | 96.7%     |
| 3 | Ascending  | 6   | 6   | 100.0%     | 96.7%     |
|   | Descending | 6   | 6   | 100.0%     | 96.7%     |
| 4 | Ascending  | 6   | 0   | 0.0%       | 0.0%      |
|   | Descending | 6   | 0   | 0.0%       | 0.0%      |
| 5 | Ascending  | 12  | 5   | 41.6%      | 80.6%     |
|   | Descending | 9   | 6   | **66.6%**  | **96.7%** |
| 6 | Ascending  | 3   | 2   | 66.6%      | 39.2%     |
|   | Descending | 2   | 2   | **100.0%** | 39.2%     |
| 7 | Ascending  | 3*  | 2*  | **66.6%**  | **65.8%** |
|   | Descending | 2*  | 1   | 50.0%      | 0.0%      |

TABLE II: Comparing linearizing layers approaches by successful steps. * indicates the existence of a convolutional layer.
| Neurons in original | Neurons in reduced | Layers in original | Layers in reduced | Reduction percentage |
| ---- | -- | -- | - | ---- |
| 310  | 6  | 8  | 2 | 98%  |
| 310  | 7  | 8  | 2 | 97%  |
| 310  | 6  | 8  | 2 | 98%  |
| 310  | 12 | 8  | 8 | 96%  |
| 310  | 6  | 8  | 2 | 98%  |
| 9326 | 12 | 5* | 3 | 99%  |
| 1306 | 11 | 4  | 2 | 99%  |
| 310  | 10 | 8  | 3 | 96%  |
| 310  | 6  | 8  | 2 | 98%  |
| 310  | 10 | 8  | 4 | 96%  |
| 310  | 9  | 8  | 4 | 97%  |
| 310  | 13 | 8  | 6 | 95%  |

TABLE III: Delta-debugging using our algorithm. * indicates the existence of a convolutional layer. |
2305.10110 | Adaptive aggregation of Monte Carlo augmented decomposed filters for
efficient group-equivariant convolutional neural network | Group-equivariant convolutional neural networks (G-CNN) heavily rely on
parameter sharing to increase CNN's data efficiency and performance. However,
the parameter-sharing strategy greatly increases the computational burden for
each added parameter, which hampers its application to deep neural network
models. In this paper, we address these problems by proposing a
non-parameter-sharing approach for group equivariant neural networks. The
proposed methods adaptively aggregate a diverse range of filters by a weighted
sum of stochastically augmented decomposed filters. We give theoretical proof
about how the continuous group convolution can be approximated by our methods.
Our method applies to both continuous and discrete groups, where the
augmentation is implemented using Monte Carlo sampling and bootstrap
resampling, respectively. We demonstrate that our methods serve as an efficient
extension of standard CNN. Experiments on group equivariance tests show how our
methods can achieve superior performance to parameter-sharing group equivariant
networks. Experiments on image classification and image denoising tasks show
that in certain scenarios, with a suitable set of filter bases, our method
helps improve the performance of standard CNNs and build efficient lightweight
image denoising networks. The code will be available at
https://github.com/ZhaoWenzhao/MCG_CNN. | Wenzhao Zhao, Barbara D. Wichtmann, Steffen Albert, Angelika Maurer, Frank G. Zöllner, Ulrike Attenberger, Jürgen Hesser | 2023-05-17T10:18:02Z | http://arxiv.org/abs/2305.10110v3 | Adaptive aggregation of Monte Carlo augmented decomposed filters for efficient group-equivariant convolutional neural network
###### Abstract
Filter-decomposition-based group-equivariant convolutional neural networks (G-CNN) have been demonstrated to increase CNN's data efficiency and contribute to better interpretability and controllability of CNN models. However, so far filter-decomposition-based affine G-CNN methods rely on parameter sharing for achieving high parameter efficiency and suffer from a heavy computational burden. They also use a limited number of transformations and in particular ignore the shear transform in the application. In this paper, we address these problems by emphasizing the importance of the diversity of transformations. We propose a flexible and efficient strategy based on weighted filter-wise Monte Carlo sampling. In addition, we introduce shear equivariant CNN to address the highly sparse representations of natural images. We demonstrate that the proposed methods are intrinsically an efficient generalization of traditional CNNs, and we explain the advantage of bottleneck architectures used in the existing state-of-the-art CNN models such as ResNet, ResNet, and ConvNeXt from the group-equivariant perspective. Experiments on image classification and image denoising tasks show that with a set of suitable filter basis, our methods achieve superior performance to standard CNN with high data efficiency. The code will be available at [https://github.com/ZhaoWenzhao/MCG_CNN](https://github.com/ZhaoWenzhao/MCG_CNN).
Group equivariance, convolutional neural network, Monte Carlo sampling, filter decomposition.
## I Introduction
Convolutional neural networks (CNNs) belong to the most widespread deep neural network architectures in computer vision. Their success originates from the "sliding window" strategy inspired by human vision [13][29], which exhibits the desirable property of translation equivariance. In recent years, a large number of publications have emerged aiming at developing and applying more advanced group equivariant CNNs to improve CNN's sample efficiency and generalizability [27][19][34]. The concept of group equivariant CNN (GCNN) was first proposed by Cohen and Welling in [6], which exploited a higher degree of weight sharing by increasing the number of convolutional channels with the periodic rotation of the same convolutional kernel. This idea was further extended in [8] by introducing steerable filters which decomposed the convolutional kernel with an orthogonal basis of roto-reflection groups.
Following the work on rotation-equivariant CNNs, in recent years there have been many filter-decomposition-based studies exploring scale-equivariant CNNs [41][40][39][51] and scale-rotation-equivariant CNNs [14][19]. Attention mechanisms have been introduced in [38][19] to help better identify optimal filter banks and boost equivariance performance. The idea of group equivariance has also been introduced to transformer networks to improve the transformer's data efficiency. Apart from filter decomposition, more recently, feature alignment has also proven to be helpful for improving CNN's group equivariance against affine image transforms [42].
The existing works for filter-decomposition-based group equivariant CNN all require increasing channel numbers to increase parameter sharing, which brings in a heavy computational burden [27] and hence hampers their practical application to natural images. Due to the computational burden needed for considering one kind of transform equivariance, the existing works of affine G-CNN are limited to transforms such as scaling, rotation, and reflection. So far, further including the shear transform is rarely considered in the conventional framework of affine G-CNN. In this paper, we propose an efficient implementation based on an adaptive aggregation of Monte Carlo augmented decomposed filters. The contribution of this paper is embodied in three aspects:
* Our approach does not increase the computational burden and achieves high parameter and data efficiency compared with conventional CNNs.
* In addition, thanks to the convenience of weighted Monte Carlo (MC) sampling in implementation, our work can consider a more flexible mix of different transforms; we thereby introduce the shear transform and demonstrate its potential to improve networks' performance on natural images.
* Our methods achieve superior performance to conventional CNN in both image classification and image denoising tasks.
The paper is organized as follows: In the Methods section, we review the general framework of the group-equivariant model and introduce the details of our approach. We show the experimental results and discussions in the Experiments section and conclude the paper in the Conclusion section.
## II Methods
### _The general framework of group-equivariant model_
Borrowing the concepts of [24], we will briefly introduce the definition of group equivariant mapping and group convolution. Although we constrain the discussion to a few transformation groups, the concept can be applied to any type of group and hence group equivariance. In particular, it applies to any dimension of the image space.
#### Ii-A1 Group equivariance
In this paper, we consider a group \(G\) for the affine transformations on 2D images \(\mathbb{R}^{2}\), which can be written as \(G=\mathbb{R}^{2}\rtimes\mathcal{A}\), a semidirect product between the translation group \(\mathbb{R}^{2}\) and another affine transform group \(\mathcal{A}\) (whose group element for 2D images takes the representation of a \(2\times 2\) matrix). Its group product rule is defined as
\[\begin{array}{l}g_{1}\bullet g_{2}=(x_{1},a_{1})\bullet(x_{2},a_{2})\\ =(x_{1}+M(a_{1})x_{2},a_{1}+a_{2}),\end{array} \tag{1}\]
where "\(\bullet\)" denotes the group product operator, \(g_{1}=(x_{1},a_{1})\), \(g_{2}=(x_{2},a_{2})\) with \(x_{1},x_{2}\in\mathbb{R}^{2}\), \(a_{1},a_{2}\in\mathbb{R}^{3}\), and function \(M:\mathbb{R}^{3}\rightarrow\mathcal{A}\). In this paper, we consider the following affine group: in particular, for any \(a=(\alpha,\theta,s)\) with \(\alpha,\theta,s\in\mathbb{R}\), \(M(a)=R(\theta)A(\alpha)S(s)\), where
\[S(s)=\begin{bmatrix}1&s\\ 0&1\end{bmatrix}, \tag{2}\]
\[A(\alpha)=\begin{bmatrix}2^{\alpha}&0\\ 0&2^{\alpha}\end{bmatrix}, \tag{3}\]
\[R(\theta)=\begin{bmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{bmatrix}. \tag{4}\]
It should be noted that the existing works on affine G-CNN only consider translation, scaling, rotation, and mirror transforms. In this work, shear transform is included to form a more general case and explore its potential for boosting G-CNN's performance on natural images.
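As an illustration, a minimal NumPy sketch of \(M(a)\), following Eqs. (2)-(4), is:

```python
import numpy as np

def transform_matrix(alpha, theta, s):
    """M(a) = R(theta) A(alpha) S(s) for a = (alpha, theta, s)."""
    S = np.array([[1.0, s], [0.0, 1.0]])                      # shear, Eq. (2)
    A = np.array([[2.0 ** alpha, 0.0], [0.0, 2.0 ** alpha]])  # scaling, Eq. (3)
    R = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])           # rotation, Eq. (4)
    return R @ A @ S
```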
For a group element of the affine transformation group \(g\in G\), there is a corresponding group action on an index set \(\mathcal{X}\), i.e., a transformation \(T:G\times\mathcal{X}\rightarrow\mathcal{X}\). And for any \(g_{1},g_{2}\in G\) and \(x\in\mathcal{X}\), we have
\[T(g_{1}\bullet g_{2},x)=T(g_{1},T(g_{2},x)). \tag{5}\]
For any function \(f:\mathcal{X}\rightarrow\mathbb{C}\), we further define \(\mathbb{T}_{g}:f\to f^{\prime}\) where \(f^{\prime}(T(g,x))=f(x)\).
With the concept of group and group actions, we can now define the group equivariant map. Suppose we have a function \(f:\mathcal{X}\to V\) to be the input image or feature map of a neural network layer with \(V\) as a vector space. Let \(L_{V}(\mathcal{X})\) denote the Banach space of functions \(f:\mathcal{X}\to V\). Consider a map \(\phi:L_{V_{1}}(\mathcal{X}_{1})\to L_{V_{2}}(\mathcal{X}_{2})\) between two function spaces \(L_{V_{1}}(\mathcal{X}_{1}):\{f:\mathcal{X}_{1}\to V_{1}\}\) and \(L_{V_{2}}(\mathcal{X}_{2}):\{f:\mathcal{X}_{2}\to V_{2}\}\). For \(g\in G\), we have \(T_{g}\) and \(T_{g}^{\prime}\) to be G actions corresponding to set \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\), as well as \(\mathbb{T}_{g}\) and \(\mathbb{T}_{g}^{\prime}\). The map \(\phi\) is group equivariant if and only if
\[\forall g\in G,\phi(\mathbb{T}_{g}(f))=\mathbb{T}_{g}^{\prime}(\phi(f)) \tag{6}\]
#### Ii-A2 Group convolution
A standard convolution of functions \(f\) with \(\psi\colon\mathbb{R}\rightarrow\mathbb{R}\) is a translation-equivariant map, which can be written as
\[(\psi\ast f)(x)=\int\psi(-x+x^{\prime})f(x^{\prime})dx^{\prime}, \tag{7}\]
Group convolution is a generalization of standard convolution by introducing the group operation. The group convolution [24][7][3][19] on a compact group \(G\) at group element \(g\) is written as
\[(\psi\ast f)(g)=\int_{G}\psi(g^{-1}\bullet g^{\prime})f(g^{\prime})d\mu(g^{ \prime}) \tag{8}\]
where \(\mu\) is the Haar measure, and \(f,\psi:G\rightarrow\mathbb{C}\). It should be noted that plain convolution is a special case of group convolution when only the translation group is considered (i.e., \(g^{-1}=-x\); \(g^{\prime}=x^{\prime}\) and the "\(\bullet\)" corresponds to "\(+\)"). [24] proved that the group convolution defined in the equation (8) is a group-equivariant map for affine transform groups.
### _Adaptive aggregation of Monte Carlo augmented decomposed filters_
In a discrete implementation of group convolution, the integral is usually implemented based on the trapezoidal rule [2] using evenly sampled group elements \(g^{\prime}\) in equation (8). For each input feature map channel (when considering many different kinds of affine transforms such as scaling, rotation, and mirror), nested integrals are needed, i.e. one nested integral per transform considered. By this, the approach increases the computation burden exponentially with the number of considered transforms leading to the curse of dimensionality [45]. For example, when we have \(m\) different elements per transform and \(n\) transforms, this amounts to \(m^{n}\) terms to be evaluated.
To improve the flexibility of group convolution for the general affine transform group and avoid the curse of dimensionality, in this work, we propose to approximate the multi-dimensional integral over group operations in the group convolution by MC integration.
#### Ii-B1 Monte Carlo integration
MC integration is known to tackle high-dimensional integration with robust convergence independent of the number of dimensions [45]. We consider for brevity only the standard MC variant, being aware that more efficient schemes such as Quasi-MC have the potential to substantially increase the performance further [5][30].
For multi-dimensional Monte Carlo integral, we have the theorem [36][25][23] as follows,
**Theorem II.1**.: _Let \(\mu_{p}\) be a probabilistic measure on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\), i.e., \(\mu_{p}(\mathbb{R}^{d})=1\), and \(\mathcal{B}(\mathbb{R}^{d})\) denotes the Borel algebra on \(\mathbb{R}^{d}\) with \(d\) the number of dimensions. For \(f\in L^{2}(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}),\mu_{p})\), we define_
\[I(f)=\int_{\mathbb{R}^{d}}f(x)d\mu_{p}(x), \tag{9}\]
_and_
\[Q_{N}(f)=\frac{1}{N}\sum_{i=1}^{N}f(\xi_{i}), \tag{10}\]
_where \((\xi_{i})_{i\in N}\) is an i.i.d sequence of random variables with distributions \(\mu_{p}\). We have \(Q_{N}(f)\to I(f)\) when \(N\to+\infty\). For all \(N\in\mathbb{N}\), there is_
\[(\mathbb{E}\|I(f)-Q_{N}(f)\|^{2})^{1/2}=\sigma(f)/\sqrt{N}, \tag{11}\]
_where \(\sigma^{2}(f)=I(f^{2})-(I(f))^{2}\), and \(\|\cdot\|\) is the \(l^{2}\) norm._
A finite non-zero Haar measure in (8) can be normalized to get a corresponding probabilistic measure \(\mu_{p}\). Therefore, it is theoretically justified to apply MC sampling for the discrete implementation of G-CNN.
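A toy numerical check of Theorem II.1 (the integrand and sample sizes are chosen only for illustration) shows the dimension-independent \(O(1/\sqrt{N})\) behavior:

```python
import numpy as np
from math import erf, sqrt, pi

# MC estimates of I(f) = \int_{[0,1)^2} cos^2(pi x) exp(-y^2) dx dy,
# whose exact value is (1/2) * (sqrt(pi)/2) * erf(1).
true_I = 0.5 * (sqrt(pi) / 2.0) * erf(1.0)
rng = np.random.default_rng(0)
for N in (100, 10_000, 1_000_000):
    xi = rng.random((N, 2))          # i.i.d. samples from mu_p = U([0,1)^2)
    Q_N = (np.cos(pi * xi[:, 0]) ** 2 * np.exp(-xi[:, 1] ** 2)).mean()
    print(f"N={N:>9}  |I(f) - Q_N(f)| = {abs(true_I - Q_N):.2e}")
```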
#### Ii-B2 Discrete implementation of G-CNN with MC integration
In the discrete implementation, we stochastically sample the group operations including scaling, rotation, and shear transform. This approach allows a more flexible choice of the number of used transformations and decouples the relationship between the number of output channels and the number of categories of considered transformations.
Specifically, when we consider a filter \(W=w\cdot\psi\) with a fixed base filter \(\psi\) and \(w\) the trainable scalar weight, a continuous CNN layer can be written as
\[\begin{array}{l}f_{c_{o}}^{(l+1)}(x)=\sum_{c_{i}}w_{c_{o},c_{i}}^{(l)}(\psi *f_{c_{i}}^{(l)})(x)\\ =\sum_{c_{i}}\int_{\mathbb{R}^{2}}w_{c_{o},c_{i}}^{(l)}\psi(u-x)f_{c_{i}}^{(l) }(u)du\end{array} \tag{12}\]
A corresponding discrete implementation of the convolutional layer1 of \(l\)-th layer is as below
Footnote 1: It should be noted that in this paper, for simplicity, we omit point-wise nonlinearity functions, constant scalar coefficients, and normalization layers in neural networks, which do not affect the group equivariance [24].
\[f_{c_{o}}^{(l+1)}(x)=\sum_{c_{i}}\sum_{u}w_{c_{o},c_{i}}^{(l)}\psi(u-x)f_{c_{i }}^{(l)}(u) \tag{13}\]
where \(x,u\in\mathbb{R}^{2}\), \(\psi(\cdot)\) denotes the spatial convolutional filter function with a domain of translation group \(\mathbb{R}^{2}\), \(c_{i}\in[1,C_{l}]\) and \(c_{o}\in[1,C_{l+1}]\). \(f_{c_{i}}^{(l)}(x)\) is the feature map of the \(l\)-th layer, and \(w_{c_{o},c_{i}}^{(l)}\) is the filter weight of the \(l\)-th layer with output channel \(c_{o}\) and input channel \(c_{i}\).
A continuous affine group equivariant CNN can be written as
\[\begin{array}{l}f_{c_{o}}^{(l+1)}(g)=\sum_{c_{i}}w_{c_{o},c_{i}}^{(l)}(\psi *f_{c_{i}}^{(l)})(g)\\ =\sum_{c_{i}}\int_{G}w_{c_{o},c_{i}}^{(l)}\psi(g^{-1}\bullet g^{\prime})f_{c_{ i}}^{(l)}(g^{\prime})d\mu(g^{\prime})\end{array} \tag{14}\]
For simplicity, in the following part of this paper, we denote \(f(x)\) a function with domain on \(\mathbb{R}^{2}\), and we denote the corresponding function with domain on group \(G\) as \(f(g)=f(x,a)\) with \(x\in\mathbb{R}^{2}\) the spatial position, and \(a\in\mathbb{R}^{3}\) the transform parameter vector for affine transform group.
Letting \(g=(x,a)\) and \(g^{\prime}=(u,b)\), we can rewrite the Haar integration in the group convolution of the \(l\)-th layer as:

\[\begin{array}{l}f_{c_{o}}^{(l+1)}(x,a)=\sum_{c_{i}}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{2}}w_{c_{o},c_{i}}^{(l)}2^{-2\alpha_{a}}\\ \psi(-x+M(-a)u,-a+b)f_{c_{i}}^{(l)}(u,b)dudb\end{array} \tag{15}\]
where we have the transform parameter vectors \(a=[\alpha_{a},\theta_{a},s_{a}]\), and \(b=[\alpha_{b},\theta_{b},s_{b}]\).
A typical corresponding discrete G-CNN can be written as below:
\[\begin{array}{l}f_{c_{o}}^{(l+1)}(x,a)=\sum_{c_{i}}\sum_{b}\sum_{u}w_{c_{o},c_{i}}^{(l)}2^{-2\alpha_{a}}\\ \psi(-x+M(-a)u,-a+b)f_{c_{i}}^{(l)}(u,b)\end{array} \tag{16}\]
In particular, the sum over the parameter vector \(b\) is a three-layer nested sum corresponding to the nested integrals in the continuous domain, which, as mentioned in previous sections, leads to a heavy computational burden.
The Monte-Carlo integration considers \(a\) and \(b\) as random variables. Suppose their entries \(\alpha=\xi_{\alpha}\), \(\theta=\xi_{\theta}\), and \(s=\tan(\xi_{s})\), where \(\xi_{\alpha}\), \(\xi_{\theta}\) and \(\xi_{s}\) are uniformly distributed in the range of \([\eta_{\alpha}^{1},\eta_{\alpha}^{2})\), \([-\eta_{\theta},\eta_{\theta})\), and \([-\eta_{s},\eta_{s})\), respectively.
Suppose we draw \(N^{\prime}\) samples of \(a\), and \(N\) samples of \(b\), respectively. The nested sum over \(b\) collapses into a one-dimension sum over \(N\) samples for MCG-CNN (Monte Carlo Group-equivariant CNN):
\[\begin{array}{l}f_{c_{o}}^{(l+1)}(x,a_{n^{\prime}})=\sum_{c_{i}}\sum_{n}\sum_{u}w_{c_{o},c_{i}}^{(l)}2^{-2\alpha_{a_{n^{\prime}}}}\\ \psi(-x+M(-a_{n^{\prime}})u,-a_{n^{\prime}}+b_{n})f_{c_{i}}^{(l)}(u,b_{n})\end{array} \tag{17}\]
where \(n^{\prime}\in\{1,\ldots,N^{\prime}\}\), and \(n\in\{1,\ldots,N\}\).
#### Ii-B3 Adaptive aggregation of MC-augmented filters
The Monte-Carlo approximation of G-CNN allows a flexible choice of the number of sampling points \(N\) per trainable weight \(w^{(l)}\) independent of the number of dimensions. However, compared with standard CNN, the computational burden of MCG-CNN is still \(N\) times larger. To eliminate the difference in computational burden between MCG-CNN and standard CNN, we propose WMCG-CNN (Weighted Monte Carlo Group-equivariant CNN)2, which reduces the number of transformations per input feature map channel (also per trainable weight) \(N\) to \(1\) and uses filter-weight-wise sampling instead. Specifically, we establish a one-to-one relationship between \(b\), \(c_{o}\) and \(c_{i}\), as well as \(a\) and \(c_{o}\) by using \(c_{o}\) and \(c_{i}\) to index \(a\) and \(b\). Thus we introduce notation \(b_{c_{o},c_{i}}\) and \(a_{c_{o}}\).
Footnote 2: The word “Weighted” in WMCG-CNN is used to emphasize that the number of trainable filter weights becomes transformation-wise in WMCG-CNN, which is thus an adaptive aggregation of augmented filters.
In this way, we yield WMCG-CNN with the equation (17) simplified into:
\[\begin{array}{l}f_{c_{o}}^{(l+1)}(x,a_{c_{o}})=\sum_{c_{i}}\sum_{u}w_{c_{o},c _{i}}^{(l)}2^{-2\alpha_{a_{c_{o}}}}\\ \psi(-x+M(-a_{c_{o}})u,-a_{c_{o}}+b_{c_{o},c_{i}})f_{c_{i}}^{(l)}(u,b_{c_{o},c_{ i}}),\end{array} \tag{18}\]
WMCG-CNN allows us to significantly increase the number of used transformations without increasing the computational burden, which, as shown in the later experiments, helps WMCG-CNN achieve superior performance to traditional discrete G-CNN.
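A minimal PyTorch sketch of Eq. (18) is given below. It uses a single base filter \(\psi\), draws one affine transform per \((c_{o},c_{i})\) filter once at construction time, and absorbs the augmented filter bank into a standard convolution; the sampling ranges, names, and interface are illustrative, not the released implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_M(alpha, theta, s):
    """Batched M(a) = R(theta) A(alpha) S(s), cf. Eqs. (2)-(4)."""
    n = alpha.shape[0]
    S = torch.eye(2).repeat(n, 1, 1); S[:, 0, 1] = s
    A = torch.eye(2).repeat(n, 1, 1) * (2.0 ** alpha)[:, None, None]
    R = torch.stack([torch.stack([theta.cos(), theta.sin()], -1),
                     torch.stack([-theta.sin(), theta.cos()], -1)], -2)
    return R @ A @ S

class WMCGConv2d(nn.Module):
    """Each (c_o, c_i) filter is psi warped by one fixed MC-sampled affine
    transform and scaled by a trainable weight w_{c_o, c_i}."""
    def __init__(self, c_in, c_out, psi):
        super().__init__()
        k = psi.shape[-1]
        self.weight = nn.Parameter(torch.randn(c_out, c_in) / c_in ** 0.5)
        n = c_out * c_in
        alpha = torch.rand(n)                                    # scale 2**alpha in [1, 2)
        theta = (torch.rand(n) * 2 - 1) * math.pi                # rotation
        s = torch.tan((torch.rand(n) * 2 - 1) * 0.25 * math.pi)  # shear
        Amat = torch.zeros(n, 2, 3)                              # no translation part
        Amat[:, :, :2] = make_M(alpha, theta, s)
        grid = F.affine_grid(Amat, (n, 1, k, k), align_corners=False)
        bank = F.grid_sample(psi[None, None].expand(n, 1, k, k), grid,
                             align_corners=False).reshape(c_out, c_in, k, k)
        self.register_buffer("bank", bank)    # fixed MC-augmented filter bank

    def forward(self, x):
        return F.conv2d(x, self.weight[:, :, None, None] * self.bank,
                        padding="same")

layer = WMCGConv2d(8, 16, torch.randn(5, 5))   # random 5x5 stand-in for psi
y = layer(torch.randn(2, 8, 32, 32))           # -> shape (2, 16, 32, 32)
```

Note that `grid_sample` warps \(\psi\) by sampling at transformed coordinates, so the drawn parameters effectively parameterize the inverse transform; for this sketch, that only amounts to a reparameterization of the sampling ranges.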
However, given these changes, a question arises: under which circumstances can WMCG-CNN still approximate continuous G-CNN, as discrete G-CNN does? Below, we show that random initialization of the trainable weights allows WMCG-CNN to remain analogous to continuous G-CNN.
**Theorem II.2**.: _Let \(f^{(l)}\) be an input feature map of the \(l\)-th layer with \(C_{l}\) channels, and, for each channel, \(N_{H}\) spatial sampling points along the vertical direction and \(N_{W}\) along the horizontal direction. A WMCG-CNN layer is group equivariant when the width of the CNN \(C_{l}\to\infty\), \(N_{H}\to\infty\), \(N_{W}\to\infty\), and \(\|\int_{\mathbb{R}}wd\mu_{w}(w)\|<+\infty\), with \(\mu_{w}\) a probabilistic measure on \((\mathbb{R},\mathcal{B}(\mathbb{R}))\) for the filter weight \(w\), which is a random variable._
Proof.: To prove the theorem, we have two steps: first, we construct a weighted integration function \(I\) and prove it is group equivariant. Then, we show that equation (18) corresponds to the discrete form of \(I\).
1) Given \(g=(x,a_{c_{o}})\) and \(g^{\prime}=(u,b)\), we define the integration on \(\mathbb{R}\times G\) as
\[\begin{array}{l}I(x,a_{c_{o}})\\ =\int_{\mathbb{R}\times G}w\cdot\psi(g^{-1}\bullet g^{\prime})f^{(l)}(g^{ \prime})d\mu(g^{\prime})d\mu_{w}(w)\\ =\int_{\mathbb{R}}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{2}}w\psi(-x+M(-a_{c_{ o}})u,-a_{c_{o}}+b)\\ f^{(l)}(u,b)dudbdw\end{array} \tag{19}\]
Since \(\|\int_{\mathbb{R}}wd\mu_{w}(w)\|<+\infty\), we have the constant \(C=\int_{\mathbb{R}}w\,d\mu_{w}(w)\). Thus
\[\begin{array}{l}I(x,a_{c_{o}})=C\cdot\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{ 2}}\psi(-x+M(-a_{c_{o}})u,-a_{c_{o}}+b)\\ f^{(l)}(u,b)dudb\end{array} \tag{20}\]
which is group equivariant.
2) Let \(q(x,a_{c_{o}},b)=\int_{\mathbb{R}^{2}}\psi(-x+M(-a_{c_{o}})u,-a_{c_{o}}+b)f^{( l)}(u,b)du\), so we have
\[I(x,a_{c_{o}})=\int_{\mathbb{R}}\int_{\mathbb{R}^{3}}wq(x,a_{c_{o}},b)dbdw \tag{21}\]
Now, we consider the transition from continuous to discrete formulations. Both \(w\) and \(b\) are independently randomly sampled, with the samples indexed by \(c_{i}\). According to Theorem II.1, we have
\[\begin{array}{l}I(x,a_{c_{o}})=\lim_{C_{l}\rightarrow\infty}\frac{1}{C_{l}} \sum_{c_{i}}w^{(l)}_{c_{o},c_{i}}q(x,a_{c_{o}},b_{c_{o},c_{i}})\end{array} \tag{22}\]
Since \(u\) is sampled based on the trapezoidal rule, we have
\[\begin{array}{l}q(x,a_{c_{o}},b_{c_{o},c_{i}})\\ =\lim_{N_{H}\rightarrow+\infty}\lim_{N_{W}\rightarrow+\infty}\frac{1}{N_{H}N_{W}}\sum_{u}2^{-2\alpha_{a_{c_{o}}}}\cdot\\ \psi(-x+M(-a_{c_{o}})u,-a_{c_{o}}+b_{c_{o},c_{i}})f^{(l)}_{c_{i}}(u,b_{c_{o},c_{i}}),\end{array} \tag{23}\]
Meanwhile, we rewrite the corresponding convolution part of WMCG-CNN equation (18) as
\[\begin{array}{l}f^{(l+1)}_{c_{o}}(x,a_{c_{o}})=\frac{1}{C_{l}N_{H}N_{W}}\sum _{c_{i}}\sum_{u}w^{(l)}_{c_{o},c_{i}}2^{-2\alpha_{ac_{o}}}\cdot\\ \psi(-x+M(-a_{c_{o}})u,-a_{c_{o}}+b_{c_{o},c_{i}})f^{(l)}_{c_{i}}(u,b_{c_{o},c _{i}}),\end{array} \tag{24}\]
where \(c_{i}\in\{1,2,\ldots,C_{l}\}\), \(u=(u_{1},u_{2})\) with \(u_{1}\in\{1,2,\ldots,N_{H}\}\) and \(u_{2}\in\{1,2,\ldots,N_{W}\}\). Here we include coefficient \(\frac{1}{C_{l}N_{H}N_{W}}\) so that \(f^{(l+1)}_{c_{o}}\) is the average of the samples.
Therefore, by combining (22) and (23), we have
\[\begin{array}{l}I(x,a_{c_{o}})\\ =\lim_{C_{l}\rightarrow\infty}\lim_{N_{H}\rightarrow+\infty}\lim_{N_{W} \rightarrow+\infty}f^{(l+1)}_{c_{o}}(x,a_{c_{o}})\end{array} \tag{25}\]
The proof is completed.
As we know, random initialization of trainable weights is a common strategy adopted in most existing state-of-the-art deep learning methods. Theorem II.2 proves that the random weight initialization strategy together with the MC-augmented filters can raise the CNN to a good starting point before training with an optimization algorithm, which therefore makes it easier for the network to find the optimal solution. This starting point is a network that approximately satisfies convolutional-layer-wise group equivariance. Obviously, a necessary condition for an optimal solution is that approximate group equivariance holds at least at the level of the entire neural network, rather than merely convolutional-layer-wise.
From Theorem II.1, we know that the convergence speed of Monte Carlo integration is slow. When the number of samples is small, the variance may not be satisfactory. However, with the weights \(w\) as learnable parameters and the samples of transformations fixed, the neural network can learn an optimal weight distribution to improve the group equivariance, which will be shown in the later experiments (Fig. 2). This sampling mechanism is thereby similar to that of importance sampling [16]. The difference is that the weight distribution in WMCG-CNN is not manually designed but is instead learned by iterative data-driven optimization algorithms for neural networks.
#### Ii-B4 Filter decomposition and the relationship to traditional CNN filters
In the previous section, we considered only one basis filter function \(\psi\). To increase the expressiveness of networks, we adopt the filter decomposition approach and build convolution filters as a weighted sum of multiple basis functions. Specifically, we have \(W^{(l)}_{c_{o},c_{i}}(x,a)=\sum_{j}w^{(l)}_{c_{o},c_{i},j}\tilde{\psi}_{j}(x,a)\) with \(\tilde{\psi}_{j}(x,a)\) an orthogonal basis function with \(x\in\mathbb{R}^{2}\) and \(a\in\mathbb{R}^{3}\) the transform parameter vector, \(w^{(l)}_{c_{o},c_{i},j}\) the trainable weights, \(j\in[1,K]\), and \(K\) the chosen number of basis functions. According to equation (18), the proposed WMCG-CNN can then be written, in a form analogous to the standard CNN of equation (13), as:
\[\begin{array}{l}f^{(l+1)}_{c_{o}}(x,a_{c_{o}})=\sum_{c_{i}}\sum_{u}2^{-2\alpha_{a_{c_{o}}}}W^{(l)}_{c_{o},c_{i}}(\\ -x+M(-a_{c_{o}})u,-a_{c_{o}}+b_{c_{o},c_{i}})f^{(l)}_{c_{i}}(u,b_{c_{o},c_{i}}),\end{array} \tag{26}\]
In the practical discrete implementation, we adopt the Fourier-Bessel (FB) basis [37]. As in the previous section, the scaling, rotation, and shear transformations are used to augment the filters. Supposing each basis filter is a matrix of size \(k\times k\), the FB basis provides at most \(k^{2}-1\) non-constant basis functions plus one constant scalar basis function.
It should be noted that the choice of basis for filter decomposition can also be flexible. When using the basis consisting of translation-augmented discrete Dirac delta functions, the proposed methods fall back into standard CNN filters.
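In code, the composition of filters from a fixed (possibly MC-augmented) basis reduces to a single tensor contraction; a small PyTorch sketch with placeholder shapes:

```python
import torch

# Compose each (c_o, c_i) filter from K fixed basis filters; with a basis
# of translated Dirac deltas this reduces to a standard CNN filter.
c_out, c_in, K, k = 64, 32, 9, 5
basis = torch.randn(c_out, c_in, K, k, k)      # placeholder for the augmented FB basis
w = torch.randn(c_out, c_in, K)                # trainable weights w^(l)_{c_o, c_i, j}
W = torch.einsum("oik,oikxy->oixy", w, basis)  # W^(l)_{c_o,c_i} = sum_j w_j psi_j
print(W.shape)                                 # torch.Size([64, 32, 5, 5])
```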
### _Integrating WMCG-CNN into the existing state-of-the-art CNN architectures_
We see that when \(\tilde{\psi}_{j}\) degenerates to a scalar, i.e., a \(1\times 1\) base filter, the convolution is obviously exactly group equivariant, while on the other hand, the non-scalar filter \(\tilde{\psi}_{j}\) requires a huge number of sampling points to approximate the continuous G-CNN. To leverage the advantage of the \(1\times 1\) base filter, one can add \(1\times 1\)-filter-based convolution layers as a secondary adaptive aggregation of features from the output channels of WMCG-CNN. By combining the \(1\times 1\) layer with the \(k\times k\) convolution layer into a single unit or block, the total number of considered transformations is increased from \(C_{l}\) to \(C_{l}\cdot C_{l+1}\) (i.e., the number of all the \(k\times k\) filters used in the \(l\)-th layer) with a relatively small increase in parameter number. In addition, the \(1\times 1\) CNN layer also helps enrich the design space for WMCG-CNN, where the use of the small \(1\times 1\) kernel helps achieve high parameter efficiency given the same level of expressiveness and the same number of parameters [18].
Interestingly, the secondary aggregation with cascaded \(1\times 1\) convolutional layer is intrinsically similar to the bottleneck architecture that is adopted in all the state-of-the-art CNNs derived from ResNet [18]. The only difference is that the bottleneck architecture uses one extra \(1\times 1\) convolution layer before the \(k\times k\) convolution layer.
Apart from \(1\times 1\) layers, we also note that the channel grouping convolution technique3 proposed in ResNeXt [46] is also a helpful technique for improving CNN's performance.
Footnote 3: It should be noted that here the channel group is a concept that is different from the transformation group. The channel grouping convolution technique divides the input feature map channels into multiple channel groups of the same width to perform convolution operations separately.
Thanks to the flexibility of the proposed WMCG-CNN, we can easily combine these techniques with the WMCG-CNN. An example is shown in Fig. 1. Similar blocks but with different filter sizes will be used in the later experiments for image denoising.
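A PyTorch sketch of such a block is shown below; passing the WMCG layer sketched earlier as `spatial` gives the WMCG block, while passing a plain grouped `nn.Conv2d` recovers a ResNeXt-style bottleneck (the channel widths are illustrative):

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 reduction -> k x k spatial layer -> 1x1 secondary aggregation,
    with a residual connection, cf. Fig. 1."""
    def __init__(self, c, mid, spatial):
        super().__init__()
        self.reduce = nn.Sequential(nn.Conv2d(c, mid, 1),
                                    nn.BatchNorm2d(mid), nn.ReLU())
        self.spatial = nn.Sequential(spatial, nn.BatchNorm2d(mid), nn.ReLU())
        self.aggregate = nn.Sequential(nn.Conv2d(mid, c, 1), nn.BatchNorm2d(c))
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.aggregate(self.spatial(self.reduce(x)))
        return self.relu(out + x)

# ResNeXt-style instantiation with channel grouping (32 channel groups):
block = Bottleneck(256, 128, nn.Conv2d(128, 128, 3, padding=1, groups=32))
y = block(torch.randn(1, 256, 56, 56))
```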
## III Experiments
We test WMCG-CNN on both image classification and image denoising tasks. The ablation experiments are also conducted in image classification tasks.
### _Performance metrics_
We adopt the following performance metrics: the number of trainable parameters in million (\(10^{6}\)), Params(M); the number of Multiply-Accumulate Operations in giga (\(10^{9}\)), MACs(G); the prediction error in percentage, Error(%); mean prediction error on corrupted validation image datasets in percentage, mCE(%); top 1 accuracy in percentage, top-1 acc.(%); top 5 accuracy in percentage, top-5 acc.(%); peak signal-to-noise ratio in dB, PSNR(dB).
In addition, for the section of the ablation experiments, we define mean group-equivariant error (mGE) according to equation (6):
\[mGE=\mathbb{E}(\|\phi(\mathbb{T}_{g}(f))-\mathbb{T}_{g}^{\prime}(\phi(f))\|) \tag{27}\]
where for each input image, a random affine transformation \(g\in G\) is selected with the shear range \([-0.0625\pi,0.0625\pi)\), the scaling range \([1.0,1.1)\) and rotation angle range \([-0.125\pi,0.125\pi)\).
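Empirically, Eq. (27) can be estimated by warping the input and the output with the same sampled transform; the sketch below assumes a spatial-size-preserving map `phi` and a hypothetical helper `sample_M()` drawing \(M(a)\) from the ranges above (e.g., via the `make_M` sketch earlier):

```python
import torch
import torch.nn.functional as F

def warp(x, M):
    """Apply a 2x2 transform M to a batch of feature maps via grid_sample."""
    A = torch.zeros(1, 2, 3)
    A[0, :, :2] = M
    grid = F.affine_grid(A.expand(x.shape[0], -1, -1), list(x.shape),
                         align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def mGE(phi, images, sample_M, n_trials=100):
    """Empirical estimate of Eq. (27) for one layer or network phi."""
    errs = []
    with torch.no_grad():
        for _ in range(n_trials):
            M = sample_M()
            diff = phi(warp(images, M)) - warp(phi(images), M)
            errs.append(diff.flatten(1).norm(dim=1).mean())
    return torch.stack(errs).mean()
```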
### _Ablation experiments_
For ablation experiments, we consider a subset of the ImageNet1k dataset. ImageNet1k has \(1.28\) million color images for \(1{,}000\) different classes from WordNet. The validation dataset consists of \(50{,}000\) images. For quick experiments, we extract the first \(40\) classes for ablation experiments (i.e., from class \(n01440764\) to class \(n01677366\)), and thus we denote the corresponding datasets as ImageNet40. We scale all the images to \(224\times 224\) resolution and normalize images in a classic way. The prediction Error (%) is used to measure the classification performance.
We use ResNet18, ResNet50, and ResNeXt50 [46] as the baseline networks. We follow the state-of-the-art robust training methods as in [22]. The neural networks are trained for \(90\) epochs with an initial learning rate of \(0.01\) following a cosine decay schedule. The Pixmix augmentation technique is used with its default setting as in [22]. Pixmix uses affine transformations including translation, rotation, and shear transform as well as other augmentation methods to generate augmented clean images. As for WMCG-CNN, we replace all the hidden non-\(1\times 1\) CNN layers with the proposed WMCG-CNN layers. By default, the size of the FB basis is \(5\times 5\), the number of basis functions per filter is \(9\) (Bessel filters of order \(0\) to \(8\)), the scaling range is \([1.0,2.0)\), the rotation angle range \([-2\pi,2\pi)\), and the shear transform angle range \([-0.25\pi,0.25\pi)\). For simplicity, we name each version of the tested networks with a series of suffixes. Specifically, "k\(n\)" means the filter size is \(n\times n\). In the experiments with shear transforms, we use the suffix "shear-\(n_{s}\pi\)" to denote the shear transform angle range \([-n_{s}\pi,n_{s}\pi)\). We vary \(n_{s}\) from \(0.00\) to \(0.40\) to test the effect of the shear transform. In particular, \(n_{s}=0.00\) means that no shear transform is applied. In addition, with ResNet18 as a baseline network, we also tested the conventional scale-equivariant CNN and the proposed MC scale-equivariant CNN. The suffix "scale-\(n\)" means \(n\) scaling transformations \(\alpha=\{0,1/n,2/n,\ldots,(n-1)/n,1\}\) are used. The suffix "MC-scale-\(n\)" means \(n\) MC-augmented scaling transformations are used. The suffix "MC-affine-\(n\)" means \(n\) affine (scaling-rotation-shear) transformations are used. For the implementation of "scale-\(n\)", "MC-scale-\(n\)", and "MC-affine-\(n\)", we draw \(n\) samples of transformation for the input feature
map, and we only have \(1\) sample of transformation for the corresponding output feature map, to avoid the computational burden becoming too heavy. "width1/\(n\)" means that the total widths of output feature maps are reduced to \(1/n\) by decreasing the number of channels per transformation. "nb \(n\)" means \(n\) basis functions are used per filter. "nb1-\(n\)" means only the Bessel basis of order \(n\) is used.

Fig. 1: Integrating the proposed WMCG-CNN into the classic bottleneck architecture. (a) The example bottleneck block with group convolution using \(3\times 3\) filters; (b) An example of filter composition with MC-augmented basis.
Figure 2(a) shows the mGE results for the first hidden CNN layer of ResNet18 and ResNet18-k5-WMCG-shear-0.25\(\pi\) during the first 10 epochs of training on the ImageNet dataset. We see that, compared with ResNet18, the WMCG network starts from a lower mGE and continues to converge smoothly. Figure 2(b) shows that the distribution of the learned weights is centered around zero.
Table I shows the results of the ablation experiments on ImageNet40, where the results with respect to Params(M) and MACs(G) are also displayed. We see that a shear transform with a suitable shear angle range helps increase WMCG-CNN's performance. In all the following experiments, we adopt \(n_{s}=0.25\) by default if not explicitly stated.
Regarding the results with the different versions of ResNet18-k5-WMCG-nb1, we see that the choice of FB basis affects the prediction performance significantly. Low-frequency bases, i.e., Bessel bases of low order, are shown to be more important than high-frequency ones. Therefore, when selecting a fixed number of bases, the low-order Bessel bases should be included first.
The conventional scale-equivariant CNN architecture ResNet18-k5-scale-4 achieves a decent prediction error, but its computational burden is extremely high. When we try to reduce the computational burden by decreasing the width of the network, obtaining ResNet18-k5-scale-4-width1/4, the number of trainable parameters is reduced significantly at the same time, which leads to poorer prediction performance. The MCG-CNN also has a heavy computational burden, and it is superior to its corresponding G-CNN when a larger number of transformations and more transformation types are used (such as ResNet18-k5-MC-affine-16-width1/16).
Among the tested ResNet baseline architectures, ResNet18 gives the lowest mean error, which indicates that the deeper models such as ResNet50 and ResNeXt50 suffer from overfitting when the number of classes is reduced from 1k to 40. However, WMCG-CNN reduces the overfitting consistently for all the considered baseline models, and the WMCG-CNN versions of ResNet18 yield the best classification performance. Overall, the results on ImageNet40 demonstrate that WMCG-CNN is superior to standard CNN in sample efficiency, helps avoid overfitting, and enables quicker convergence.
### _Experiments on multiple image classification benchmark datasets_
In this section, we test the proposed method on the Cifar-10 [26] and ImageNet1k datasets. The Cifar-10 dataset consists of color images of size \(32\times 32\times 3\) for 10 classes, with \(50{,}000\) training images and \(10{,}000\) testing images. In addition, we use the Cifar10-C and ImageNet1k-C [20] validation datasets to test the networks' robustness and generalizability against image corruptions; 15 diverse corruption types [20] are included in both validation datasets. Two kinds of training routines are used: robust training strategies with affine transform augmentation included, and the state-of-the-art fully-training strategy for comparison with ConvNeXt [32].
Fig. 2: (a) The mGE of the first hidden CNN layer of ResNet18 and ResNet18-k5-WMCG-shear-0.25\(\pi\) during the first 10 epochs of training on the ImageNet dataset. (b) The histogram of the learned weights for the FB basis of order \(0\) in the first hidden CNN layer of ResNet18-k5-WMCG-shear-0.25\(\pi\).
For experiments on the Cifar10 [26] and Cifar10-C [20] datasets, we use ResNeXt29 (\(32\times 4\)) [46] as the baseline network and the Augmix-based [21] robust training strategy. We denote by "ResNeXt29-k3-WMCG-nb9" the network created by replacing the \(3\times 3\) convolution layers with WMCG-CNN layers of \(3\times 3\) FB basis size, with each convolutional filter using \(9\) bases. We denote by "ResNeXt29-k3-WMCG-nb1-r2" and "ResNeXt29-k5-WMCG-nb1-r2" the networks that have similar WMCG-CNN layers but use only one FB basis of size \(3\times 3\) and \(5\times 5\), respectively; in both cases the FB basis is of order \(2\). Empirically, only for the experiments on the Cifar10 dataset do we use the scaling range \([1.0,1.5)\); in all other experiments we keep \([1.0,2.0)\). All the CNNs are trained with the same training strategy as in [21]. Specifically, all the networks are trained using an initial learning rate of \(0.1\) and a cosine learning rate schedule. The optimizer is stochastic gradient descent with Nesterov momentum and a weight decay of \(0.0005\). The input images are first pre-augmented with standard random left-right flipping and cropping, and then the Augmix method [21] is applied with its default settings. Augmix uses affine transformations including translation, rotation, and shear, as well as other augmentation methods, to generate augmented clean images.
As for experiments on the ImageNet1k [11] and ImageNet1k-C [20] datasets, we use ResNeXt50 [46] as the baseline network for Pixmix-based [22] robust training. We denote by "ResNeXt50-k5-WMCG-nb9" the network created by replacing the \(3\times 3\) convolution layers with WMCG-CNN layers of \(5\times 5\) FB basis size, with each convolutional filter using \(9\) bases. The neural networks are trained with the same strategy as in Pixmix [22]. All the neural networks are trained from scratch to compare the sample efficiency and convergence speed of the different networks.
In addition, we test our method with the recently proposed ConvNeXt model [32] on the ImageNet40 and ImageNet1k datasets. We use ConvNeXt-S as the baseline network and denote by "ConvNeXt-S-k7-WMCG-nb49" the network created by replacing all the \(7\times 7\) convolution layers with WMCG-CNN layers of \(7\times 7\) FB basis size, with each convolutional filter using \(49\) bases. The training on both datasets follows the procedure described in [32], where the neural networks are trained for 300 epochs using an AdamW optimizer. As in [32], the top-1 and top-5 accuracies are considered.
Table II shows all the results of our image classification experiments. We see that under the robust training strategies, the proposed WMCG-CNNs reduce the classification errors on both clean and corrupted datasets while using the same or a smaller number of parameters. The augmented FB basis of order \(2\) alone achieves the highest robustness with fewer parameters, partly because the order-2 FB basis used in "ResNeXt29-k3-WMCG-nb1-r2" and "ResNeXt29-k5-WMCG-nb1-r2" captures more low-frequency signals than the higher-order FB bases included in "ResNeXt29-k3-WMCG-nb9". It is also noted that a large filter size can help increase the classification precision and robustness of neural networks. As for the experiment with ConvNeXt, WMCG-CNN improves ConvNeXt-S on both the ImageNet40 and ImageNet1k datasets without increasing the number of parameters or the computational burden. The shear transform is likewise helpful for boosting performance under the \(300\)-epoch fully-training routine.
### _Experiments on image denoising_
Although it has been shown that in certain cases with known noise levels traditional algorithms can surpass CNNs in denoising quality [50][49], their processing speed is much slower than that of CNNs. Moreover, blind denoising with unknown noise levels is the more practical scenario in applications. Thus, in this paper we only test the CNNs' performance on blind denoising tasks.
The experiments are divided into three parts: grayscale synthetic additive Gaussian noisy image denoising, color synthetic additive Gaussian noisy image denoising, and real-world color noisy image denoising (where the image noise is generated in the camera imaging process). For grayscale image denoising, as in [47], the same \(400\) images of size \(180\times 180\) are used for training. The training images are corrupted by synthetic additive Gaussian noise with noise level (i.e., the standard deviation of the noise) \(\sigma\in[0,55]\). \(128\times 3{,}000\) patches of size \(50\times 50\) are cropped to train the CNN model. For color synthetic noisy image denoising, we follow [43], where the same \(400\) color images are augmented with bicubic downscaling, counterclockwise rotation, and horizontal flipping. As for real-world noisy images, as in [43], the training dataset consists of \(100\) JPEG images of size \(512\times 512\) collected from five digital cameras (Canon 80D, Nikon D800, Canon 600D, Sony A7 II, and Canon 5D Mark II) with ISO values of 800, \(1{,}600\), \(3{,}200\), \(6{,}400\), \(12{,}800\), and \(25{,}600\).
Five public test datasets are considered: the grayscale image datasets Set12 [31] and BSD68 [31], the color image datasets CBSD68 [31] and Kodak24 [12], and the public real noisy consumer camera image dataset CC [35]. The CC dataset consists of 15 images captured by three different digital cameras (Canon 5D Mark III, Nikon D600, and Nikon D800) with ISO values of \(1{,}600\), \(3{,}200\), or \(6{,}400\). The training images are cropped into \(41\times 41\) patches for training the networks.
We consider one of the most famous denoising CNNs, DnCNN-B [4][47], as the baseline network for the experiments on grayscale image denoising. We build a new denoising
network called DnNeXt-B by replacing every plain hidden CNN layer in DnCNN-B with the bottleneck block shown in Fig. 1(b). We further denote by "DnNeXt-B-k5-WMCG-nb9" the network created by replacing the hidden \(3\times 3\) convolution layers in DnNeXt-B with WMCG-CNN layers of \(5\times 5\) FB basis size, with each convolutional filter decomposed over \(9\) bases. Likewise, "DnNeXt-B-k7-WMCG-nb9" is the corresponding version with an FB basis of size \(7\times 7\). To emphasize the efficiency of our approach, we also include another wavelet-based denoising CNN, MWDCNN [44], for comparison. We test all the CNNs on the standard grayscale image datasets Set12 [31] and BSD68 [31]. DnCNN, DnNeXt, and DnNeXt-WMCG are trained with the same training strategy as in [47]. We use the SGD optimizer with a weight decay of \(0.0001\) and a momentum of \(0.9\). The networks are trained for 50 epochs with a batch size of 128. During the 50 epochs of training, the learning rate decreases exponentially from \(1.0\times 10^{-1}\) to \(1.0\times 10^{-4}\).
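For concreteness, one way to realize this schedule is to step the learning rate once per epoch with a decay factor solved from the two endpoints; the sketch below makes this assumption explicit (the placeholder model stands in for DnCNN-B/DnNeXt-B and is not the paper's code).

```python
import torch

model = torch.nn.Conv2d(1, 64, 3, padding=1)   # placeholder for DnNeXt-B
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1,
                            momentum=0.9, weight_decay=1e-4)
# 49 decay steps take the learning rate from 1e-1 to 1e-4: 0.1 * g**49 = 1e-4.
g = (1e-4 / 1e-1) ** (1.0 / 49)                # ~0.8685 per epoch
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=g)

for epoch in range(50):
    # ... one training epoch with batch size 128 would run here ...
    scheduler.step()
```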
Table III shows the denoising results in terms of peak signal-to-noise ratio (PSNR) on images corrupted by simulated white Gaussian noise of different noise levels. The number of trainable parameters and MACs are also displayed. In particular, for all the MACs calculations in the image-denoising experiments, we assume an input patch size of \(3\times 32\times 32\) for a fair comparison of the computational burden, which differs from the actual patch size. We find that the proposed DnNeXt and DnNeXt-WMCG outperform DnCNN and MWDCNN with a much smaller number of learnable parameters. In addition, the proposed DnNeXt-WMCG achieves the highest average PSNR of all CNNs and yields especially higher PSNR at high noise levels. A larger FB basis helps gain a higher PSNR score at high noise levels, yet may cause poorer performance at low noise levels.
We consider DudeNet [43], an upgraded version of DnCNN, as the baseline CNN for the synthetic color noisy image denoising and real camera image denoising experiments. We build a new network, DudeNeXt, by replacing every plain hidden \(3\times 3\) CNN layer in DudeNet with the bottleneck block shown in Fig. 1(b). We further denote by "DudeNeXt-k5-WMCG-nb9" the network created by replacing the hidden \(3\times 3\) convolution layers in DudeNeXt with WMCG-CNN layers of \(5\times 5\) FB basis size, with each convolutional filter decomposed over \(9\) bases. We follow the same training strategy as in [43]. We use the Adam optimizer with an initial learning rate of \(1.0\times 10^{-3}\) and a batch size of 128. The networks are trained for 70 epochs, during which the learning rate decreases exponentially from \(1.0\times 10^{-3}\) to \(1.0\times 10^{-5}\). We compare our methods with two conventional denoising algorithms, CBM3D [10] and TID [33], as well as three deep learning methods: DnCNN [47], DudeNet [43], and MWDCNN [44].
Table IV shows the average PSNR results on the public CBSD68 and Kodak24 color image datasets, and Table V shows the PSNR results on the public CC dataset. In both the synthetic and real-world color image denoising experiments, the proposed networks generally achieve superior performance with respect to the average PSNR.
### _Analysis and discussion_
The ablation experiments on ImageNet40 demonstrate the sample efficiency of WMCG-CNN for all the tested baseline architectures, including ResNet18, ResNet50, and ResNeXt50. We note that the proposed method yields a larger improvement in error for ResNet18 and ResNet50 than for ResNeXt50. This is probably because a larger proportion of the learnable parameters in ResNeXt50 lies in \(1\times 1\) Conv layers, which, as shown in the results, causes heavy overfitting on the small ImageNet40 dataset.
The comparison experiments with discrete G-CNN, MCG-CNN, and WMCG-CNN prove that the diversity of transformations is helpful for boosting performance. The introduction of MC sampling allows us to consider any mix of affine transforms. In the experiments on ImageNet40, we see that the additional use of a shear transform with a suitable shear range can consistently improve image classification. Meanwhile, a high degree of shear can harm performance: in a discrete implementation, the shear transform compresses information along a certain direction, which causes information loss.
The shear-transform-augmented convolutional filters can be considered an example of the classic continuous shear wavelet [1][17]. The shear wavelet can help achieve a highly sparse representation of multidimensional data [17], which explains the superior performance it brings to the proposed WMCG-CNN. In the future, we may exploit wavelet theory to improve our methods further.
We also note that in the field of MC integration and stochastic simulation, there are many advanced techniques such as quasi-MC sampling [5], Markov chain MC [9], and multi-level MC [15]. These methods could potentially improve both MCG-CNN and WMCG-CNN further, and we will study them in future work.
In this work, we do not compare with peer filter-decomposition-based G-CNNs proposed in other papers on multiple benchmark datasets. This is because, as far as we know, all the existing filter-decomposition-based G-CNNs are much slower than standard CNNs and require larger GPU memory storage. They are typically tested on small datasets such as MNIST [28]. Due to the high degree of parameter sharing and the large number of channels used, those G-CNNs can achieve good inference accuracy but are usually unsuitable and overly expensive for practical applications on natural image datasets.
The results of the experiments on robust image classification and image denoising show the generalizability of WMCG-CNN. On the Cifar10 and Cifar10-C datasets, we see that with the same filter size and number of trainable parameters, WMCG-CNN outperforms plain CNN in prediction performance on both clean and corrupted data. Enlarging the filter size further enhances the robustness of WMCG-CNN; this even allows networks with a much smaller number of trainable parameters to surpass the plain CNNs, which demonstrates WMCG-CNN's high parameter efficiency.
The proposed WMCG-CNN shows higher flexibility and controllability than conventional CNNs. The use of filter decomposition decouples the filter
size from the number of trainable parameters: for a given convolutional kernel, the corresponding number of trainable parameters can be as small as \(1\) or as large as any chosen integer. In addition, we can choose a custom-designed basis as preferred to control the performance of the network. For example, in the experiment on the Cifar10 dataset, we simply choose a single low-frequency FB basis of order 2 and still obtain a good result on Cifar10 with higher robustness.
## IV Conclusion
In this paper, we propose an efficient and flexible implementation of group-equivariant CNN based on filter-wise weighted Monte Carlo sampling, which allows a higher degree of transformation diversity for a performance boost. The proposed WMCG-CNN is shown to be an efficient generalization of standard CNN. The utility of the shear transform for tasks on natural images is demonstrated. The proposed WMCG-CNN shows superior efficiency on both image classification and image denoising tasks, and it can also be extended to other computer vision tasks such as image segmentation and image reconstruction.
## Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant no. 428149221, by Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR), Germany under grant no. 01ZZ2105A and no. 01KD2214, and by Fraunhofer Gesellschaft e.V. under grant no. 017-100240/B7-aneg.
|
2305.14376 | PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis | The human brain is the central hub of the neurobiological system, controlling behavior and cognition in complex ways. Recent advances in neuroscience and neuroimaging analysis have shown a growing interest in the interactions between brain regions of interest (ROIs) and their impact on neural development and disorder diagnosis. As a powerful deep model for analyzing graph-structured data, Graph Neural Networks (GNNs) have been applied for brain network analysis. However, training deep models requires large amounts of labeled data, which is often scarce in brain network datasets due to the complexities of data acquisition and sharing restrictions. To make the most out of available training data, we propose PTGB, a GNN pre-training framework that captures intrinsic brain network structures, regardless of clinical outcomes, and is easily adaptable to various downstream tasks. PTGB comprises two key components: (1) an unsupervised pre-training technique designed specifically for brain networks, which enables learning from large-scale datasets without task-specific labels; (2) a data-driven parcellation atlas mapping pipeline that facilitates knowledge transfer across datasets with different ROI systems. Extensive evaluations using various GNN models have demonstrated the robust and superior performance of PTGB compared to baseline methods. | Yi Yang, Hejie Cui, Carl Yang | 2023-05-20T21:07:47Z | http://arxiv.org/abs/2305.14376v1 | # PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis
###### Abstract
The human brain is the central hub of the neurobiological system, controlling behavior and cognition in complex ways. Recent advances in neuroscience and neuroimaging analysis have shown a growing interest in the interactions between brain regions of interest (ROIs) and their impact on neural development and disorder diagnosis. As a powerful deep model for analyzing graph-structured data, Graph Neural Networks (GNNs) have been applied for brain network analysis. However, training deep models requires large amounts of labeled data, which is often scarce in brain network datasets due to the complexities of data acquisition and sharing restrictions. To make the most out of available training data, we propose PTGB, a GNN pre-training framework that captures intrinsic brain network structures, regardless of clinical outcomes, and is easily adaptable to various downstream tasks. PTGB comprises two key components: (1) an unsupervised pre-training technique designed specifically for brain networks, which enables learning from large-scale datasets without task-specific labels; (2) a data-driven parcellation atlas mapping pipeline that facilitates knowledge transfer across datasets with different ROI systems. Extensive evaluations using various GNN models have demonstrated the robust and superior performance of PTGB compared to baseline methods.
## 1 Introduction
Brain network analysis has attracted considerable interest in neuroscience studies in recent years. A brain network is essentially a connected graph constructed from different raw imaging modalities such as Diffusion Tensor Imaging (DTI) and functional Magnetic Resonance Imaging (fMRI), where nodes correspond to the anatomical regions of interest (ROIs) of a predefined parcellation atlas, and connections are usually formed from the correlations among ROIs.
Effective brain network analysis plays a pivotal role in understanding the biological structures and functions of complex neural systems, which potentially helps the early diagnosis of neurological disorders and facilitates neuroscience research (Martensson et al., 2018; Yahata et al., 2016; Lindquist, 2008; Smith, 2012).
Graph Neural Networks (GNNs) have emerged as a powerful tool for analyzing graph-structured data, delivering impressive results on a wide range of network datasets, including social networks, recommender systems, knowledge graphs, protein and gene networks, and molecules, among others (Kipf and Welling, 2017; Hamilton et al., 2017; Schlichtkrull et al., 2018; Vashishth et al., 2020; Xu et al., 2019; Ying et al., 2018; Zhang et al., 2020; Liu et al., 2022; Xiong et al., 2020; Cui et al., 2022; Xu et al., 2022). These models have proven their ability to learn powerful representations and efficiently compute complex graph structures, making them well-suited for various downstream tasks. In the field of neuroscience, GNNs have been applied to brain network analysis, specifically for graph-level classification/regression (Ying et al., 2018; Xu et al., 2019; Errica et al., 2020; Luo et al., 2022; Dai et al., 2023; Xu et al., 2023a) and important vertex/edge identification (Ying et al., 2019; Luo et al., 2020; Vu and Thai, 2020; Yu et al., 2023; Kan et al., 2022c), towards tasks such as connectome-based disease prediction and multi-level neural pattern discovery. However, deep learning models, including GNNs, require large amounts of labeled data to achieve optimal performance (Hu et al., 2020; You et al., 2020; Zhu et al., 2021a). While neuroimaging datasets are available from national neuroimaging studies such as ABCD (Casey et al., 2018), ADNI (Hinrichs et al., 2009), and PPMI (Aleksovski et al., 2018), these datasets are still relatively small compared to graph datasets from other domains, such as datasets with 41K to 452K graphs on OGB (Hu et al., 2020) and datasets with thousands to millions of graphs on NetRepo (Rossi and Ahmed, 2016). The limited amount of data can result in overfitting when training deep models.
Transfer learning offers a solution to the challenge of limited data availability in training deep models. It allows a model pre-trained on large-scale source datasets to be adapted to smaller target datasets while maintaining robust performance. However, the success of transfer learning depends on the availability of similar supervision labels on the source and target dataset. This is not always feasible in large-scale public studies, particularly in the field of brain network analysis. Self-supervised pre-training has been shown to be effective in various domains, such as computer vision (He et al., 2020; Chen et al., 2020), natural language processing (Devlin et al., 2019; Yu et al., 2022), and graph mining (Sun et al., 2022). We aim to explore a self-supervised pre-training approach for GNNs on brain networks that is not restricted by task-specific supervision labels. Despite the promising potential, unique challenges still need to be addressed to achieve effective disease prediction. One of the major challenges is the inconsistent ROI parcellation systems in constructing different brain network datasets, which hinders the transferability of pre-trained models across datasets. The process of parcellating raw imaging data into brain networks is highly complex and usually done ad hoc by domain experts for each study, making it unrealistic to expect every institution to follow the same parcellation system. Although some institutions may release preconstructed brain network datasets (Di Martino et al., 2014), the requirement for universal adherence to a single parcellation system is infeasible.
To tackle the challenge of insufficient training data for GNNs in brain network analysis, we present **P**re-**T**raining **G**raph neural networks for **B**rain networks (PTGB), a fully unsupervised pre-training approach that captures shared structures across brain network datasets. PTGB adapts the data-efficient MAML (Finn et al., 2017) with a two-level contrastive learning strategy based on the naturally aligned node systems of brain networks across individuals. Additionally, to overcome the issue of diverse parcellation systems, we introduce a novel data-driven atlas mapping technique, which transforms the original features into low-dimensional representations in a uniform embedding space and aligns them using variance-based projection, incorporating regularizations that preserve spatial relationships, consider neural modules, and promote sparsity.
In summary, our contributions are three-folded:
* We present an unsupervised pre-training approach for GNNs on brain networks, addressing the issue of resource-limited training.
* We propose a two-level contrastive sampling strategy tailored for GNN pre-training on brain networks, combined with a data-driven brain atlas mapping strategy that employs customized regularizations and variance-based sorting to enhance cross-dataset learning.
* Our experiments against shallow and deep baselines demonstrate the effectiveness of our proposed
PTGB. Further, we provide an in-depth analysis to understand the influence of each component.
## 2 Related Work
GNNs for Brain Network Analysis. GNNs are highly effective for analyzing graph-structured data, and there have been some pioneering attempts to use them for predicting diseases by learning over brain networks. For example, BrainGNN (Li et al., 2021) proposes ROI-aware graph convolutional layers and ROI-selection pooling layers for predicting neurological biomarkers. BrainNetCNN (Kawahara et al., 2017) designs a CNN that includes edge-to-edge, edge-to-node, and node-to-graph convolutional filters, leveraging the topological locality of brain connectome structures. BrainNetTF (Kan et al., 2022) introduces a transformer architecture with an orthonormal clustering readout function that considers ROI similarity within functional modules. Additionally, various studies (Cui et al., 2022; Kan et al., 2022; Zhu et al., 2022; Cui et al., 2022; Yu et al., 2023) have shown that, when data is sufficient, GNNs can greatly improve performance in tasks such as disease prediction. However, in reality, the lack of training data is a common issue in neuroscience research, particularly for specific domains and clinical tasks (Xu et al., 2023). Despite this, there has been little research into how to effectively train GNNs for brain network analysis when data is limited.
Unsupervised Graph Representation Learning and GNN Pre-training. Unsupervised learning is a widely used technique for training complex models when labeled resources are limited. Recent advancements in contrastive learning (Chen et al., 2020; He et al., 2020; Yu et al., 2021; Zhu et al., 2022) have led to various techniques for graphs. For instance, GBT (Bielak et al., 2022) designs a Barlow Twins (Zbontar et al., 2021) loss function based on the empirical cross-correlation of node representations learned from two different views of the graph (Zhao et al., 2021). Similarly, GraphCL (You et al., 2020) compares graph-level representations obtained from two different augmentations of the same graph. DGI (Velickovic et al., 2019) contrasts graph and node representations learned from the original graph and its corruption.
To obtain strong models for particular downstream tasks, unsupervised training techniques can be used to pre-train a model, which is then fine-tuned on the downstream tasks to reduce the dependence on labeled training data. This approach has proven highly successful in computer vision (Cao et al., 2020; Grill et al., 2020), natural language processing (Devlin et al., 2019; Radford et al., 2018, 2021; Liang et al., 2020), and multi-modality (e.g., text-image pair) learning (Li et al., 2022; Yao et al., 2022). There are various strategies for pre-training GNNs as well. GPT-GNN (Hu et al., 2020) proposes graph-oriented pretext tasks, such as masked attribute and edge reconstruction. L2P-GNN (Lu et al., 2021) introduces dual adaptation by simultaneously optimizing the encoder on a node-level link prediction objective and a graph-level self-supervision task similar to DGI. Others, such as GMPT (Hou et al., 2022), adopt an inter-graph message-passing approach to obtain context-aware node embeddings and optimize the model concurrently under supervision and self-supervision. To the best of our knowledge, the effectiveness of both contrastive learning and pre-training has not been investigated in the context of the unique properties of brain networks.
## 3 Unsupervised Brain Network Pre-training
Problem Definition. The available training resources include a collection of brain network datasets \(\mathcal{S}=\{\mathcal{D}_{1},\mathcal{D}_{2},\cdots\mathcal{D}_{s}\}\), where each dataset contains a varying number of brain networks. We consider each brain network instance with \(M\) defined ROIs as an undirected weighted graph \(\mathcal{G}\) with \(M\) nodes. \(\mathcal{G}\) is represented by a node set \(\mathcal{V}=\{v_{m}\}_{m=1}^{M}\), an edge set \(\mathcal{E}=\mathcal{V}\times\mathcal{V}\), and a weighted adjacency matrix \(\mathbf{A}\in\mathbb{R}^{M\times M}\). We define a \(\theta\)-parameterized GNN model \(f(\cdot)\), and our goal is to propose a pre-training schema that can effectively learn an initialization \(\theta_{0}\) for \(f(\cdot)\) on a set of source datasets \(\mathcal{S}_{\text{source}}\subset\mathcal{S}\) via self-supervision and adapt \(f_{\theta_{0}}(\cdot)\) to a local optimum \(\theta^{*}\) on a target set \(\mathcal{S}_{\text{target}}\in\mathcal{S}\).
### GNN Pre-training for Brain Networks
The goal of pre-training a GNN model for brain networks is to learn an appropriate initialization that can easily be adapted to downstream task. Note that the concept of pre-training is distinct from transfer learning since the latter expects a similarity between the source and target data as well as their learning objectives (_e.g.,_ loss functions), while this is often lacking in brain network analysis due to absence of
sufficient ground-truth labels in large-scale studies, as well as inherent differences in brain network parcellation methods across datasets. Practically, a GNN model can be pre-trained either on a single task with a single source dataset or on a collection of tasks with multiple source datasets. The proposed PTGB framework adopts the latter option, since multi-task pre-training reduces the likelihood of the model being biased towards the knowledge of data from a single source, which could be particularly concerning if the source and target data share limited similarity, leading to poor downstream adaptation due to information loss during model transfer. However, a naive approach to multi-task pre-training would not suffice for learning a robust model initialization. Specifically, it presents two underlying risks: (1) the model may not perform consistently well on all tasks and may overfit to a particular task, which significantly undermines model generalizability; and (2) the process could be computationally inefficient with an increasing number of tasks, regardless of whether the model is optimized sequentially or simultaneously on all tasks (Yang et al., 2022).
To this end, we adopt the popular data-efficient training techniques presented in MAML (Finn et al., 2017) with the goal of ensuring consistent performance on all tasks as well as computational efficiency. The MAML technique is characterized by an inner-loop adaptation and an outer-loop update (Raghu et al., 2019). At each training iteration, each input dataset is partitioned into an inner-loop support set and an outer-loop query set. The model is first trained on the support set without explicitly updating the parameters; instead, the updates are temporarily stored as fast weights (Ba et al., 2016). These fast weights are then used to evaluate the query set and compute the actual gradients. This approach approximates higher-order derivatives (Tan and Lim, 2019) at each step, allowing the model to foresee its optimization trajectory a few steps ahead, which practically reduces the number of training iterations required to reach local optima. In our scenario, the joint optimization sums the loss over each brain network dataset, i.e., for \(n\) datasets with their respective temporary fast weights \(\{\theta_{i}^{\prime}\}_{i=1}^{n}\) and outer-loop queries \(\{\text{query}_{i}\}_{i=1}^{n}\), the step-wise update of the model parameters at time \(t\) is \(\theta^{t+1}=\theta^{t}-\alpha\nabla_{\theta^{t}}\sum_{i=1}^{n}\mathcal{L}_{\text{query}_{i}}f_{\theta_{i}^{\prime}}(\cdot)\). We summarize this process in Algorithm 1. In addition, we demonstrate the advantages of MAML-styled pre-training over vanilla multi-task pre-training as well as single-task pre-training through the experiments discussed in Section 4.1.
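As a concrete illustration of the inner-/outer-loop structure summarized in Algorithm 1, the following is a minimal second-order sketch; the interface names (`tasks`, `loss_fn`) and the use of a single inner step are our assumptions and may differ from the actual implementation.

```python
import torch
from torch.func import functional_call

def maml_outer_step(model, tasks, loss_fn, inner_lr, outer_opt):
    """One outer update over n source datasets (a sketch of Algorithm 1).

    tasks:   list of (support_batch, query_batch), one pair per dataset
    loss_fn: self-supervised objective, called as loss_fn(outputs, batch)
    """
    params = dict(model.named_parameters())
    outer_loss = 0.0
    for support, query in tasks:
        # Inner loop: support-set gradients become temporary fast weights;
        # create_graph=True keeps the higher-order dependency on theta.
        s_loss = loss_fn(functional_call(model, params, (support,)), support)
        grads = torch.autograd.grad(s_loss, list(params.values()),
                                    create_graph=True)
        fast = {name: p - inner_lr * g
                for (name, p), g in zip(params.items(), grads)}
        # Outer loop: evaluate the fast weights on the held-out query split.
        outer_loss = outer_loss + loss_fn(
            functional_call(model, fast, (query,)), query)
    # theta^{t+1} = theta^t - alpha * grad of the summed query losses
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
    return float(outer_loss)
```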
### Brain Network Oriented Two-Level Contrastive Learning
Given the high cost of acquiring labeled training data for brain network analysis, our pre-training pipeline
Figure 1: Overview of the proposed framework PTGB. The initial features of the source datasets are projected to a fixed dimension through atlas transformation followed by variance-based feature alignment, which facilitates self-supervised GNN pre-training on multiple datasets via the novel two-level contrastive learning objective. The learned model can serve as the parameter initialization and be further fine-tuned on target tasks.
of PTGB adopts the effective label-free learning strategy of contrastive learning (CL). CL aims to maximize the mutual information (MI) between an anchor point of investigation \(X\) from a data distribution \(\mathcal{H}\) and its positive samples \(X^{+}\), while minimizing the MI with its negative samples \(X^{-}\). The contrastive objective function is formulated as follows:
\[\mathcal{J}_{\text{con}}=\arg\min\left[\left(-I(X;X^{+})+I(X;X^{-})\right) \right]. \tag{1}\]
In the context of graph CL, given an anchor node representation \(z_{\alpha}\), a set of positive samples \(\mathbf{S}^{+}\), and a set of negative samples \(\mathbf{S}^{-}\), the training objective is based on the Jensen-Shannon divergence (Hjelm et al., 2019),
\[\mathcal{J}_{\text{JSD}}(z_{\alpha})=\arg\min\left[\left(-I(z_{\alpha}; \mathbf{S}^{+})+I(z_{\alpha};\mathbf{S}^{-})\right)\right], \tag{2}\]
where
\[I(z_{\alpha};\mathbf{S}^{+}) =\frac{1}{|\mathbf{S}^{+}|}\sum_{z_{s^{+}}\in\mathbf{S}^{+}}\text {sp}\left(\frac{z_{\alpha}^{\top}z_{s^{+}}}{\|z_{\alpha}\|\|z_{s^{+}}\|}\right), \tag{3}\] \[I(z_{\alpha};\mathbf{S}^{-}) =\frac{1}{|\mathbf{S}^{-}|}\sum_{z_{s^{-}}\in\mathbf{S}^{-}}\text {sp}\left(\frac{z_{\alpha}^{\top}z_{s^{-}}}{\|z_{\alpha}\|\|z_{s^{-}}\|}\right), \tag{4}\]
and \(\text{sp}(\cdot)=\log(1+e^{\cdot})\) is softplus nonlinearity.
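In code, Eqs. (2)-(4) reduce to averaging the softplus of cosine similarities over the two sample sets; the snippet below is a direct, minimal transcription for a single anchor (batching over anchors is omitted, and the function name is ours).

```python
import torch
import torch.nn.functional as F

def jsd_contrastive_loss(z_anchor, pos, neg):
    """Eqs. (2)-(4): softplus of cosine similarity, averaged per sample set.

    z_anchor: (d,)   anchor node representation
    pos:      (P, d) representations of the positive set S+
    neg:      (N, d) representations of the negative set S-
    """
    a = F.normalize(z_anchor, dim=-1)
    i_pos = F.softplus(F.normalize(pos, dim=-1) @ a).mean()   # I(z; S+)
    i_neg = F.softplus(F.normalize(neg, dim=-1) @ a).mean()   # I(z; S-)
    return -i_pos + i_neg   # minimized during pre-training
```

Here `pos` and `neg` simply hold the representations of whatever positive and negative sets are configured for the anchor.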
The ultimate goal of our framework is to localize effective GNN CL learning (Zhu et al., 2021) for brain networks. Given a dataset \(\mathcal{D}\) and an anchor node \(i\) from graph \(\mathcal{G}_{p}\in\mathcal{D}\) with the learned representation \(z_{i,p}\), we propose to categorize the possible sample selections into three fundamental types (a visualization is shown in Figure 2):
* \(\mathbf{\underline{S_{1}}}\): \(\{z_{j,p}\,:\,j\in\mathcal{N}_{k}(i,p)\}\) refers to the node representation set within the \(k\)-hop neighborhood of the anchor in graph \(\mathcal{G}_{p}\).
* \(\mathbf{\underline{S_{2}}}\): \(\{z_{j,p}\,:\,j\notin\mathcal{N}_{k}(i,p)\}\) refers to the remaining node representation set in graph \(\mathcal{G}_{p}\), i.e., nodes outside the \(k\)-hop neighborhood of the anchor.
* \(\mathbf{\underline{S_{3}}}\): \(\{z_{j,q}\,:\,\mathcal{G}_{q}\in\mathcal{D},\,j\in\mathcal{G}_{q},\,q\neq p\}\) refers to the node representation set of nodes in all the other graphs of dataset \(\mathcal{D}\).
Notice that our framework leverages the \(k\)-hop substructure around the anchor node to further differentiate \(\mathbf{S_{1}}\) and \(\mathbf{S_{2}}\) for contrastive optimization. This design is driven by two considerations: **(1) Regarding GNN learning.** Given that node representations are learned from the information aggregation of its \(k\)-hop neighborhood, maximizing the MI of an anchor to its \(k\)-hop neighbors naturally enhances lossless message passing of GNN convolutions. **(2) Regarding the uniqueness of brain networks.** Brain networks can be anatomically segmented into smaller neural system modules (Cui et al., 2022), thus capturing subgraph-level knowledge can provide valuable signals for brain-related analysis.
Building on these three fundamental types of samples, we take advantage of the property of brain networks that ROI identities and orders are fixed across samples to introduce an additional sample type. This encourages the GNN to extract shared substructure knowledge by evaluating the MI of an anchor against its presence in other graphs. Given an anchor representation \(z_{i,p}\) of node \(i\) from graph \(\mathcal{G}_{p}\in\mathcal{D}\), the novel inter-graph sample type is defined as:
Figure 2: Visual demonstration of the sample types where \(X_{i,p}\) is the anchor and \(\mathbf{S_{1}}/\mathbf{S_{4}}\) are sampled as 1-hop neighbors.
* \(\mathbf{S_{4}}\):\(\{z_{j,q}\,:\,j\in\mathcal{N}_{k}(i,q)\cap\mathcal{N}_{k}(i,p),\,\mathcal{G}_{q} \in\mathcal{D},\,q\neq p\}\), refers to the node representation set within the \(k\)-hop neighborhood of node \(i\) in all other graphs in \(\mathcal{D}\). Conceptually, \(\mathbf{S_{4}}\) is a special subset of \(\mathbf{S_{3}}\).
It is important to note that for an anchor node \(i\), the \(k\)-hop neighborhood structures might not be identical across different graphs. As a result, we only consider shared neighborhoods when evaluating the mutual information across multiple graphs. To encourage the learning of unique neighborhood knowledge within a single brain network instance and of shared substructure knowledge across the entire dataset, we configure \(\mathbf{S_{1}}\) and \(\mathbf{S_{4}}\) as positive samples and \(\mathbf{S_{2}}\) and the set \(\mathbf{S_{3}}-\mathbf{S_{4}}\) as negative samples, as illustrated in Figure 3. Strictly speaking, \(\mathbf{S_{1}}\) does not include the anchor itself, but the anchor is always a positive sample to itself by default. Furthermore, our sampling categorization can also help understand the objective formulations of various state-of-the-art graph CL frameworks (Velickovic et al., 2019; Qiu et al., 2020; Xia et al., 2022; Sun et al., 2019; Zhu et al., 2021). We summarize our findings in Table 1, where "+" denotes positive sampling, "-" denotes negative sampling, and "/" means that the sample type is not considered. It can be observed that DGI and InfoGraph (InfoG) use a graph representation pooled from node representations as a special sample, which is essentially equivalent to jointly considering \(\mathbf{S_{1}}\) and \(\mathbf{S_{2}}\) without explicit differentiation. On the other hand, GCC and EGI, which are more closely related to our framework, leverage neighborhood mutual information maximization on a single graph but fail to extend this to a multi-graph setting like ours.
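Because all graphs in a dataset share one ROI ordering, the four sample types can be materialized directly from adjacency matrices. The sketch below is illustrative only: binarizing the weighted adjacency for the hop computation and all helper names are our assumptions.

```python
import torch

def k_hop_mask(adj, k):
    """(M, M) bool mask: entry (i, j) is True iff j lies within k hops of i."""
    M = adj.shape[0]
    reach = torch.eye(M, dtype=torch.bool)
    for _ in range(k):
        reach = reach | ((reach.float() @ adj.float()) > 0)
    return reach & ~torch.eye(M, dtype=torch.bool)   # exclude the node itself

def contrastive_index_sets(i, p, adjs, k=1):
    """Positive/negative node indices for anchor i of graph p.

    adjs: list of (M, M) binarized adjacency matrices sharing one ROI order.
    Positives collect S1 and S4; negatives collect S2 and S3 - S4.
    """
    M = adjs[p].shape[0]
    hood_p = k_hop_mask(adjs[p], k)[i]
    not_i = torch.arange(M) != i
    positives = [(p, hood_p.nonzero().flatten())]                # S1
    negatives = [(p, (~hood_p & not_i).nonzero().flatten())]     # S2
    for q, adj_q in enumerate(adjs):
        if q == p:
            continue
        shared = k_hop_mask(adj_q, k)[i] & hood_p                # S4
        positives.append((q, shared.nonzero().flatten()))
        negatives.append((q, (~shared).nonzero().flatten()))     # S3 - S4
    return positives, negatives
```

Gathering the representations indexed by `positives` and `negatives` and feeding them into the JSD objective above yields the two-level loss for one anchor.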
### Data-driven Brain Atlas Mapping
Motivation. When fine-tuning a pre-trained model on a new data domain, the misalignment between source and target signals can negatively impact its adaptation. This issue is particularly relevant for brain networks, where it is hard, if not impossible, to require every brain network data provider to stick to the same brain atlas template, and each template can use a unique system of ROIs. For instance, the HIV dataset we obtained is parcellated with the AAL90 template (Tzourio-Mazoyer et al., 2002), leading to 90 defined ROIs, while the PPMI dataset uses the Desikan-Killiany84 template (Desikan et al., 2006), resulting in 84 defined ROIs. As a result, brain networks in the two datasets have different ROI semantics and graph structures. Although GNNs can handle graphs without fixed numbers and orders of nodes, constructing the most informative ROI (_i.e.,_ node) features as the connection profiles (_i.e.,_ adjacency) (Cui et al., 2022, 2022) results in different feature dimensions and physical meanings. While manual conversion can be performed to translate between templates, it is a costly process that requires domain expertise even for coarse cross-atlas mappings.
To address this issue, we aim to provide a data-driven atlas mapping solution that is easily accessible and eliminates the strong dependency on network construction. This solution, which transforms the original node features into lower-dimensional representations that preserve the original connectivity information and aligns features across datasets, is learned independently on each dataset prior to GNN pre-training.
#### 3.3.1 Autoencoder with Brain Network Oriented Regularizers
PTGB adopts a one-layer linear autoencoder (AE) as the base structure. The AE consists of a linear projection encoder \(\mathbf{W}\) and a transposed decoder \(\mathbf{W}^{\top}\)
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & \(\mathbf{S_{1}}\) & \(\mathbf{S_{2}}\) & \(\mathbf{S_{3}}\) & \(\mathbf{S_{4}}\) \\ \hline DGI & + & + & / & / \\ \hline InfoG & + & + & – & / \\ \hline GCC & + & – & – & / \\ \hline EGI & + & – & – & / \\ \hline Ours & + & – & – & + \\ \hline \end{tabular}
\end{table}
Table 1: The sampling configuration of some existing graph contrastive learning methods. “+” denotes positive sampling, “-” for negative, and “/” for no consideration.
Figure 3: The sampling configuration of the proposed PTGB framework. \(\mathbf{S_{1}}\) and \(\mathbf{S_{4}}\) are positive samples, \(\mathbf{S_{2}}\) and the set \(\mathbf{S_{3}}-\mathbf{S_{4}}\) are negative samples.
with the goal of learning a low-dimensional projection from which the original representation can easily be reconstructed. The loss function is defined as minimizing the reconstruction error \(\mathcal{L}_{\text{rec}}=(1/M)\|\mathbf{X}-\mathbf{X}\mathbf{W}\mathbf{W}^{\top}\|_{2}^{2}\), where \(\mathbf{X}\in\mathbb{R}^{M\times M}\) is the input and \(\mathbf{W}\in\mathbb{R}^{M\times D}\) is the learnable projection (Hinton and Zemel, 1993). To further enhance the feature compression and to guide the overall AE optimization, we incorporate several regularizers that take into account the unique characteristics of brain networks:
**Locality-Preserving Regularizer (LR).** We aim to ensure that the compressed features preserve the spatial relationships of the original brain surface. To achieve this, we incorporate a locality-preserving regularizer (He et al., 2005) into the AE objective. The regularizer is formulated as \(\mathcal{L}_{\text{loc}}=(1/M)\|\mathbf{Y}-\mathbf{T}\mathbf{Y}\|^{2}\), where \(\mathbf{Y}\in\mathbb{R}^{M\times D}\) represents the projected features from the AE and \(\mathbf{T}\in\mathbb{R}^{M\times M}\) is a transition matrix constructed from the \(k\)-NN graph of the 3D coordinates of the ROIs.
**Modularity-Aware Regularizer (CR).** Brain networks can be segmented into various neural system modules that characterize functional subsets of ROIs; in graph terminology, these are community structures. The projected features should also capture information about neural system membership. However, obtaining ground-truth segmentations is a difficult task that requires expert knowledge. To overcome this challenge, we resort to community detection methods on graphs, specifically based on modularity maximization. The regularizer (Salha-Galvan et al., 2022) is defined as minimizing
\[\mathcal{L}_{\text{com}}=-\frac{1}{2D}\sum_{i,j=1}^{M}\left[\mathbf{A}_{ij}- \frac{k_{i}k_{j}}{2D}\right]\exp(-\|y_{i}-y_{j}\|_{2}^{2}), \tag{5}\]
where \(\mathbf{A}\in\mathbb{R}^{M\times M}\) is the graph adjacency matrix, \(k_{i}\) denotes degree of node \(i\), and \(y_{i}\) is the AE projected features. Essentially, this optimization minimizes the \(L_{2}\) distance between representations of nodes within the same communities, as measured by the modularity score, and maximizes the distance between representations of nodes in different communities.
**Sparsity-Oriented Regularizer (SC).** Sparse networks have proven to be effective in learning robust representations from noisy data (Jeong et al., 2017; Shi et al., 2019; Makhzani and Frey, 2014). In brain connectome analysis, sparsity has also been shown to improve the interpretation of task-specific ROI connections in generation and classification tasks (Kan et al., 2022). To this end, we implement the popular KL-divergence smoothing to enforce sparsity in the parameters of the linear projection encoder \(\mathbf{W}\). This is formulated as:
\[\mathcal{L}_{\text{KL}}=\sum_{i=1}^{M}\sum_{j=1}^{D}\left[\rho\log\left(\frac {\rho}{\hat{\rho}_{ij}}\right)+(1-\rho)\log\left(\frac{1-\rho}{1-\hat{\rho}_ {ij}}\right)\right], \tag{6}\]
where \(\rho\) is a small positive float set as the target sparsity value, and \(\hat{\rho}_{ij}\) represents the element-wise activation of the encoder projection matrix \(\mathbf{W}\in\mathbb{R}^{M\times D}\).
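Taken together, the atlas-mapping objective is the reconstruction error plus the three regularizers. The sketch below follows Eqs. (5)-(6) under a few stated assumptions: the normalizer \(D\) in Eq. (5) is read as the total edge weight (the standard modularity normalizer), \(\hat{\rho}\) is taken to be a sigmoid activation of the encoder weights, and the weighting coefficients `lam` are hypothetical.

```python
import torch

def atlas_autoencoder_loss(X, W, T, A, rho=0.05, lam=(1.0, 1.0, 1.0)):
    """Reconstruction + locality (LR) + modularity (CR) + KL sparsity (SC).

    X: (M, M) connection-profile node features    W: (M, D) encoder projection
    T: (M, M) k-NN transition matrix (ROI coords) A: (M, M) weighted adjacency
    """
    M = X.shape[0]
    Y = X @ W                                      # projected features
    l_rec = ((X - Y @ W.t()) ** 2).sum() / M       # L_rec

    l_loc = ((Y - T @ Y) ** 2).sum() / M           # L_loc

    two_m = A.sum()                                # total (doubled) edge weight
    deg = A.sum(dim=1)                             # degrees k_i
    mod = A - torch.outer(deg, deg) / two_m        # modularity matrix
    l_com = -(mod * torch.exp(-torch.cdist(Y, Y) ** 2)).sum() / two_m  # Eq. (5)

    rho_hat = torch.sigmoid(W)                     # assumed activation of W
    l_kl = (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()  # Eq. (6)

    return l_rec + lam[0] * l_loc + lam[1] * l_com + lam[2] * l_kl
```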
#### 3.3.2 Variance-based Dimension Sorting
In addition to transforming dataset-specific features, cross-dataset alignment of feature signals is also crucial for improving model adaptation. The one-layer AE transforms the original feature vectors into weighted combinations of multiple dimensions, creating new feature dimensions which we name as _virtual ROIs_. In the context of brain networks, this process helps to group ROIs and their signals. This idea is inspired by the well-studied functional brain modules (Philipson, 2002; Anderson et al., 2004; Hilger et al., 2020; Brodmann, 1999; Zhou et al., 2020), which provide a higher-level and generic organization of the brain surface, as opposed to fine-grained ROI systems. Since the variations in ROI parcellations are due to differences in clinical conventions, it is reasonable to assume that there exists a shared virtual ROI system underlying different parcellation systems, similar to the discretization of functional brain modules. The community learning and neighborhood preserving regularizers, introduced in Section 3.3, allow us to capture these shared virtual ROIs in a data-driven manner. Our ultimate goal is to align the discovered virtual ROIs across datasets, so that each virtual ROI characterizes the same functional module in the human brain, regardless of its origin. This cross-dataset alignment of virtual ROIs ensures that the model can effectively adapt to new datasets and provide meaningful insights into the different downstream analyses.
The objective of the one-layer linear AE is similar to that of PCA, as discussed in more detail in Appendix A.1, with the added benefit of incorporating additional regularizers. PCA orders dimensions by decreasing levels of sample variance (Hotelling, 1933). PTGB leverages this approach by utilizing the learned parameters of the AE projection to estimate the variance of each virtual ROI (_i.e._, projected feature
dimension). The sample variance of each virtual ROI indicates its representativeness of the original data variations. Given the shared patterns across different parcellation systems, we expect that similar virtual ROIs in datasets with different atlas templates will have similar variance scores, especially in terms of their order. By sorting the same number of virtual ROIs based on their sample variance in each dataset, we aim to align virtual ROIs across datasets, so that each virtual ROI represents the same functional unit in the human brain. The procedure is explained in detail in Algorithm 2 in Appendix A.2.
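While Algorithm 2 is deferred to the appendix, the core of the variance-based alignment is short enough to sketch here: project each dataset's networks, measure the per-dimension sample variance, and permute the virtual ROIs into decreasing-variance order (the function and variable names are ours).

```python
import torch

def sort_virtual_rois(X, W):
    """Permute virtual ROIs (projected dimensions) by decreasing variance.

    X: (N, M, M) stacked brain networks of one dataset
    W: (M, D)    trained encoder projection of that dataset
    """
    Y = X @ W                                     # (N, M, D) projected features
    var = Y.reshape(-1, W.shape[1]).var(dim=0)    # sample variance per dim
    order = torch.argsort(var, descending=True)
    return W[:, order], order
```

Applied independently to every dataset, this leaves dimension \(d\) denoting the \(d\)-th most variable virtual ROI everywhere, which is the cross-dataset alignment that downstream fine-tuning relies on.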
## 4 Experiments
We evaluate the effectiveness of PTGB through extensive experiments on real brain network datasets, with a focus on the following research questions:
* **RQ1**: How does PTGB compare with other unsupervised GNN pre-training frameworks adapted to the scenario of brain networks?
* **RQ2**: What is the contribution of each major component in PTGB to the overall performance?
* **RQ3**: How does the choice of sampling method affect model convergence and performance?
* **RQ4**: How effective is the variance-based sorting in aligning virtual ROIs among different parcellation systems?
Datasets, Configurations, and Metrics. Our experiments are conducted on three real-world brain network datasets: PPMI, BP, and HIV. The PPMI dataset is parcellated using the Desikan-Killiany84 atlas template and includes brain networks from 718 subjects, 569 of whom are Parkinson's Disease (PD) patients and 149 of whom are Healthy Controls (HC). The networks are constructed using three tractography algorithms: Probabilistic Index of Connectivity (PICo), Hough voting (Hough), and FSL. The BP dataset is parcellated using the Brodmann82 template and includes resting-state fMRI and DTI modalities from 97 subjects, 52 of whom have Bipolar I disorder and 45 of whom are HCs. The HIV dataset is parcellated using the AAL90 template and includes fMRI and DTI modalities from 70 subjects, with 35 early HIV patients and 35 HCs. We pre-train the model on the PPMI dataset and evaluate the downstream performance on BP and HIV. Further details about the datasets can be found in Appendix B.
PTGB employs GCN (Kipf and Welling, 2017) as the backbone GNN encoder. We also benchmark PTGB with GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019); the results are provided in Appendix D.1. The hyperparameter settings are described in detail in Appendix C, and the hyperparameter tuning follows the standard designs in related studies such as (Yang et al., 2021; Wein et al., 2021; Hu et al., 2021). The downstream evaluation is binary graph classification for disease prediction. To assess the performance, we use two metrics widely adopted in the medical field (Li et al., 2021; Cui et al., 2022): accuracy (ACC) and the area under the receiver operating characteristic curve (AUC).
### Overall Performance Comparison (RQ1)
We present a comprehensive comparison of the target performance between the proposed PTGB and popular unsupervised learning strategies in Table 2. To compare the methods fairly, we apply the atlas mapping pre-processing and the multi-dataset learning backbone discussed in Section 3.1 to all methods. The purpose of this comparison is to highlight the impact of the proposed two-level contrastive pre-training; the effect of atlas mapping is analyzed further in subsequent subsections. In addition, for clearer presentation, we group the selected baselines according to their optimization strategies:
* No pre-training (NPT): the backbone with randomly initialized parameters for target evaluation.
* Non-CL-based (NCL): methods with cost functions regularized by co-occurrence agreement or link reconstruction, including Node2Vec (Grover and Leskovec, 2016), DeepWalk (Perozzi et al., 2014), and VGAE (Kipf and Welling, 2016).
* Single-scale CL (SCL): methods utilizing either node- or graph-level representations in the CL optimization, including GBT (Bielak et al., 2022), ProGCL (Xia et al., 2022), and GraphCL (You et al., 2020).
* Multi-scale CL (MCL): methods whose CL optimization utilizes both nodes- and graph-level representations, including DGI (Velickovic et al., 2019) and InfoG (Sun et al., 2019).
* Ego-graph sampling (EGS): methods whose contrastive samplings consider \(k\)-hop ego-networks as discriminative instances, which are the most similar to the proposed PTGB, including GCC (Qiu et al., 2020) and EGI (Zhu et al., 2021).
* Our proposed two-level contrastive optimization (Ours): methods include single task pre-training (STP) in which we select the PICo modality of
the PPMI study to be the only source task; multi-task pre-trainig (MTP) which does not utilize the MAML technique; and the full implementation of the PTGB framework. The experiments reveal the following insights:
* The proposed PTGB consistently outperforms all the baselines, achieving a relative improvement of 7.34%-13.30% over the best-performing baselines and 31.80%-38.26% over the NPT setting. The results of PTGB have been statistically compared against baselines using paired \(t\)-tests. With a significance level set to 0.05, the largest two-tailed \(p\) value is reported at 0.042, indicating that PTGB demonstrates a statistically significant performance increase over other selected methods.
* Compared with the transductive methods of Node2Vec and DeepWalk, the GNN pre-trained by VGAE learns structure-preserving representations and achieves the best results in the NCL-type methods. This indicates the potential benefit of the locality-preserving regularizer design in PTGB.
* Maximizing mutual information between augmented instances may hinder GNNs from learning a shared understanding of the entire dataset. For baselines belonging to the categories of SCL, MCL, and EGS, pre-training with non-augmented CL (InfoG, EGI) generally results in a 4.36% relative improvement across both metrics and a 7.63% relative decrease in performance variance compared to their augmentation-based counterparts (GBT, GraphCL, ProGCL, DGI, GCC). This explains why PTGB does not employ data augmentation.
* Multi-scale MI promotes the capture of effective local (_i.e.,_ node-level) representations that can summarize the global (_i.e.,_ graph-level) information of the entire network. The MCL-type methods typically outperform the SCL-type ones by a relative gain of 2.68% in ACC and 3.27% in AUC.
* The group of baselines considering \(k\)-hop neighborhoods (EGS) presents the strongest performance, indicating the importance of local neighborhoods in brain network analysis. The proposed PTGB, which captures this aspect through both node- and graph-level CL, is the only one that comprehensively captures the local neighborhoods of nodes.
* Learning from multiple tasks (MTP) brings significant improvement over STP, reporting a relative increase of 8.47% in accuracy and 6.90% in AUC. Furthermore, the full PTGB framework with MAML-styled training achieves a relative improvement of 11.29% in accuracy, 14.75% in AUC, and a reduced variance over MTP, demonstrating its advantages in enhancing model generalizability.
### Ablation Studies (RQ2)
We examine two key components of PTGB: (1) the two-level contrastive sampling and (2) the atlas mapping regularizers. The best contrastive sampling configuration is fixed when examining the atlas regularizers, and all regularizers are equipped when examining
\begin{table}
\begin{tabular}{l l|c c c c c c c} \hline \hline \multirow{2}{*}{Type} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{BP-fMRI} & \multicolumn{2}{c}{BP-DTI} & \multicolumn{2}{c}{HIV-fMRI} & \multicolumn{2}{c}{HIV-DTI} \\ \cline{3-10} & & ACC & AUC & ACC & AUC & ACC & AUC & ACC & AUC \\ \hline NPT & GCN & 50.07\(\pm\)0.70 & 50.11\(\pm\)5.80 & 49.51\(\pm\)0.68 & 51.83\(\pm\)0.80 & 56.27\(\pm\)1.84 & 57.16\(\pm\)5.14 & 51.30\(\pm\)0.42 & 53.82\(\pm\)1.94 \\ \hline \multirow{3}{*}{NCL} & Node2Vec & 48.51\(\pm\)0.30 & 49.68\(\pm\)7.23 & 50.83\(\pm\)1.44 & 46.70\(\pm\)10.30 & 52.61\(\pm\)10.38 & 50.75\(\pm\)10.94 & 49.65\(\pm\)0.30 & 51.22\(\pm\)10.79 \\ & DeepWalk & 50.28\(\pm\)0.33 & 51.59\(\pm\)0.60 & 51.72\(\pm\)5.74 & 38.46\(\pm\)9.37 & 54.81\(\pm\)11.20 & 55.55\(\pm\)11.93 & 52.67\(\pm\)11.20 & 50.88\(\pm\)10.39 \\ & VGAE & 56.71\(\pm\)1.48 & 55.24\(\pm\)11.40 & 54.63\(\pm\)11.20 & 54.11\(\pm\)11.82 & 62.76\(\pm\)4.77 & 61.25\(\pm\)11.54 & 56.90\(\pm\)0.42 & 55.35\(\pm\)0.44 \\ \hline \multirow{3}{*}{SCL} & GBT & 57.21\(\pm\)0.68 & 57.32\(\pm\)10.00 & 56.29\(\pm\)0.53 & 55.27\(\pm\)10.54 & 65.73\(\pm\)10.00 & 66.08\(\pm\)10.63 & 59.80\(\pm\)7.76 & 57.37\(\pm\)0.40 \\ & GraphCL & 59.79\(\pm\)30 & 59.10\(\pm\)0.78 & 57.57\(\pm\)10.63 & 57.35\(\pm\)0.67 & 67.08\(\pm\)7.76 & 69.17\(\pm\)6.68 & 60.43\(\pm\)3.90 & 60.03\(\pm\)10.48 \\ & ProGCL & 62.36\(\pm\)0.50 & 62.61\(\pm\)0.34 & 61.26\(\pm\)37 & 62.67\(\pm\)8.46 & 71.52\(\pm\)10.39 & 72.16\(\pm\)8.55 & 62.48\(\pm\)10.38 & 61.94\(\pm\)10.37 \\ \hline \multirow{3}{*}{MCL} & DGI & 62.44\(\pm\)10.13 & 60.75\(\pm\)10.97 & 58.15\(\pm\)0.60 & 58.95\(\pm\)0.60 & 70.22\(\pm\)11.40 & 70.12\(\pm\)12.16 & 60.83\(\pm\)10.48 & 62.06\(\pm\)10.16 \\ & InfoG & 62.87\(\pm\)0.52 & 62.37\(\pm\)0.67 & 60.88\(\pm\)0.67 & 60.44\(\pm\)0.61 & 72.46\(\pm\)8.71 & 72.94\(\pm\)5.66 & 61.75\(\pm\)76 & 61.37\(\pm\)0.45 \\ \hline \multirow{3}{*}{EGS} & GCC & 63.45\(\pm\)0.62 & 62.39\(\pm\)0.60 & 60.44\(\pm\)0.54 & 60.29\(\pm\)0.10 & 70.97\(\pm\)0.13 & 72.48\(\pm\)10.13 & 61.27\(\pm\)10.66 & 61.38\(\pm\)10.79 \\ & EGI & 63.38\(\pm\)0.63 & 63.58\(\pm\)0.62 & 61.82\(\pm\)0.63 & 61.57\(\pm\)8.27 & 37.46\(\pm\)0.42 & 32.85\(\pm\)0.48 & 60.98\(\pm\)0.42 & 62.41\(\pm\)10.50 \\ \hline \multirow{3}{*}{Ours} & STP & 53.92\(\pm\)12.2\(\pm\)27 & 54.61\(\pm\)11.28 & 55.51\(\pm\)11.28 & 56.73\(\pm\)10.20 & 61.18\(\pm\)14.57 & 62.88\(\pm\)11.55 & 55.29\(\pm\)12.38 & 57.31\(\pm\)11.27 \\ & MTP & 60.37\(\pm\)11.47 & 61.44\(\pm\)11.28 & 59.41\(\pm\)11.26 & 59.92\(\pm\)13.37 & 67.65\(\pm\)12.30 & 68.38\(\pm\)12.36 & 60.54\(\pm\)13.37 & 59.46\(\pm\)12.39 \\ & PTGB & **68.84\(\pm\)0.84** & **68.45\(\pm\)0.86** & **66.57\(\pm\)0.87** & **68.31\(\pm\)0.88** & **77.80\(\pm\)0.98** & **77.22\(\pm\)0.74** & **67.51\(\pm\)0.87** & **67.74\(\pm\)0.88** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Disease prediction performance comparison. All results are averaged from 5-fold cross-validation along with standard deviations. The best result is highlighted in bold and runner-up is underlined. * denotes a significant improvement according to paired \(t\)-test with \(\alpha=0.05\) compared with baselines.
The results, shown in Figure 4 (with an additional DTI version in Appendix D.2), are analyzed based on the four possible variants of contrastive sampling listed in Table 3. Our analyses yield the following observations: **(1)** leveraging \(k\)-hop neighborhood (_i.e.,_ positive **S2**) MI maximization brings a visible performance gain, confirming its benefit in brain structure learning; **(2)** The extension to multi-graph CL (_i.e.,_ consideration of **S3**) facilitates the extraction of unique ROI knowledge, leading to improved results in Var. 3/4; **(3)** Var. 4 outperforms Var. 3 as it effectively summarizes global (_i.e.,_ graph-level) information in local node representations; **(4)** The full implementation of PTGB brings a relative gain of 4.27% in both metrics on top of Var. 4, highlighting the significance of considering shared substructure knowledge across multiple graphs (_i.e.,_ through the inclusion of **S4**).
The right-side sub-figures examine the impact of the atlas mapping regularizers by comparing the results of the full framework to those without the sparsity regularizer (w/o SR), the locality regularizer (w/o LR), and the community regularizer (w/o CR). Two key observations are made: **(1)** The removal of SR leads to the greatest performance drop, emphasizing its crucial role in learning robust projections that can effectively handle noise and prevent over-fitting; **(2)** The inferior results when LR and CR are absent emphasize the importance of spatial sensitivity and blockwise feature information in brain network analysis. This supports our intuition to consider the relative positioning of ROIs in 3D coordinate space, as well as knowledge of community membership based on modularity measures.
Figure 4: Ablation comparisons on contrastive sampling choices (left two) and atlas mapping regularizers (right two). The \(y\)-axis refers to the numeric values of evaluated metrics (in %). The setup of Var. 1 - 4 is described in Table 3. “SC”, “LR”, and “CR” are abbreviations for “sparsity constraints”, “locality regularizer”, and “community (modularity-aware) regularizer” respectively.
Figure 5: In-depth comparison among the four variants and the full model. The \(x\)-axis is epochs. Fig. (a) evaluates the trajectory of pre-training loss, Fig. (b) evaluates their respective testing accuracy on the fMRI view of the HIV dataset, and Fig. (c) reports the pre-training runtime in seconds.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{1}{|c|}{**S1**} & \multicolumn{1}{|c|}{**S2**} & \multicolumn{1}{|c|}{**S3**} & \multicolumn{1}{|c|}{**S4**} \\ \hline Var. 1 & – & – & / & / \\ \hline Var. 2 & + & – & / & / \\ \hline Var. 3 & + & – & – & / \\ \hline Var. 4 & + & + & – & / \\ \hline \end{tabular}
\end{table}
Table 3: The four variants of sampling strategies.
### Analysis of Two-level Contrastive Sampling (RQ3)
Figure 5 offers insight into the pre-training convergence, target adaptation progression, and pre-training runtime consumption of the four sampling variants and the full framework. Key observations include: **(1)** As seen in Figure 5(a), all variants demonstrate efficient pre-training convergence due to the multi-dataset joint optimization inspired by MAML. The full model demonstrates the best convergence, highlighting the advantage of learning shared neighborhood information in brain network data through two-level node contrastive sampling. **(2)** Figure 5(b) shows the superiority of our design in terms of downstream adaptation performance compared to other variants. **(3)** Figure 5(c) reveals that more sophisticated sampling considerations result in greater computational complexity for mutual information evaluation, leading to a longer runtime for each pre-training epoch. However, the total time consumption remains on the same scale across variants.
### Analysis of ROI Alignment (RQ4)
To further validate the variance-based virtual ROI sorting, we select the top 2 virtual ROIs with the highest sample variances for each atlas template (_i.e.,_ dataset) and backtrack to locate their corresponding projected ROIs. The results are illustrated in Figure 6, which shows a 3D brain surface visualization highlighting the original ROIs. From this, we draw two main conclusions: **(1)** There exist multiple regional overlaps between pairs of atlas templates, indicating that our proposed solution works as intended and confirming the feasibility of converting between atlas templates. **(2)** It is relatively harder to find regions that overlap across all three atlas templates, which reveals a limitation of the proposed unsupervised ROI alignment scheme and suggests that the current variance-based heuristic could be refined, which may inspire further study and research opportunities.
## 5 Conclusion
Brain network analysis for task-specific disease prediction has been a challenging task for conventional GNN frameworks due to the limited availability of labeled training data and the absence of a unifying brain atlas definition, which hinders efficient knowledge transfer across different datasets. To address these challenges, we propose PTGB, a novel unsupervised multi-dataset GNN pre-training framework that leverages two-level node contrastive sampling to overcome data scarcity. Additionally, PTGB incorporates atlas mapping through brain-network-oriented regularizers and variance-based sorting to address the issue of incompatible ROI parcellation systems in cross-dataset model adaptation in a data-driven way. Extensive experiments on real-world brain connectome datasets demonstrate the superiority and robustness of PTGB in disease prediction and its clear advantage over various state-of-the-art baselines. As more brain network datasets become available, it will be intriguing to further validate its generalizability.
Figure 6: The virtual ROI mapping across the three investigated datasets. We highlight pairs of overlapping regions with colored boxes. In particular, we use gold boxes for the PPMI and BP mapping; blue boxes for the BP and HIV mapping; and purple boxes for the PPMI and HIV mapping. |
2302.13397 | Efficient physics-informed neural networks using hash encoding | Physics-informed neural networks (PINNs) have attracted a lot of attention in
scientific computing as their functional representation of partial differential
equation (PDE) solutions offers flexibility and accuracy features. However,
their training cost has limited their practical use as a real alternative to
classic numerical methods. Thus, we propose to incorporate multi-resolution
hash encoding into PINNs to improve the training efficiency, as such encoding
offers a locally-aware (at multi resolution) coordinate inputs to the neural
network. Borrowed from the neural representation field community (NeRF), we
investigate the robustness of calculating the derivatives of such hash encoded
neural networks with respect to the input coordinates, which is often needed by
the PINN loss terms. We propose to replace the automatic differentiation with
finite-difference calculations of the derivatives to address the discontinuous
nature of such derivatives. We also share the appropriate ranges for the hash
encoding hyperparameters to obtain robust derivatives. We test the proposed
method on three problems, including Burgers equation, Helmholtz equation, and
Navier-Stokes equation. The proposed method admits about a 10-fold improvement
in efficiency over the vanilla PINN implementation. | Xinquan Huang, Tariq Alkhalifah | 2023-02-26T20:00:23Z | http://arxiv.org/abs/2302.13397v1 | # Efficient physics-informed neural networks using hash encoding
###### Abstract
Physics-informed neural networks (PINNs) have attracted a lot of attention in scientific computing as their functional representation of partial differential equation (PDE) solutions offers flexibility and accuracy features. However, their training cost has limited their practical use as a real alternative to classic numerical methods. Thus, we propose to incorporate multi-resolution hash encoding into PINNs to improve the training efficiency, as such encoding offers a locally-aware (at multi resolution) coordinate inputs to the neural network. Borrowed from the neural representation field community (NeRF), we investigate the robustness of calculating the derivatives of such hash encoded neural networks with respect to the input coordinates, which is often needed by the PINN loss terms. We propose to replace the automatic differentiation with finite-difference calculations of the derivatives to address the discontinuous nature of such derivatives. We also share the appropriate ranges for the hash encoding hyperparameters to obtain robust derivatives. We test the proposed method on three problems, including Burgers equation, Helmholtz equation, and Navier-Stokes equation. The proposed method admits about a 10-fold improvement in efficiency over the vanilla PINN implementation.
## 1 Introduction
Partial differential equations (PDEs) are essential in science and engineering as they represent physical laws that describe basic natural phenomena, like heat transfer, fluid flow, and wave propagation, with applications in optimal control, medical and Earth imaging, and inversion. However, conventional methods for solving PDEs, e.g., finite-difference, finite-element, or spectral-element methods, often require complex discretization and intensive computation, and are prone to numerical errors. These limitations are unfavorable to inverse design and to implementation on regions of complex geometry. With the recent developments in computational resources and the availability of robust machine learning frameworks, scientific machine learning has taken center stage, especially in tasks related to solving PDEs. These solutions are often learned in a supervised manner using numerically generated labels for the training (Guo et al., 2016; Zhu and Zabaras, 2018). However, recently, physics-informed neural networks (PINNs) (Raissi et al., 2019) and operator learning (Lu et al., 2021; Li et al., 2020) have shown their potential to revolutionize scientific computing. While operator learning focuses on representing the kernel that transforms inputs to outputs, like learning a PDE solver for many PDE parameters (included in the training), a PINN is meant to learn the functional solution of a specific PDE and is not inherently designed to be applied to various PDE parameters, unless transfer or meta-learning is involved, which is possible when the changes in PDE parameters are small (Goswami et al., 2020; Qin et al., 2022). Since PINN training takes up the role of inference in machine learning, the efficiency of the training is crucial, which is currently not the case. Despite this fundamental limitation, the quest to solve this problem has only intensified, as the functional representation of the PDE solutions offers all kinds of flexibility and accuracy features.
In PINNs, we are constructing neural network functional representations of the solutions of PDEs that would otherwise be defined on a fixed mesh and computed through numerical methods, like finite difference or finite element. The functional representation (an approximation) offers a solution in a form that would only be available if we could solve the PDE analytically. The neural network, \(f(\mathbf{x})\), where \(\mathbf{x}\) represents the coordinates of the domain of interest, offers flexibility of representation in irregular domains and domains with gaps, and offers accuracy beyond numerical approximations, especially in computing the derivatives of the function using automatic differentiation (Baydin et al., 2018). This functional representation of the solution of a PDE is attained through a neural network optimization problem that can be costly. Conventional PINN training consumes 10000s of epochs at a cost that is often much higher than that of traditional numerical methods, which makes the use of PINNs for practical applications, beyond solving exotic equations, less attractive. The primary reason behind the high cost of PINNs is the number of epochs needed to converge, especially for complex functional solutions (Wang et al., 2021). Some of these limitations can be attributed to the low-frequency bias of neural networks (Rahaman et al., 2018; Xu, 2019; Huang and Alkhalifah, 2022). A randomly initialized multi-layer perceptron typically requires many epochs of training (sometimes thousands) before it can find a path toward a potential local minimum. In other words, it is first lost in a random search, considering the often high dimensional nature of the neural network space, before it finds its footing (Qin et al., 2022). Part of the problem is the small imprint that the solution coordinate values (inputs), limited by their dimension (often three space and time), can exert on the neural network, which forces that network to initially roam freely in the parameter space with little guidance. This limitation has been mildly addressed by increasing the influence of the inputs through encoding, which allows for a higher dimensional representation of the input space, in which scalar inputs are replaced by vectors. For example, frequency or positional encoding maps the small differences in coordinates between neighboring samples to more profound changes in the network, represented by bigger differences in the positional vector representation (Liu et al., 2021; Takikawa et al., 2021; Huang et al., 2021).
To improve the efficiency of PINNs training, three components of the PINNs machinery have been addressed: the neural network (NN) architecture design, the loss function design, and the training process (Cuomo et al., 2022). In this paper, we focus on the NN design, and specifically on representing the input (with an embedding layer). To some extent, PINNs could be regarded as a neural field representation task (e.g., NeRF), whose inputs are spatial coordinates and outputs are the voxels or the physical fields, but with a PDE loss as the training signal. The success of embedding methods, specifically the encoding of the input coordinates of the neural field representation, and the recent huge progress on NeRF enabled by multi-resolution hash encoding (Muller et al., 2022), prompted a logical question: _can we leverage hash encoding to accelerate the PINNs training?_ Hash encoding of the input coordinates, building on hash functions that have long been used for data indexing, can provide a more locally-aware representation of the coordinate values. Applied over multiple resolutions, this feature provided considerable acceleration in the convergence of neural network functional representations of images (NeRF). Convergence rates of 1000s of epochs were reduced to below 100. This is a major improvement that allowed for the practical use of NeRFs. However, the supervised training involved in such representations did not require the calculation of derivatives, as is needed by the PDE loss function in PINNs.
In this paper, we introduce hash encoding to PINNs. We investigate its capability in solving the outstanding cost limitation of PINN by reducing the number of epochs needed to converge. We also investigate the applicability of automatic differentiation through hash encoding. Alternatively, we use the finite-difference method to ensure the stable calculation of the derivatives of the NN function with hash encoding and use it in the PINNs training. In the numerical examples, we test our approach on several PDEs to demonstrate its potential in considerably reducing the cost of PINNs, as well as share the limitations we encountered.
The main contributions of this study are the following:
* We propose an efficient physics-informed neural network by means of hash encoding.
* We make use of the finite difference method to obtain the first and second-order derivatives and avoid the influence of discontinuous derivatives on automatic differentiation.
* We validate our method on the three PDE boundary value problems, including Burgers equation, Helmholtz equation, and Navier-Stokes equation, and achieve 10-fold acceleration in PINNs training.
In the following sections, we first briefly summarize the related works in Section 2 and then introduce the preliminaries and the proposed PINN using hash encoding in Section 3. To showcase the efficiency and accuracy of the proposed method, we present the experimental settings and results in Section 4, followed by a discussion in Section 5. Finally, we conclude and point to potential future work in Section 6.
## 2 Related Work
### Physics-informed neural networks
The concept of using a neural network functional representation to solve PDEs was first introduced in the 20th century and was validated by the universal approximation theorem (Hornik et al., 1989, Lagaris et al., 1998). Raissi et al. (2019) introduced the physics-informed neural network (PINN) framework and showed its application in fluids, quantum mechanics, reaction-diffusion systems, and the propagation of nonlinear shallow-water waves. The general idea of PINNs is to train a mapping function from the input coordinates (spatial coordinates and/or time) to the output physical field, which satisfies the physical governing equation. The loss function comprises the PDE residuals and any initial or boundary conditions; thus, it is regarded as an unsupervised technique. Alkhalifah et al. (2020), Sitzmann (2020), and Huang and Alkhalifah (2022) showed its potential as an efficient surrogate modeling approach for frequency-domain wavefields. PINN solutions can adapt to any model shape, including irregular topography and interior geometry. However, a PINN is trained to provide the solution for specific PDE parameters, and thus, it requires retraining or transfer learning if the PDE parameters change (Goswami et al., 2020), which limits its rapid use as a numerical solver of PDEs with varying parameters, like those we encounter in an inversion process. The model architecture, the training samples, the loss function, and even the initialization of the NN all have distinct effects on the convergence of PINNs. Wu et al. (2023) proposed residual-based adaptive distribution and residual-based adaptive refinement with distribution to improve the sample efficiency during training. Huang and Alkhalifah (2022) proposed a single reference frequency loss function, and Huang and Alkhalifah (2022) proposed frequency upscaling and neuron splitting to help PINNs solving the Helmholtz equation converge at high frequencies. Sharma and Shankar (2022) proposed meshless discretizations for derivative calculation to accelerate the training of PINNs. Qin et al. (2022) proposed Meta-PDE, which involves using gradient-based meta-learning to amortize the training time needed to fit the NN on a problem drawn from a distribution of parameterized PDEs, and achieves intermediate-accuracy approximations with up to an order-of-magnitude speedup. However, even with these promising developments, there is still a long way to go toward the ultimate goal of replacing numerical simulation with neural networks.
### Input Encoding
The objective of PINNs is to train an NN function of coordinate inputs to output a solution that respects the physical laws. It has been shown that the success of such a task relies heavily on the embedding that maps the input of the NNs to a higher-dimensional space (positional encoding). Early examples of encoding the input of an NN, training-free encoding, include the basic one-hot encoding (Harris and Harris, 2013), the kernel trick (Theodoridis and Koutroumbas, 2006), and later, the implementation of positional encoding using sine and cosine functions (Vaswani et al., 2017). The latter approach has resulted in convergence improvements in PINNs (Huang et al., 2021). Muller et al. (2020) developed the one-blob encoding, a continuous variant of the one-hot encoding, which shows better performance compared to encoding using sinusoidal functions. Compared to these analytic encoding methods, recent progress on parametric encodings, which make use of additional trainable parameters in an auxiliary data structure, like a grid or tree, has shown state-of-the-art performance, e.g., grid-based or tree-based encoding (Jiang et al., 2020; Mehta et al., 2021; Martel et al., 2021; Sun et al., 2022; Muller et al., 2022). Among these methods, the multi-resolution hash encoding (Muller et al., 2022) has reduced the training cost of NeRF from days to seconds. It reduces the memory access operations and the number of floating point operations by means of a multi-resolution hash table of trainable feature vectors, whose values are optimized through stochastic gradient descent, achieving a considerable increase in efficiency. Although the utilization of hash encoding has taken computer vision with neural networks to a new era, its potential benefits to PINNs are still unclear and need to be explored because, unlike NeRF, which relies on supervised learning, PINNs are driven by the corresponding PDE, requiring derivative calculations of the solution with respect to the input.
To the best of our knowledge, we are the first to combine hash encoding with physics-informed neural networks with the fundamental purpose of reducing the cost of training PINNs.
## 3 Methodology
In this section, we first review the framework of physics-informed neural networks (PINNs) and the concept behind hash encoding. Then we investigate the incorporation of hash encoding into PINNs, and specifically analyze the options for differentiation, including finite differences and automatic differentiation, considering the discontinuous nature of the derivatives of the vanilla multi-resolution hash encoding.
### Preliminaries
Considering a connected domain of \(n\) dimensions \(\Omega\subseteq\mathbb{R}^{n}\) and boundary \(\partial\Omega\), a general time-dependent PDE can be defined as:
\[u_{t}(\mathbf{x})+S(u(\mathbf{x}),a(\mathbf{x}))=0,\quad\mathbf{x}\in\Omega, \quad t\in[0,T], \tag{1}\]
where \(t\) and \(\mathbf{x}\) are the time and spatial coordinates, respectively, \(S(u,a)\) is a non-linear differential operator, and \(a\in\mathcal{A}\) represents the parameters of the PDE, e.g., coefficients and initial or boundary conditions, and \(u\) represents the physical field we want to solve for. In vanilla PINNs, a neural network \(\Phi(\theta,\mathbf{x},t)\), parameterized by the trainable parameters \(\theta\), is trained to map the input coordinates (including time for time-dependent equations) to the output, which represents the physical field (e.g., velocity, pressure, vorticity, and so on) at the input coordinate location, satisfying the following equation:
\[\frac{\partial\Phi(\theta,\mathbf{x})}{\partial t}+S(\Phi(\theta,\mathbf{x}), a(\mathbf{x}))=0. \tag{2}\]
Thus, we can use the mean square residual of this PDE, as well as any initial or boundary conditions, in the loss function,
\[\mathbf{L}=\underbrace{\frac{1}{N_{i}}\sum_{j=1}^{N_{i}}\left(\frac{\partial \Phi(\theta,\mathbf{x}_{j})}{\partial t}+S(\Phi(\theta,\mathbf{x}_{j}),a( \mathbf{x}_{j}))\right)^{2}}_{\text{Interior PDE loss}}+\underbrace{\frac{1}{N_{b}} \sum_{i=1}^{N_{b}}\left(\mathcal{B}(\Phi(\theta,\mathbf{x}_{i}))-u_{b}\left( \mathbf{x}_{i}\right)\right)^{2}}_{\text{Supervised loss on boundary}}, \tag{3}\]
to optimize the parameters of the NN, \(\theta\), where \(N_{i}\) is the number of collocation points in the domain and \(N_{b}\) is that on the boundary, \(u_{b}\) denotes the boundary values, and \(\mathcal{B}\) is the boundary operator, denoting derivatives or values of the field.
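For concreteness, the following is a minimal PyTorch sketch of the loss in Eq. (3) for a PDE with first-order time and space derivatives; the function names and interface are illustrative, not the implementation used in this paper.

```python
import torch

def pinn_loss(model, xt_int, xt_bnd, u_bnd, residual_fn):
    """Interior PDE loss (mean-squared residual) plus supervised
    boundary/initial-condition loss, as in Eq. (3)."""
    xt = xt_int.clone().requires_grad_(True)   # columns: (x, t)
    u = model(xt)
    # du/d(x, t) via automatic differentiation
    g = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    interior = (residual_fn(u, u_x, u_t) ** 2).mean()
    boundary = ((model(xt_bnd) - u_bnd) ** 2).mean()
    return interior + boundary
```

For the Burgers equation, for instance, `residual_fn` would implement \(u_t + u u_x - \nu u_{xx}\), with \(u_{xx}\) obtained by differentiating `u_x` once more.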
### Hash encoding
For function learning, we aim to improve the approximation quality of the NN to the PDE, and also the speed of training for a given NN. Note that speed is the main objective of this paper. A smart way is to encode the input query, e.g., the spatial coordinates and time, into a high-dimensional space. Here, we use the hash encoding proposed by Muller et al. (2022). The general idea of hash encoding is to combine a multi-resolution decomposition with a hash table to represent 3D shapes compactly. Then, the complex 3D real world can be represented by a small neural network and trainable hash encoding parameters. Figure 1 includes a diagram depicting the hash encoding mechanism, modeled after the diagram used by Muller et al. (2022). Each sample in the simulation domain \(\mathbf{x}_{i}\) can be described using \(S\) levels of resolution, from low to high. For each level of resolution, like the pink or the blue dots in Figure 1, we calculate the embedding vector. Specifically, we first find its voxel vertices (4 vertices for the 2D case, and 8 vertices for the 3D case) and then use a trainable hash table, which stores feature vectors of fixed length \(L\) in a table of size \(T\) for each level of resolution, to evaluate the corresponding embedding vector for each vertex. We then use linear interpolation of the vertex vectors to obtain the embedding vector for \(\mathbf{x}_{i}\) at each level. Finally, the hash encoding for \(\mathbf{x}_{i}\) is the concatenation of these embedding vectors from the different levels.
Specifically, given the number of encoding levels \(S\), and the finest and coarsest resolutions \(R_{f}\) and \(R_{c}\), the resolution of each level \(R_{s}\) is determined by means of a geometric progression, as follows:
\[\begin{split} R_{s}&:=\left\lfloor R_{c}\cdot b^{s} \right\rfloor,\\ b&:=\exp\left(\frac{\ln R_{f}-\ln R_{c}}{S-1} \right).\end{split} \tag{4}\]
Then for \(\mathbf{x}_{i}\), its voxel vertices at level \(s\) are \(\lfloor\mathbf{x}_{i,s}\rfloor:=\lfloor\mathbf{x}_{i}\cdot R_{s}\rfloor\) and \(\lceil\mathbf{x}_{i,s}\rceil:=\lceil\mathbf{x}_{i}\cdot R_{s}\rceil\). As for the coarse resolutions, where the number of vertices \((R_{s}+1)^{d}\) is smaller than the hash table size \(T\), the hash table can provide a one-to-one query. However, for higher resolutions, the mapping between the vertices and the hash table is achieved by a hash function
\[h(\mathbf{x}_{i})=\left(\bigoplus_{j=1}^{d}x_{i,j}\pi_{j}\right)\mod T, \tag{5}\]
where \(\bigoplus\) is a bitwise "exclusive or" (XOR) and the \(\pi_{j}\) are unique, large prime numbers (Teschner et al., 2003). This kind of encoding not only provides a compact representation of the input dimensions but is also quite efficient, with a computational complexity of \(O(1)\) due to the practically instant hash table lookup mechanism.
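To make the lookup concrete, below is a minimal NumPy sketch of the encoding for a 2D input. The prime constants follow Muller et al. (2022), while the table initialization and resolution settings are illustrative rather than those of the tiny-cuda-nn implementation.

```python
import numpy as np

MASK = (1 << 64) - 1
PRIMES = (1, 2654435761, 805459861)   # constants used by Muller et al. (2022)

def hash_index(corner, T):
    """Eq. (5): XOR the coordinate-times-prime products, modulo table size T."""
    h = 0
    for c, p in zip(corner, PRIMES):
        h ^= (int(c) * p) & MASK
    return h % T

def hash_encode(x, tables, R_c=2, R_f=64):
    """Concatenate bilinearly interpolated feature vectors over S levels
    for a point x in [0, 1]^2; tables[s] has shape (T, L)."""
    S = len(tables)
    b = np.exp((np.log(R_f) - np.log(R_c)) / (S - 1))   # Eq. (4)
    feats = []
    for s in range(S):
        R_s = int(np.floor(R_c * b ** s))
        lo = np.floor(x * R_s).astype(int)   # lower voxel vertex
        w = x * R_s - lo                     # interpolation weights
        T, L = tables[s].shape
        f = np.zeros(L)
        for dx in (0, 1):                    # bilinear interpolation over 4 vertices
            for dy in (0, 1):
                wgt = (w[0] if dx else 1 - w[0]) * (w[1] if dy else 1 - w[1])
                f += wgt * tables[s][hash_index(lo + np.array([dx, dy]), T)]
        feats.append(f)
    return np.concatenate(feats)             # length S * L, fed to the MLP

# usage: 8 levels, table size 2**10, feature length 4
tables = [np.random.default_rng(s).normal(scale=1e-4, size=(2 ** 10, 4)) for s in range(8)]
vec = hash_encode(np.array([0.3, 0.7]), tables)
```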
### PINNs with hash encoding
We aim to make use of hash encoding to accelerate the convergence of PINNs. However, unlike in NeRF applications, the loss function of PINNs requires derivatives of the output field with respect to the input coordinates. Since the proposed hash encoding includes a linear interpolation, these derivatives can be discontinuous, which results in inaccurate evaluations, especially near the boundaries of the resolution grid, and such discontinuities are more frequent at the high resolution levels of the hash encoding. Taking a simple function \(f(x)=sin(x)\) as an example, whose various order derivatives are readily available, we test the performance of automatic differentiation (which is used often in PINNs) on a simple network function of \(x\) trained to output the value of \(f(x)\). However, this simple network will incorporate a hash encoding layer. As shown in Figure 2, we observe that the derivative values based on the NN with hash encoding are not accurate, and their accuracy highly depends on the hyper-parameters of the hash encoding. Specifically, we need to choose the coarsest and finest resolutions, the encoding levels, as well as the hash table size, carefully to mitigate the impact of the discontinuities, which will also depend on the collocation points. In our opinion, the strength of the hash encoding, which fits high-frequency details well through high resolution hashing, will be negated by this weakness in the AD calculation. For example, using many levels, including the finest of resolutions, would make the whole neural network (NN with encoding) fit the values at the training sample points, but the resulting function will lack smoothness, yielding unstable derivatives.
Figure 1: The diagram of the hash encoding, where different colors denote the different scales (resolution) and corresponding embedding vectors.
This is a direct consequence of the linear interpolation used for the hash vectors. As a result, the derivative of the NN evaluated via automatic differentiation (AD) using the current implementation of hash encoding is unstable.
In the quest for an efficient implementation, we instead use the finite-difference (FD) method to calculate the derivatives, as automatic differentiation is also expensive for higher-order derivatives. Since the finite difference, owing to its name, calculates derivatives over a finite length, it is relatively immune to point-induced derivative discontinuities. Nevertheless, the accuracy might suffer slightly when dealing with functions with abrupt changes, which is a general weakness of PINNs. The FD method is built on the Taylor series expansion. Given a grid point \(\mathbf{x}_{i}\), its physical field \(u(\mathbf{x}_{i})\) can be approximated by limiting the length of its Taylor series expansion, as follows:
\[u\left(\mathbf{x}_{i}+\Delta\mathbf{x}\right)=u\left(\mathbf{x}_{i}\right)+ \left.\Delta\mathbf{x}\frac{\partial u}{\partial\mathbf{x}}\right|_{\mathbf{x }_{i}}+\left.\frac{\Delta\mathbf{x}^{2}}{2}\frac{\partial^{2}u}{\partial \mathbf{x}^{2}}\right|_{\mathbf{x}_{i}}+\cdots. \tag{6}\]
Stopping at the second-order accuracy, the finite-difference first- and second-order derivatives are given by:
\[\begin{split}\frac{\partial u}{\partial\mathbf{x}}\bigg{|}_{ \mathbf{x}_{i}}&\approx\frac{u\left(\mathbf{x}_{i}+\Delta \mathbf{x}\right)-u\left(\mathbf{x}_{i}-\Delta\mathbf{x}\right)}{2\Delta \mathbf{x}},\\ \frac{\partial^{2}u}{\partial\mathbf{x}^{2}}\bigg{|}_{\mathbf{x }_{i}}&\approx\frac{u\left(\mathbf{x}_{i}+\Delta\mathbf{x} \right)-2u\left(\mathbf{x}_{i}\right)+u\left(\mathbf{x}_{i}-\Delta\mathbf{x} \right)}{\Delta\mathbf{x}^{2}}.\end{split} \tag{7}\]
During the training, the mesh points needed for the derivative calculation should be fed into the NN to get the corresponding field values.
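A minimal sketch of Eq. (7) applied to a (hash-encoded) network in one dimension is given below; extending it to several dimensions amounts to offsetting one coordinate at a time. The step size and interface are illustrative.

```python
import torch

def fd_derivatives(model, x, dx=1e-3):
    """First- and second-order central differences (Eq. 7) of a scalar
    network u(x), avoiding automatic differentiation through the
    hash-encoding lookup."""
    u_p, u_0, u_m = model(x + dx), model(x), model(x - dx)
    du = (u_p - u_m) / (2 * dx)               # first derivative
    d2u = (u_p - 2 * u_0 + u_m) / dx ** 2     # second derivative
    return du, d2u
```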
As shown in Figures 3(a) and 3(b), the derivatives of the NN with hash encoding are generally more accurate than the AD ones, but we still need to carefully pick the encoding hyperparameters. Here, we show a failure case with the FD method in Figure 3(c), resulting from using a small hash table, which forces the NN to learn to distinguish the samples at different locations, yielding a decrease in the accuracy of the derivative calculation. Compared to the AD method, although the second-order derivative in (b) is not smooth, its trend is consistent with the analytical solution. Later, we will share our choices for this and other parameters, e.g., the number of resolution levels.
## 4 Experiments
In this section, we will showcase the effectiveness of the proposed method through its application to three well-known partial differential equations (PDEs). In all cases, we use the MLP architecture as the backbone and Tanh as the activation function. For a fair efficiency comparison, we use the FD method to obtain the derivatives for both the vanilla PINN and the PINN with hash encoding. We slightly increase the width of the vanilla PINN to make the number of trainable parameters almost equivalent to that of the PINN with hash encoding. We train these neural networks with an Adam optimizer and a decaying learning rate in all experiments. All experiments used an 80GB NVIDIA A100 GPU card. Our objective is to demonstrate the gains in efficiency in training PINNs with hash encoding.
Figure 2: Illustration of the accuracy of the first- and second-order derivatives calculation by the AD method. We use this NN to fit \(f=sin(x)\) with a multi-resolution hash encoding and visualize its first- and second-order derivatives for a hash table size of 10 in (a), and also visualize the derivatives with hash table sizes of 8 and 4 in (b) and (c), respectively.
Thus, for each test, we set a threshold for the solution accuracy (the absolute error between the predicted solution via the NN and the reference solution) to stop the training, and we evaluate the approaches based on the number of epochs and the cost of each epoch. Due to the flexibility of the FD method for derivative calculations, we implemented the NN with the tiny-cuda-nn (Muller et al., 2021) framework to accelerate the training even further.
**Burgers equation**. First, we consider a one-dimensional time-dependent equation with Dirichlet boundary conditions, representing the flow of a viscous fluid, called the Burgers equation, a widely used benchmark in PINNs. The governing equations are given by (Burgers, 1948):
\[\begin{split}&\frac{\partial u}{\partial t}+u\frac{\partial u}{ \partial x}=\nu\frac{\partial^{2}u}{\partial x^{2}},\ \ \ t\in[0,1],\ \ x\in[-1,1],\\ & u(t,-1)=u(t,1)=0,\\ & u(0,x)=-sin(\pi x),\end{split} \tag{8}\]
where \(\nu\) is the viscosity parameter and is set to \(\frac{0.01}{\pi}\) here. For the vanilla PINN, we use an MLP with three hidden layers {96,96,96}, while for the PINN with hash encoding, we use an MLP of size {64,64,64}. The learning rate is 1e-3, and it is reduced by a factor of 0.8 at epochs 3000, 5000, 7000, and 10000. We consider the numerical solution as a reference to evaluate the accuracy of the predictions. Figure 4(a) shows the convergence rates of the proposed method and the vanilla PINN using 12800 collocation points. If we, as stated earlier, focus on the convergence speed by measuring the number of epochs required to attain a predefined accuracy threshold, we find that the PINN with hash encoding reaches this target accuracy within fewer than 2500 epochs, while the vanilla PINN needs almost 20000 epochs. The threshold accuracy considered here admitted a solution that is equivalent to the reference solution (Figure 4(b)).
**Helmholtz equation**. Next, we test the method on a problem with second-order derivatives, given by the infamous Helmholtz equation, which describes wave phenomena and has many applications in seismic and electromagnetic fields (Riley et al., 2002).
Figure 4: a) The histories of convergence and testing data errors for the Burgers equation tests, and b) the prediction of PINN with hash encoding and the numerical reference solutions.
Figure 3: Illustration of the accuracy of the first- and second-order derivatives calculation by the FD method. We use an NN to fit \(f=sin(x)\) with the multi-resolution hash encoding and visualize its first- and second-order derivatives for a hash table size of 10 in (a), and also visualize the derivatives with hash table sizes of 8 and 4 in (b) and (c), respectively.
Here we consider a simple form of the Helmholtz equation
\[\begin{split}&\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\lambda u-f(x,y)=0,\\ & u(x,2)=0,\ \ u(-2,y)=0,\ \ u(x,-2)=0,\ \ u(2,y)=0,\\ & f=-\left(a_{1}\pi\right)^{2}\sin\left(a_{1}\pi x\right)\sin\left(a_{2}\pi y\right)\\ &\quad-\left(a_{2}\pi\right)^{2}\sin\left(a_{1}\pi x\right)\sin\left(a_{2}\pi y\right)\\ &\quad+\lambda\sin\left(a_{1}\pi x\right)\sin\left(a_{2}\pi y\right),\end{split} \tag{9}\]
where \(f\) is the source function, \(u\) is the wavefield, \(\lambda\) is the square of the wavenumber, and \(a_{1}\) and \(a_{2}\) are the parameters to control the sinusoidal nature of the source term. An analytical solution for this equation exists and is given by (Wang et al., 2021a):
\[u(x,y)=sin(a_{1}\pi x)sin(a_{2}\pi y). \tag{10}\]
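Since the source term in Eq. (9) is constructed from the solution itself, the residual of Eq. (10) vanishes identically, which can be checked numerically; the values of \(a_1\), \(a_2\), and \(\lambda\) below are illustrative.

```python
import numpy as np

a1, a2, lam = 1.0, 4.0, 1.0
x, y = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))

u = np.sin(a1 * np.pi * x) * np.sin(a2 * np.pi * y)      # Eq. (10)
u_xx = -(a1 * np.pi) ** 2 * u                             # exact derivatives of Eq. (10)
u_yy = -(a2 * np.pi) ** 2 * u
f = (-(a1 * np.pi) ** 2 - (a2 * np.pi) ** 2 + lam) * u    # source term of Eq. (9)

print(np.abs(u_xx + u_yy + lam * u - f).max())            # ~0 (floating-point error)
```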
In this case, we use an MLP with three hidden layers {144,144,144} for the vanilla PINN, while for the PINN with hash encoding, we use an MLP of size {128,128,128}. The learning rate is 1.5e-3, and it is reduced by a factor of 0.8 at epochs 3000, 5000, and 7000. We uniformly sample 10000 collocation points to train the NN. The convergence rate for the Helmholtz equation training is shown in Figure 5a. We observe that the PINN with hash encoding admits much faster convergence, and the PDE loss and testing data errors of the PINN with hash encoding can still decrease. Nevertheless, the predicted and reference solutions, shown in Figure 5b, look the same.
**Navier-Stokes equation.** Finally, we test the proposed method on a well-known equation in dynamic fluids, the Navier-Stokes equation. Specifically, we consider the incompressible fluid case, yielding the two governing equations based on mass and momentum conservation, as follows (Ethier and Steinman, 1994):
\[\begin{split}\partial_{t}\vec{u}(x,y,t)+\vec{u}(x,y,t)\cdot \nabla\vec{u}(x,y,t)+\nabla p&=\frac{1}{Re}\Delta\vec{u}(x,y,t)+ f(x,y),& x\in(0,1)^{2},t\in(0,T]\\ \nabla\cdot\vec{u}(x,t)&=0,& x\in(0,1)^{2},t \in[0,T]\\ \vec{u}(x,0)&=\vec{u}_{0}(x),& x\in(0,1)^{2} \end{split} \tag{11}\]
where \(Re\) is the Reynolds number and is set to 100 in our experiments, \(\nabla\cdot\) is the divergence operator, \(\Delta\) is the Laplacian operator, \(\vec{u}\) is the velocity field, \(\vec{u}_{0}\) is the initial velocity field, \(p\) is the pressure, and \(f\) is the external force, which we set to zero here. The vanilla PINN has three hidden layers {112,112,112}; in contrast, we use {64,64,64} for the PINN with hash encoding. The learning rate is 1.2e-3 and is reduced by a factor of 0.8 at epochs 3000, 5000, and 7000. We uniformly sample 10000 collocation points to train the NN. The results are shown in Figure 6(a), where the reference solutions are obtained from numerical solvers. Like in the previous experiments, the proposed method converges fast and can reach the target accuracy (1.5e-3) within only 2270 epochs. However, even with 50000 epochs, the vanilla PINN cannot meet the target accuracy. This demonstrates that the proposed method accelerates training and improves accuracy.
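For completeness, the residuals of Eq. (11) can be sketched as follows for a network mapping \((x,y,t)\) to \((u,v,p)\); automatic differentiation is used here for brevity, whereas the experiments in this paper rely on FD derivatives, and the interface is illustrative.

```python
import torch

def ns_residuals(model, xyt, Re=100.0):
    """Momentum and continuity residuals of Eq. (11) for a network
    mapping (x, y, t) to (u, v, p); the forcing f is set to zero."""
    z = xyt.clone().requires_grad_(True)      # columns: (x, y, t)
    u, v, p = model(z).split(1, dim=1)

    def grad(w):
        return torch.autograd.grad(w, z, torch.ones_like(w), create_graph=True)[0]

    u_x, u_y, u_t = grad(u).split(1, dim=1)
    v_x, v_y, v_t = grad(v).split(1, dim=1)
    p_x, p_y, _ = grad(p).split(1, dim=1)
    u_xx, u_yy = grad(u_x)[:, 0:1], grad(u_y)[:, 1:2]
    v_xx, v_yy = grad(v_x)[:, 0:1], grad(v_y)[:, 1:2]

    mom_u = u_t + u * u_x + v * u_y + p_x - (u_xx + u_yy) / Re   # momentum, x
    mom_v = v_t + u * v_x + v * v_y + p_y - (v_xx + v_yy) / Re   # momentum, y
    cont = u_x + v_y                                             # continuity
    return mom_u, mom_v, cont
```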
Figure 5: a) The histories of convergence and testing data errors for the Helmholtz equation tests, and b) the prediction of PINN with hash encoding and the numerical reference solutions.
**Training efficiency comparison.** The above experiments demonstrate that the PINN with hash encoding can be trained to achieve a good target accuracy within far fewer epochs than the vanilla PINN. In Table 1, we share a quantitative comparison of the two methods for the three examples used here. We find that the PINN with hash encoding can solve these three famous equations within 30 seconds using a single NVIDIA A100 GPU card.
## 5 Discussion
The neural network solution of a PDE, in the form of a function of the coordinates of the solution space, allows for a continuous representation of the solution and its derivatives, rendering opportunities in interpolation, extrapolation, and inversion. The training of such an MLP neural network has proved to be challenging, as the high-dimensional topology of the loss function is rather complex, especially for complex solutions. The back-propagation necessary to determine the direction in which we update the neural network parameters encounters a limited imprint of the training samples' coordinates, given by their scalar input values with no neighborhood awareness, in the forward propagation process. Such scalar inputs are also missing any scale resolution information, which would help the network train on more locally aware inputs. This renders the training of conventional scalar-input PINNs more point-dependent. Encoding offers the input a more profound impact on the network, often in the form of a vector representation of the input. With multi-resolution hash encoding, we manage to embed some regional information, beyond the point, and at multiple scales, into the inputs. Such area-aware information embedded in the forward propagation improves the topology of the loss function and renders more effective updates to the neural network almost instantly, even if it is initialized randomly.
The hash encoding implementation we inherited involves learnable parameters given by the lookup feature vectors. These learned parameters are crucial to capturing the multi-resolution nature of the PDE solution with respect to the input coordinates.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Examples & Methods & Time/epoch & Total cost & Parameter size \\ \hline \multirow{2}{*}{Burgers equation} & Vanilla PINN & 7.43 ms & 144 s & 21504 \\ & PINN with hash encoding & 7.53 ms & 16.8 s & 20416 \\ \hline \multirow{2}{*}{Helmholtz equation} & Vanilla PINN & 7.03 ms & 155 s & 46080 \\ & PINN with hash encoding & 7.10 ms & 21 s & 42944 \\ \hline \multirow{2}{*}{Navier-Stokes equation} & Vanilla PINN & 12.8 ms & 640 s & 28672 \\ & PINN with hash encoding & 13.1 ms & 27 s & 24608 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The efficiency comparison between the vanilla PINN and PINN with hash encoding.
Figure 6: a) The histories of convergence and testing data errors for the Navier-Stokes equation tests, and b) the prediction of PINN with hash encoding and the numerical reference solutions, where \(u\) and \(v\) are the horizontal and vertical component of \(\vec{u}\).
To optimize the hash encoding, we have to determine the optimal hyperparameters for the lookup feature vectors, and that includes the number of resolution levels, the number of feature vectors per level (the size of the hash table), the base resolution, and the size of the feature vector. We set the latter two parameters to 2 and 4, respectively. The other two parameters depend on the expected resolution and complexity of the solution of the PDE. In the Burgers equation, we use 9 for both parameters, while in the Helmholtz equation, we use 8 for both. In the Navier-Stokes equation, we use 9 and 10 for the number of resolution levels and feature vectors, respectively.
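For reference, the stated hyperparameter choices can be summarized as follows. The dictionary keys mimic tiny-cuda-nn's naming, and reading the quoted per-level table sizes as exponents of two (as in tiny-cuda-nn) is our assumption.

```python
# Stated values from the text; "log2_hashmap_size" assumes the quoted table
# sizes are powers-of-two exponents, as in tiny-cuda-nn (an assumption).
hash_hyperparams = {
    "base_resolution": 2,
    "n_features_per_level": 4,
    "burgers":       {"n_levels": 9, "log2_hashmap_size": 9},
    "helmholtz":     {"n_levels": 8, "log2_hashmap_size": 8},
    "navier_stokes": {"n_levels": 9, "log2_hashmap_size": 10},
}
```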
The hash function, unlike positional encoding, is not globally differentiable. It includes obvious discontinuities at the hash interval boundaries due to the linear interpolation used for the hash vectors. Thus, due to the point nature of automatic differentiation, this limitation is exaggerated when the hash table is small. As a result, to mitigate this problem, the hash encoding hyperparameters must be chosen carefully. An alternative solution is provided by using the finite-difference scheme to approximate the derivatives of the solution. This approach also admits a more efficient calculation of higher-order derivatives as compared to AD. Thus, we resorted, in this study, to finite-difference calculation of the derivatives. However, we could also utilize higher-order interpolation methods for the hash vectors, as recently proposed by Heo et al. (2023) for NeRF applications, which we will explore in future work.
## 6 Conclusion
We proposed a physics-informed neural network combined with hash encoding, resulting in fast convergence to an accurate solution of boundary value problems. Specifically, we investigated the limitations of an NN with hash encoding in calculating the derivatives via automatic differentiation and proposed using the finite difference method as an alternative to address the issue of non-smooth gradients, and to help speed up such calculations. We applied our method to a number of examples, including the Burgers equation, the Helmholtz equation, and the Navier-Stokes equation. With the proposed PINN with hash encoding, the training cost decreases by a factor of 7 to 24. This brings PINNs closer to semi-instant training, addressing their main drawback, the training cost.
## 7 Acknowledgement
The authors thank KAUST for supporting this research and the SWAG group for the collaborative environment. This work utilized the resources of the Supercomputing Laboratory at King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia.
|
2303.00954 | Large Deviations for Accelerating Neural Networks Training | Artificial neural networks (ANNs) require tremendous amount of data to train
on. However, in classification models, most data features are often similar
which can lead to increase in training time without significant improvement in
the performance. Thus, we hypothesize that there could be a more efficient way
to train an ANN using a better representative sample. For this, we propose the
LAD Improved Iterative Training (LIIT), a novel training approach for ANN using
large deviations principle to generate and iteratively update training samples
in a fast and efficient setting. This is exploratory work with extensive
opportunities for future work. The thesis presents this ongoing research work
with the following contributions from this study: (1) We propose a novel ANN
training method, LIIT, based on the large deviations theory where additional
dimensionality reduction is not needed to study high dimensional data. (2) The
LIIT approach uses a Modified Training Sample (MTS) that is generated and
iteratively updated using a LAD anomaly score based sampling strategy. (3) The
MTS sample is designed to be well representative of the training data by
including most anomalous of the observations in each class. This ensures
distinct patterns and features are learnt with smaller samples. (4) We study
the classification performance of the LIIT trained ANNs with traditional batch
trained counterparts. | Sreelekha Guggilam, Varun Chandola, Abani Patra | 2023-03-02T04:14:05Z | http://arxiv.org/abs/2303.00954v1 | # Large Deviations for Accelerating Neural Networks Training
###### Abstract
Artificial neural networks (ANNs) require tremendous amount of data to train on. However, in classification models, most data features are often similar which can lead to increase in training time without significant improvement in the performance. Thus, we hypothesize that there could be a more efficient way to train an ANN using a better representative sample. For this, we propose the LAD Improved Iterative Training (LIIT), a novel training approach for ANN using large deviations principle to generate and iteratively update training samples in a fast and efficient setting. This is exploratory work with extensive opportunities for future work. The thesis presents this ongoing research work with the following contributions from this study: (1) We propose a novel ANN training method, LIIT, based on the large deviations theory where additional dimensionality reduction is not needed to study high dimensional data. (2) The LIIT approach uses a Modified Training Sample (MTS) that is generated and iteratively updated using a LAD anomaly score based sampling strategy. (3) The MTS sample is designed to be well representative of the training data by including most anomalous of the observations in each class. This ensures distinct patterns and features are learnt with smaller samples. (4) We study the classification performance of the LIIT trained ANNs with traditional batch trained counterparts.
Large deviations, anomaly detection, high-dimensional data, multivariate time series
2. We present four LAD score based sampling strategies to design the MTS. Obtaining the LAD score based on a large deviations principle is computationally inexpensive. Therefore, one can analyze large and high dimensional datasets without additional dimensionality reduction procedures, allowing a more accurate and cost-effective scoring schema.
3. The use of the MTS, which is a smaller training sample, significantly reduces the computational time for large datasets.
4. We perform an empirical study on publicly available classification benchmark datasets to analyze the performance of the proposed method.
The work presented here is limited to simple classification based neural networks. Future work will include extending it to more complex ANNs.
## 2 Related Work
In this section, we provide a brief overview of sensitivity to training samples and speed of neural network training.
Artificial neural networks are powerful for general classification. However, their excellent performance often depends largely on a huge training set. A large body of research exists that studies the impact of training data size on neural network learning [6, 2]. In particular, it is evident that smaller training data leads to less efficient models. However, the vast computational expense associated with training on large sets of data makes the need to improve training practices essential, especially for online or real-time models.
Many methods exist that try to make neural network training faster. For instance, Wang et al. [7] use batch normalization in deep neural networks to improve the convergence rates. Zhong et al. [8] work on image classification using their agile convolutional neural network, SatCNN, for quick and effective learning with small convolutional kernels and deep convolutional layers. However, these works are limited to their problem domains and cannot be easily scaled to other data types.
Another alternative for improving the training speed is to modify the training samples. For instance, studies like Shanker et al. [5] look at the effect of standardization of data on the learning of the neural network. Kavzoglu [3] emphasizes the characteristics of training samples and uses representative training to improve the classification. These methods, however, fail to study the impact of smaller data on model performance and efficiency.
In this part of the thesis, we propose a novel training strategy that can be generalized across domains. The method replicates the true representation of the training features in a smaller sample, which can in turn be used for faster training and convergence. Due to the proper representation of even the most extreme observations, this method ensures faster learning with competitive performance.
## 3 Methodology
The most important aspect of classification models is the adequacy of the representative training samples for each class. Although the size of the training data is of considerable importance, acquiring a large number of representative training samples may be impractical when a large number of classes are involved. In particular, since most observations within each true class have similar features, multiple samples add little value in terms of novel information or patterns. In this section, we describe the traditional batch training approach in brief, followed by the LAD Improved Iterative Training approach. We present 4 sampling strategies used in the LIIT training and their respective algorithms.
### Definitions and Terminology
Before describing the detailed methodology, we list out the terminology and corresponding definitions that are used for this study.
**Definition 1**.: **LAD Score** is the Large deviations Anomaly Detection (LAD) generated anomaly score for each observation in the data.
**Definition 2**.: **Full-Training Data** is the available complete training dataset for the ANN. It must be noted that only a subset of the Full Training Data might be used to train the ANN in the LIIT approach. Hence we present a different terminology to differentiate it from the training data.
**Definition 3**.: **Batch Training** is the traditional ANN training method using mini-batches of training data.
**Definition 4**.: **Modified Training Sample (MTS)** is a smaller sample generated from the training data using a specific sampling algorithm.
**Definition 5**.: **LAD Improved Iterative Training (LIIT)** is the novel improved batch training approach to train the ANN.
### Classification Neural Network
For this analysis, we look at a basic classification algorithm. Figure 1 shows the architecture of a simple three-layer dense neural network.
The model is trained using the full training samples, with the convergence criterion set to zero validation loss for 5 epochs and the maximum number of epochs set to 180. Three different activation functions, ReLU, Tanh, and Softmax, are used for the three consecutive dense layers, respectively. A simple model was chosen to study the proof of concept of the representative sampling strategy presented in this part of the thesis. Further studies are needed to understand the relation between the model choice and the training sampling techniques.
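A minimal PyTorch sketch of the network in Figure 1 is given below; the hidden width of 64 is illustrative, since the text does not fix the layer sizes.

```python
import torch.nn as nn

# Three dense layers with ReLU, Tanh and Softmax activations (Figure 1):
# 10 input features, class scores for 3 classes.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 3), nn.Softmax(dim=1),
)
```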
### LAD Improved Iterative Training of The Neural Network
Traditionally, in batch training, the full training data is divided into smaller samples or batches. The ANN learns from each batch sequentially till all the observations from the full training data are exhausted, as demonstrated in Figure 2. In the LIIT training, we iteratively design and update the modified training samples, MTS, from the full training data. At each iteration, we train the ANN using batch training on the MTS till convergence. This partially trained model is then tested on the full training data to identify potential learning flaws. Since the current work is limited to classification models, the learning flaws comprise the misclassified data. The misclassified data is then used to derive the updated MTS, which is used to retrain the ANN. The process is illustrated in Figure 3. This is inspired by boosting techniques [4], where the subset creation depends on the previous model. However, unlike in the boosting setting, we retrain the same ANN.
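The outer loop can be sketched as follows, assuming a scikit-learn-style `fit`/`predict` interface and sampling functions implementing one of the strategies described below; this is a schematic of Figure 3, not the exact implementation.

```python
import numpy as np

def liit_train(model, x_train, y_train, init_sample, update_sample, n_updates=5):
    """LIIT training: build an initial MTS, then repeatedly grow the MTS
    from the misclassified training observations and retrain the same ANN."""
    mts = init_sample(x_train, y_train)                 # initial MTS indices
    model.fit(x_train[mts], y_train[mts])               # batch training on the MTS
    for _ in range(n_updates):
        errs = np.where(model.predict(x_train) != y_train)[0]  # learning flaws
        mts = np.concatenate([mts, update_sample(errs, x_train, y_train)])
        model.fit(x_train[mts], y_train[mts])           # retrain the same ANN
    return model
```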
Figure 1: Simple Classification Neural Network: The figure illustrates a dense neural network to classify data into 3 classes. The network takes an input of 10 dimensions and returns scores used to assign each class.
To determine and extract the MTS sample, any sampling algorithm can be used. However, to ensure a good representation, we designed four LAD score based sampling algorithms, along with a random sampling approach that is used as a baseline. The following are the sampling strategies used in our analysis:
1. **LAD Anomaly only (Repeated Entry)**: Observations with the highest anomaly scores in each true class are added to the training batch. Multiple copies of an observation can be added over iterations when the model repeatedly fails to classify it after re-training. See Algorithm 1.
2. **LAD Anomaly + Normal (Unique Entry)**: Equal parts of the high and low anomaly score observations are sampled for each true class. The final training batch contains a unique set of observations with no duplicate entries. See Algorithm 2.
3. **LAD Anomaly only (Unique Entry)**: This is similar to the **LAD Anomaly only (Repeated Entry)** approach. Observations with the highest anomaly scores in each true class are added to the training batch. However, the final training batch contains a unique set of observations with no duplicate entries. See Algorithm 3.
4. **LAD Quantile Samples (Repeated Entry)**: The observations are sampled using different quantiles of the anomaly score for each true class. Multiple copies are maintained in the training batch to ensure weighting from under-represented latent classes within each known true class. See Algorithm 4.
5. **Random**: In this model, we use random sampling from the available data. See Algorithm 5.
For this part, we sample \(\sim 5-6\%\) of the full training data at each iteration, which is then added to the modified training sample. We ensure equal weights for all true classes in the analysis. The LIIT approach is implemented with 6 iterations (1 initial and 5 updates), which brings the total to \(\sim 30\%\) of the full training data used in the LIIT approach.
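As an illustration, a NumPy sketch of the quantile-based strategy (Algorithm 4) is given below; the evenly spaced quantile grid is our reading of the strategy, not a verbatim transcription of the algorithm.

```python
import numpy as np

def quantile_mts(ana_score, y_train, c_size, n_classes):
    """Per class, pick the observations whose LAD scores sit at evenly
    spaced quantiles; repeated entries across iterations are allowed."""
    picks = []
    qs = np.linspace(0.0, 1.0, c_size)
    for k in range(n_classes):
        idx = np.where(y_train == k)[0]
        order = idx[np.argsort(ana_score[idx])]           # sorted by LAD score
        picks.extend(order[(qs * (len(order) - 1)).astype(int)])
    return np.array(picks)
```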
## 4 Experiments
In this section, we evaluate the classification performance of the simple neural networks on real data when trained using LAD sub-sampled data. We focus on the performance of the neural networks under different training and sampling settings.
The following experiments have been conducted to study the model:
Figure 2: Mini-Batch Training Algorithm
1. Computational Expense: The LIIT trained ANN model's ability to train on a smaller set of training samples and converge faster is compared to the fully trained model.
2. Classification Performance: The overall performance of the sub-sampled models on multiple benchmark datasets is studied. For this analysis, we consider Area Under the Curve (AUC) as the performance metric to study classification.
3. Stability to Perturbations: Perturbations of up to 8% are added to the test data, which is used to study the change in performance of all models (a minimal sketch follows the next paragraph).
To maintain a fair comparison, the number of epochs is fixed to a maximum count of 180 for the ANN model trained on the full training data (a.k.a. the full model) and 30 per iteration for all the LIIT trained ANNs (totaling 180 epochs for complete training). For each trained ANN, we evaluate performance on 5 independent reruns. The average results are presented for all evaluations.
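A minimal sketch of the perturbation test in item 3 above is given below; the elementwise multiplicative noise model and the scikit-learn-style interface are assumptions.

```python
import numpy as np

def perturbed_accuracy(model, x_test, y_test, eps=0.08, seed=0):
    """Evaluate a trained model on test data perturbed by up to eps (8%)
    elementwise multiplicative noise."""
    rng = np.random.default_rng(seed)
    x_pert = x_test * (1 + rng.uniform(-eps, eps, size=x_test.shape))
    return np.mean(model.predict(x_pert) == y_test)
```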
Figure 3: LIIT Training Algorithm
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{liit}\).
Initialization: Split the data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation). Derive the LAD score \(ana_{score}\) for all observations in the training data, i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1: \(MTS=[]\) (create an empty list of MTS sample indices)
2: for each class \(k\) do
3:  Generate the list of indices of all observations in class \(k\), \(ind_{k}\)
4:  Subset the anomaly scores for class \(k\): \(ana_{score_{k}}=ana_{score}[ind_{k}]\)
5:  Identify the top \(c_{size}\) observations with the lowest anomaly scores (the most non-anomalous observations) and add them to the \(MTS\) sample
6: end for
7: for each iteration \(i\leq i_{iter}\) do
8:  Fit the ANN on \(MTS\) using batch training: \(model_{liit}.fit(x_{train}[MTS],y_{train}[MTS])\)
9:  Predict the model classification on \(x_{train}\): \(z_{pred}=model_{liit}.predict(x_{train})\)
10:  Identify the indices of all misclassified observations in the training data: \(err_{inds}=np.where(z_{pred}\neq y_{train})\)
11:  for each class \(k\) do
12:   Identify the misclassified observations \(ind_{err_{k}}\) in class \(k\)
13:   Subset the anomaly scores for the misclassified data in class \(k\): \(ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\)
14:   Identify the \(c_{size}\) observations with the highest anomaly scores from \(ind_{err_{k}}\) (the most anomalous observations) and add them to the \(MTS\) sample
15:  end for
16: end for
```
**Algorithm 1** LAD Anomaly only (Repeated Entry)
### 4.1 Datasets
We consider a variety of publicly available benchmark data sets from the UCI-ML repository [1] (see Table 1) for the experimental evaluation. For training, testing and validation, the data was randomly split into 80%, 10% and 10%, respectively.
#### 4.1.1 Computational Time
In this section, we look at the time taken by each ANN to train on the datasets. Since the LIIT trained ANNs use only about one-third of the full training data, the training time is evidently lower than that of the full model. This can be clearly seen in Figure 4.
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{liit}\).
Initialization: Split the data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation). Derive the LAD score \(ana_{score}\) for all observations in the training data, i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1: \(MTS=[]\) (create an empty list of MTS sample indices)
2: for each class \(k\) do
3:  Generate the list of indices of all observations in class \(k\), \(ind_{k}\)
4:  Subset the anomaly scores for class \(k\): \(ana_{score_{k}}=ana_{score}[ind_{k}]\)
5:  Identify the top \(c_{size}\) observations with the lowest anomaly scores (the most non-anomalous observations) and add them to the \(MTS\) sample
6: end for
7: for each iteration \(i\leq i_{iter}\) do
8:  Fit the ANN on \(MTS\) using batch training: \(model_{liit}.fit(x_{train}[MTS],y_{train}[MTS])\)
9:  Predict the model classification on \(x_{train}\): \(z_{pred}=model_{liit}.predict(x_{train})\)
10:  Identify the indices of all misclassified observations in the training data: \(err_{inds}=np.where(z_{pred}\neq y_{train})\)
11:  for each class \(k\) do
12:   Identify the misclassified observations \(ind_{err_{k}}\) in class \(k\)
13:   Subset the anomaly scores for the misclassified data in class \(k\): \(ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\)
14:   Identify \(c_{size}/2\) observations each with the lowest and the highest anomaly scores from \(ind_{err_{k}}\) (the least as well as the most anomalous observations) and add them to the \(MTS\) sample indices
15:  end for
16:  Remove repeated indices from the updated modified training sample: \(MTS=unique(MTS)\)
17: end for
```
**Algorithm 2** LAD Anomaly + Normal (Unique Entry)
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{liit}\).
Initialization: Split the data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation). Derive the LAD score \(ana_{score}\) for all observations in the training data, i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1: \(MTS=[]\) (create an empty list of MTS sample indices)
2: for each class \(k\) do
3:  Generate the list of indices of all observations in class \(k\), \(ind_{k}\)
4:  Subset the anomaly scores for class \(k\): \(ana_{score_{k}}=ana_{score}[ind_{k}]\)
5:  Identify the top \(c_{size}\) observations with the lowest anomaly scores (the most non-anomalous observations) and add them to the \(MTS\) sample
6: end for
7: for each iteration \(i\leq i_{iter}\) do
8:  Fit the ANN on \(MTS\) using batch training: \(model_{liit}.fit(x_{train}[MTS],y_{train}[MTS])\)
9:  Predict the model classification on \(x_{train}\): \(z_{pred}=model_{liit}.predict(x_{train})\)
10:  Identify the indices of all misclassified observations in the training data: \(err_{inds}=np.where(z_{pred}\neq y_{train})\)
11:  for each class \(k\) do
12:   Identify the misclassified observations \(ind_{err_{k}}\) in class \(k\)
13:   Subset the anomaly scores for the misclassified data in class \(k\): \(ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\)
14:   Identify the \(c_{size}\) observations with the highest anomaly scores from \(ind_{err_{k}}\) (the most anomalous observations) and add them to the \(MTS\) sample
15:  end for
16:  Remove repeated indices from the updated modified training sample: \(MTS=unique(MTS)\)
17: end for
```
**Algorithm 3** LAD Anomaly only (Unique Entry)
#### 4.1.2 Classification Performance

We next study the overall classification performance of the models. It is discernible that the Quantile Sampling based LIIT trained ANN model is on par with the fully trained model.
#### 4.1.3 Stability to Perturbations
Since the training samples have a significant influence on the model's learning and performance, we examine the stability of the model to various perturbations in the test data. For this, random noise is sampled from a multivariate normal distribution whose mean and variance are set to \(0-8\%\) of the training data mean and variance, and is added to all the observations in the test data. Each ANN's performance is evaluated in these settings for all benchmark datasets. The final classification performances are shown in Figure 5. It is interesting to note that different datasets had better and relatively more stable performances using different sampling strategies.
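A sketch of this perturbation procedure (our reading of "0-8% of the training data mean and variance" as a per-feature diagonal Gaussian; `level` ranges over 0.0-0.08):

```python
import numpy as np

def perturb(x_test, x_train, level):
    """Add Gaussian noise whose mean and variance are `level` times the
    per-feature mean and variance of the training data."""
    mu = level * x_train.mean(axis=0)
    sigma = np.sqrt(level * x_train.var(axis=0))
    return x_test + np.random.normal(loc=mu, scale=sigma, size=x_test.shape)
```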
Now, to see the individual changes in performance under perturbation, we look at the raw change in AUC values due to the addition of perturbations for all models. Figure 6 shows the change in performance for different datasets. In particular, Figures 6a and 6b show a group of datasets with better performance using Quantile (Repeated), while Figures 6c-6e show performance on datasets where the Anomaly (Unique), Anomaly + Normal (Repeated) and Anomaly (Repeated) sampling approaches have respectively outperformed.
It can be seen that the Quantile Sample trained model has a higher mean AUC as well as a lower deviation in AUC than the fully trained model on most datasets.
Here, we can see that different LIIT models outperform on different datasets. We hypothesize that the data distribution and heterogeneity play an important role in the overall performance and stability. We intend to continue the study of this hypothesis in future research.
## 5 Conclusion
We present a new training strategy for enhancing the learning speed of a neural network whilst maintaining the performance of the model: LAD Improved Iterative Training (LIIT), an improved iterative version of the traditional batch training approach. The LIIT approach uses a modified training sample (MTS) generated and updated using a LAD-score-based sampling approach that ensures enough representation of extreme and rare behaviours. In particular, the LAD-score-based Quantile Sampling approach allows ample heterogeneity within the sample data. We study the classification performance of the LIIT trained ANN in comparison with an ANN trained on the full training data on real benchmark datasets. Though the current research is limited to simple classification neural networks, the work has immense research potential. The LIIT training approach combined with a specific LAD sampling methodology might draw out the best performance on a dataset based on its data characteristics. Future studies might help understand the impact of data heterogeneity and sampling method on the performance of ANNs.
\begin{table}
\begin{tabular}{|l|r|r|r|}
\hline
Name & \(N\) & \(d\) & \(c\) \\ \hline
Ecoli & 336 & 7 & 8 \\
Imgseg & 2310 & 18 & 7 \\
Skin & 245057 & 4 & 2 \\
Shuttle & 58000 & 10 & 2 \\
Wisc & 699 & 9 & 2 \\
Iono & 351 & 33 & 2 \\
Zoo & 101 & 16 & 7 \\
Letter & 20000 & 16 & 26 \\
Comm And Crime & 1994 & 102 & 2 \\
Vowel & 990 & 10 & 11 \\
Fault & 1941 & 28 & 2 \\
Sonar & 208 & 60 & 2 \\
Balance-Scale & 625 & 4 & 3 \\
Pageb & 5473 & 11 & 2 \\
Spambase & 4601 & 58 & 2 \\
Wave & 5000 & 22 & 2 \\
Tae & 151 & 3 & 3 \\
Thy & 215 & 5 & 3 \\
Opt Digits & 5620 & 63 & 2 \\
Concrete & 1030 & 9 & 2 \\
\hline
\end{tabular}
\end{table}
Table 1: Classification benchmark datasets: description of the benchmark data sets used to evaluate the classification capabilities of the proposed model. \(N\) - number of instances, \(d\) - number of attributes, \(c\) - number of true classes in the data set.
## References
* [1] D. Dheeru and E. Karra Taniskidou (2017) UCI machine learning repository.
* [2] J. Djolonga, J. J. Yung, M. Tschannen, R. Romijnders, L. Beyer, A. Kolesnikov, J. Puigcerver, M. Minderer, A. D'Amour, D. Moldovan, et al. (2021) On robustness and transferability of convolutional neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16458-16468.
* [3] K. Kavzoglu (2009) Increasing the accuracy of neural network classification using refined training data. Environmental Modelling & Software 24 (7), pp. 850-858.
* [4] R. E. Schapire (2003) The boosting approach to machine learning: an overview. Nonlinear Estimation and Classification, pp. 149-171.
* [5] M. Shanker, M. Y. Hu, and M. S. Hung (1996) Effect of data standardization on neural network training. Omega 24 (4), pp. 385-397.
* [6] D. Soekhoe, D. Van Der Putten, and A. Plaat (2016) On the impact of data set size in transfer learning using deep neural networks. International Symposium on Intelligent Data Analysis, pp. 50-60.
* [7] J. Wang, S. Li, Z. An, X. Jiang, W. Qian, and S. Ji (2019) Batch-normalized deep neural networks for achieving fast intelligent fault diagnosis of machines. Neurocomputing 329, pp. 53-65.
* [8] Y. Zhong, F. Fei, Y. Liu, B. Zhao, H. Jiao, and L. Zhang (2017) SatCNN: satellite image dataset classification using agile convolutional neural networks. Remote Sensing Letters 8 (2), pp. 136-145.
Figure 4: Computation time for different datasets: The figures illustrate the computation time for different LIIT trained ANN models in comparison to the ANN trained on full training data (Full model).
|
2306.06523 | Finding Hamiltonian cycles with graph neural networks | We train a small message-passing graph neural network to predict Hamiltonian
cycles on Erd\H{o}s-R\'enyi random graphs in a critical regime. It outperforms
existing hand-crafted heuristics after about 2.5 hours of training on a single
GPU. Our findings encourage an alternative approach to solving computationally
demanding (NP-hard) problems arising in practice. Instead of devising a
heuristic by hand, one can train it end-to-end using a neural network. This has
several advantages. Firstly, it is relatively quick and requires little
problem-specific knowledge. Secondly, the network can adjust to the
distribution of training samples, improving the performance on the most
relevant problem instances. The model is trained using supervised learning on
artificially created problem instances; this training procedure does not use an
existing solver to produce the supervised signal. Finally, the model
generalizes well to larger graph sizes and retains reasonable performance even
on graphs eight times the original size. | Filip Bosnić, Mile Šikić | 2023-06-10T21:18:31Z | http://arxiv.org/abs/2306.06523v1 | # Finding Hamiltonian cycles with graph neural networks
###### Abstract
We train a small message-passing graph neural network to predict Hamiltonian cycles on Erdos-Renyi random graphs in a critical regime. It outperforms existing hand-crafted heuristics after about 2.5 hours of training on a single GPU. Our findings encourage an alternative approach to solving computationally demanding (NP-hard) problems arising in practice. Instead of devising a heuristic by hand, one can train it end-to-end using a neural network. This has several advantages. Firstly, it is relatively quick and requires little problem-specific knowledge. Secondly, the network can adjust to the distribution of training samples, improving the performance on the most relevant problem instances. The model is trained using supervised learning on artificially created problem instances; this training procedure does not use an existing solver to produce the supervised signal. Finally, the model generalizes well to larger graph sizes and retains reasonable performance even on graphs eight times the original size.
Machine learning, Neural nets, Graph algorithms, Heuristics design
## I Introduction
When dealing with problems that are computationally too costly to solve explicitly, such as NP-hard problems, it is common to rely on heuristics. The idea of using neural networks to train such heuristics is quite appealing and has attracted considerable interest over the years. One aims to enhance an algorithm, such as greedy search, with a neural network module that is trained to improve the decision-making of the algorithm. See [4, 8] or [29] for an introduction and an overview of the area. In practice, problem instances typically come from a distribution with specific biases which are hard to describe explicitly. These can be exploited by a neural network. As an illustration, let us consider the Hamiltonian cycle problem (HCP), which is at the core of this paper (nodes in the _cycle_ cannot repeat). It asks the following:
**Problem 1** (HCP).: _Determine whether or not there exists a cycle that passes through all vertices of a given graph. If it exists, such a cycle is called a Hamiltonian cycle, and the graph is said to be Hamiltonian._
The general HCP is known to be NP-complete and thus computationally intractable. Currently, the fastest known exact solution algorithm is due to [5] and has worst-case complexity of \(\mathcal{O}(1.657^{n})\).
As far as applications are concerned, HCP is used to improve runtimes of rendering engines, see [2]. To do so, one solves the HCP for the dual graph of a triangulation and renders the triangles in that order, which reduces the number of points to process. Another application of HCP comes from genomics, more specifically, the problem of de novo genome assembly. The task here is to reconstruct the genetic material of an organism, i.e. the exact sequence of nucleobases on all of its chromosomes, from a large number of sequenced fragments called _reads_. As chromosomes contain hundreds of millions of bases, correctly assembling a single one is already a huge undertaking, see [19] for an example. Interpreting overlaps between reads as edges, after preprocessing and cleaning (see [32]), one ends up with a _string graph_ as proposed in [20]. The Hamiltonian cycle in the string graph corresponds to the correct assembly of the chromosome. For more details see [22, 3, 28] and [14]. Both triangular meshes of 3d objects and string graphs of various assemblers (such as [3] or [28]) have specific structures and statistical properties arising from the context. These could make solving the HCP easier but are difficult to exploit directly. We show here how to exploit them using graph neural networks in the similarly specific setting of Erdos-Renyi random graphs.
For the HCP in general, heuristics based on Hopfield networks were already trained in the early 1990s, see [17, 18]. More recently, however, the area of geometric deep learning and graph neural networks has seen rapid developments and produced neural network layers such as message passing [9] or graph attention layers [30]. These layers are built to exploit any graph structure in data and can handle arbitrarily large graphs with a limited set of parameters, resembling convolution layers in computer vision. They have found applications in image and text processing, combinatorial optimization, physics, chemistry [9] and biology [22]. See [35] and [7] for a deeper dive into the area. In particular, they are excellent candidates for heuristics of graph-based problems. However, most efforts so far have been directed towards combinatorial optimization problems, the two-dimensional traveling salesman problem in particular. Heuristics for the 2d-TSP based on transformer architecture were trained in [16, 6] and those based on graph
neural networks in [34] and [12]. The state-of-the-art result is achieved in [6], where a comprehensive list of references can be found as well. It has to be noted that the previously mentioned models still perform worse than the Concorde TSP solver [1], a state-of-the-art _exact_ solver based on branch-and-bound search combined with the cutting plane method. Nevertheless, the theoretical complexities of neural network models are superior to Concorde's. Let us also mention [13, 26] and [27], which work with general combinatorial optimization and constraint satisfaction problems.
In this paper we present an HCP solver based on _graph_ neural networks and show that it easily outperforms most hand-made heuristics. The code is available at [https://github.com/lbcb-sci/GNNs-Hamiltonian-cycles](https://github.com/lbcb-sci/GNNs-Hamiltonian-cycles).
## II Relation to TSP and 2d-TSP
It is known that the HCP can be reformulated as a special case of the _general traveling salesman problem (TSP)_:
**Problem 2** (TSP).: _Given a graph with a non-negative length assigned to each edge, find the shortest cycle passing through all its vertices._
Hence, TSP solvers can be used for HCP and we shall exploit this by using _Concorde TSP solver_, see [1], to evaluate the performance of our models in Section V. While it is tempting to assume that all papers studying TSP are immediately applicable to the HCP, this _is not the case at all_. In particular, papers presenting neural network TSP solvers, such as [6, 12, 16] or [34] only study the special case of _two-dimensional TSP_:
**Problem 3** (2d-TSP).: _Given a set of points in the unit square \([0,1]^{2}\), find the shortest (in terms of Euclidean distance) cycle which passes through all of them._
The 2d-TSP introduces two simplifications to the general TSP:
* graphs are always fully connected and
* distances between nodes comply with Euclidean structure (triangle inequality).
Only \(2n\) point coordinates are required to describe a 2d-TSP instance, in contrast to the \(n^{2}-n\) adjacency matrix weights needed for the general TSP. Moreover, 2d-TSP solvers cannot be used to solve the HCP. On the contrary, we find it better to view the HCP and the 2d-TSP as two quite different aspects of the general TSP. The HCP focuses on complexities arising from the discrete connectivity structure while the 2d-TSP deals with difficulties coming from the choice of edge lengths.
## III Problem setup
We only consider simple, undirected graphs and denote a typical graph example by \(G\) and its size (number of nodes) by \(n\). The HCP is classically posed as a decision problem: _Determine whether the graph contains a Hamiltonian cycle or not_. However, to put more emphasis on finding the actual cycle, which is important in practice, we also require that solvers produce at least one Hamiltonian cycle. In case the output of a solver is not a valid Hamiltonian cycle, which is straightforward to check, we assume the solver predicted that no Hamiltonian cycle exists.
### _Inputs and outputs_
A solver receives as input a graph \(G\) and outputs a walk \(v_{1}v_{2}\ldots v_{k}\) on \(G\) proposing a Hamiltonian cycle. The walk is considered to be closed if \(v_{1}=v_{k}\) and thus is a Hamiltonian cycle only if \(k=n+1\) and nodes \(v_{1},v_{2},\ldots v_{k-1}\) are all distinct.
### _Evaluation distribution_
The performance of HCP heuristics depends heavily on the properties of the graphs they are required to solve. Indeed, it is reasonable to have heuristics constructed specifically to achieve good performance on particular types of graphs, such as duals of triangulations or string graphs mentioned in Section I. As there are many possible applications of the HCP, finding a good class of evaluation graphs is a challenging task. Currently at least, there seems to be no agreed-upon class for this purpose. There are datasets of collected HCP problems, see, for example, [23] or [10], but they are not quite large enough to train neural networks on. A natural approach, used in early works such as [17, 18, 33], is to use random graphs generated by adding edges between pairs of vertices independently with a fixed probability \(p\in(0,1)\). Such random graphs are known as _Erdos-Renyi random graphs_ with edge probability \(p\). Papers working with the 2d-TSP typically use a similar idea of evaluation on randomly generated problems, concretely the _random uniform Euclidean (RUE)_ sets of two-dimensional points chosen uniformly at random from the unit square \([0,1]^{2}\).
However, using Erdos-Renyi graphs with a _constant_ edge probability parameter \(p\) for evaluating the HCP has a major flaw. Intuitively it is clear that the HCP gets more difficult as the size of the graph increases. This is not the case for Erdos-Renyi graphs with _constant_ \(p\), as indicated by Table I. It tracks the performances of the Concorde TSP solver and the HybridHam heuristic from [25]. The performance of either solver clearly improves as the graph size increases, suggesting that the problem is in fact getting easier. The issue is that large graphs end up having too many edges, leading to many Hamiltonian cycles and thus making it easier to find one.
This can be mended by carefully decreasing parameter \(p\) as the size of the graph increases. We rely on the following theorem from [15].
**Theorem 1** (Paraphrase of [15], Theorem 1).: _Let \(\text{ER}(n,p)\) denote the Erdos-Renyi graph on \(n\) nodes with edge probability_
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & \multicolumn{5}{c}{graph size} \\ \cline{2-6}
Name & \(25\) & \(50\) & \(100\) & \(150\) & \(200\) \\ \hline
Concorde & 0.80 & 1.0 & 1.0 & 1.0 & 1.0 \\
HybridHam & 0.41 & 0.68 & 0.79 & 0.84 & 0.87 \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Fraction of solved instances out of 5000 in the supercritical regime, \(p=0.25\)
_parameter \(p\). For every \(p_{H}\in(0,1)\) there is an explicit sequence \((p_{n})_{n\in\mathbb{N}}\) such that_
\[\mathbb{P}\left(\text{ER}(n,p_{n})\text{ is Hamiltonian}\right)\xrightarrow{n \rightarrow\infty}p_{H}.\]
_Concretely, one can take \(p_{n}=\frac{\ln n+\ln\ln n-\ln\ln p_{H}^{-1}}{n-1}\)._
In other words, for any \(p_{H}\) there is a procedure for generating graphs such that they contain a Hamiltonian cycle with probability approximately equal to \(p_{H}\). We call this the _critical regime_ for the HCP. If the asymptotic behavior of \(p_{n}\) is above the one from the previous theorem, we speak of the _supercritical regime_. Examining the performance of the Concorde solver in Table II shows that the empirical fraction of Hamiltonian cycles remains relatively stable and is fairly close to the asymptotic value of \(p_{H}=0.8\). By controlling the existence probability of Hamiltonian cycles we control their expected number in a graph and hence also the difficulty of the HCP. This motivates our use of Erdos-Renyi random graphs in the critical regime as the evaluation class. For simplicity, we use \(p_{H}=0.8\) for the rest of the paper, although other values of \(p_{H}\) would work equally well. Two examples of random graphs in the critical regime are shown in Fig. 1.
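Sampling from this evaluation distribution is then straightforward (a sketch using networkx; for \(p_{H}=0.8\) this gives \(p_{25}\approx 0.245\)):

```python
import math
import networkx as nx

def critical_edge_probability(n, p_h=0.8):
    """Edge probability p_n from Theorem 1, chosen so that ER(n, p_n)
    is Hamiltonian with probability ~p_h for large n."""
    return (math.log(n) + math.log(math.log(n))
            - math.log(math.log(1.0 / p_h))) / (n - 1)

def sample_critical_er(n, p_h=0.8, seed=None):
    return nx.erdos_renyi_graph(n, critical_edge_probability(n, p_h), seed=seed)
```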
### _Datasets_
We work exclusively with generated datasets. Our test dataset is sampled from the evaluation distribution described in the previous section and consists of \(5000\) Erdos-Renyi graphs in the critical regime with \(p_{H}=0.8\) for each size \(n=25\), \(50\), \(100\), \(150\) and \(200\). This sample size is large enough so that the fraction of Hamiltonian graphs stays within a \(\pm 2\%\) interval with \(95\%\) probability for every size \(n\). Train and validation datasets are generated from a different distribution, described in Section IV-B. They are never explicitly sampled. Instead, graph examples are generated on the fly when needed. The train dataset is _limited_ to graphs of size \(25\) in order to emphasize the generalization properties of the model.
## IV Model details
Our model is autoregressive and decodes the Hamiltonian cycle a single node at a time. It begins by selecting a starting node and then chooses between neighbors in each following step. The chosen node is then appended to the partial solution and the process repeats until a node gets visited twice. There are two main components, a _neural network component_ that guides the neighbor selection at each step and a _search algorithm_ which combines the selected nodes into a Hamiltonian cycle. Concretely, given a _partial solution walk_ \(v_{1}v_{2}\ldots v_{k}\) at the \((k+1)\)-th step of autoregressive decoding, the neural network component estimates with \(\mathcal{P}(v|v_{1}\ldots v_{k})\) the probability that extending the walk by node \(v\) will eventually lead to a Hamiltonian cycle (HC):
\[\mathcal{P}(v|v_{1}\ldots v_{k})\approx\mathbb{P}\left(v_{1}\ldots v_{k}v\subseteq\text{HC}\,\big|\,v_{1}\ldots v_{k}\subseteq\text{HC}\right).\]
The search algorithm then selects the neighbor \(v\) greedily according to estimated probabilities. It stops decoding when a node gets visited twice, i.e. \(v\in\{v_{1},\ldots v_{k}\}\), and returns \(v_{1}v_{2}\ldots v_{k}v\) as the solution. The greedy approach is the simplest case of beam search algorithm with beam width \(\beta=1\). For beam width \(\beta>1\), at each step \(k\) the algorithm keeps track of the top \(\beta\) partial walks according to score
\[\mathcal{S}(v_{1}v_{2}\ldots v_{k}):=\prod_{j=1}^{k}\mathcal{P}(v_{j}|v_{1}\ldots v_{j-1})\approx\mathbb{P}(v_{1}v_{2}\ldots v_{k}\text{ is contained in a HC})\]
and extends them over all possible neighbors. A new set of top \(\beta\) partial solutions is then selected and the process repeats. Clearly, a larger beam width \(\beta\) compensates for imperfections of the neural network at the cost of additional computation. While we report the performance of various beam widths in Table II, our basic model employs the simplest possible search algorithm (\(\beta=1\)) in order to emphasize the neural network part.
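The search component can be sketched as follows (the interface `neighbor_probs(walk)`, returning \(\mathcal{P}(v|v_{1}\ldots v_{k})\) for each neighbor \(v\) of the walk's last node, is an assumed wrapper around the GNN described below):

```python
import math

def beam_search(neighbor_probs, start, beta=1):
    """Greedy (beta=1) or beam-search decoding of a candidate Hamiltonian cycle.
    Beam entries are (log_score, walk); the log-score accumulates
    log P(v_j | v_1 ... v_{j-1}), i.e. the logarithm of the score S above."""
    beams = [(0.0, [start])]
    finished = []
    while beams:
        candidates = []
        for log_s, walk in beams:
            for v, p in neighbor_probs(walk).items():
                if p <= 0.0:
                    continue
                entry = (log_s + math.log(p), walk + [v])
                # A node visited twice terminates the walk.
                (finished if v in walk else candidates).append(entry)
        beams = sorted(candidates, key=lambda t: t[0], reverse=True)[:beta]
    return max(finished, key=lambda t: t[0])[1]
```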
Our neural network uses _persistent node features_\(\mathbf{h}\) in the same way as in [31]. These features are passed on between applications of the neural network, adding a sort of recurrent structure to the network. This provides a way for the network to access information from previous decoding steps.
### _GNN architecture_
Since _graph neural networks (GNN)_ form the central component of our model, HCP information needs to be represented
Fig. 1: Examples of random graphs in the critical HCP regime. \(25\) nodes in the top and \(50\) nodes in the bottom row. Graphs in each row are identical. The right-column graph is ordered in a circle following a Concorde TSP solution, with the HC predicted by our basic model shown in solid red.
in a suitable form. We represent the adjacency matrix of the graph as a list of edges and one-hot encode the following three node-level feature channels: two channels mark the start and the end node of the partial solution, and a third channel marks all nodes the solution contains. Note that this is precisely the information needed to correctly extend the walk by an unvisited node or close the HC if necessary.
We employ the _encode-process-decode_ architecture analogous to the one used in [31]. This means that our GNN is divided into the _encoder_, _processor_ and _decoder_ networks. The whole GNN has around \(22\) thousand parameters. Both encoder and decoder are single-layer, fully connected networks with ReLU activation that operate on node features _individually for each node_. The processor network, containing about \(95\%\) of all parameters, is the core part. It is a residual stack of \(5\) max-aggregation message passing layers, see [9] for more details. As the names suggest, an input example is encoded, then processed and finally decoded by applying the above networks successively. In addition, we augment the output of the encoder with a randomized vector of features, which was shown to improve the performance of GNNs in [24]. Algorithm 1 presents the pseudocode of a single forward pass. A "free" index \(i\in G\) in a line indicates that this line should be repeated for each node; the symbol \(\bigoplus\) denotes concatenation in the feature dimension; the operator \(\max_{j\sim i}\) stands for the maximum over the neighbors of \(i\).
```
Input: G - graph with n vertices; x ∈ R^(n × d_in) - partial walk representation; h ∈ R^(n × d_h) - persistent features
Output: p ∈ [0,1]^n - next-step probability per node; h - updated persistent features
Hyperparams: d_in = 3, d_h = 32, d_r = 4, n_p = 5
Params: θ ≡ {W_E, b_E, W_M, b_M, W_P, b_P, W_D, b_D} - NN weights

// Encoder - Initialize features
z_i = W_E (x_i ⊕ h_i) + b_E                              ∈ R^(d_h - d_r)
r = Uniform([0,1]^(n × d_r))                             ∈ R^(n × d_r)
h_i = z_i ⊕ r_i                                          ∈ R^(d_h)

// Processor - Apply residual max-MPNN layers
for k = 1, 2, ..., n_p do
    m_i = max_{j~i} ReLU(W_M^(k) (h_i ⊕ h_j) + b_M)      ∈ R^(d_h)
    h_i = h_i + ReLU(W_P^(k) (h_i ⊕ m_i) + b_P^(k))      ∈ R^(d_h)

// Decoder - Extract logits and probabilities
l_i = W_D (z_i ⊕ h_i) + b_D                              ∈ R
for i = 1, 2, ..., n do
    if i is not a neighbor of GetLastNode(x) then l_i = -∞
p = softmax(l)                                           ∈ R^n
return p, h
```
**Algorithm 1** ApplyGNN\((G,\mathbf{x},\mathbf{h};\theta)\).
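A residual max-aggregation message-passing layer matching the processor step of Algorithm 1 could look as follows in PyTorch (a sketch, not the released code; `scatter_reduce` requires PyTorch >= 1.12, and `edge_index` is assumed to contain both directions of every undirected edge):

```python
import torch

class MaxMPNNLayer(torch.nn.Module):
    def __init__(self, d_h=32):
        super().__init__()
        self.msg = torch.nn.Linear(2 * d_h, d_h)  # W_M, b_M
        self.upd = torch.nn.Linear(2 * d_h, d_h)  # W_P, b_P

    def forward(self, h, edge_index):
        """h: (n, d_h) node features; edge_index: (2, E) directed edge list."""
        src, dst = edge_index
        # Messages along edges: ReLU(W_M [h_i ⊕ h_j] + b_M).
        m = torch.relu(self.msg(torch.cat([h[dst], h[src]], dim=-1)))
        # Max-aggregation over incoming messages; since m >= 0 after the ReLU,
        # a zero-initialized target acts as the identity for the maximum.
        agg = torch.zeros_like(h).scatter_reduce(
            0, dst.unsqueeze(-1).expand_as(m), m, reduce="amax", include_self=True)
        # Residual update: h_i + ReLU(W_P [h_i ⊕ m_i] + b_P).
        return h + torch.relu(self.upd(torch.cat([h, agg], dim=-1)))
```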
### _Training_
Our supervised approach requires a large number of solved HCP instances during training. Even though they could easily be generated using existing HCP solvers, we will show that it is possible to train on artificially generated graphs for which an HCP solution is known in advance. We believe that such methods are useful when working with problems similar to the HCP for which no exact solvers are available. The construction of a training example starts from a graph \(G\) of arbitrary size but with no edges. A random permutation of the nodes is then connected into a single cycle by adding the appropriate edges to \(G\). This will be a Hamiltonian cycle in the final graph and is stored as the supervision signal. Finally, for every pair of vertices in \(G\) we add an edge connecting them with probability \(p_{\text{edge}}=0.125\) (independently of other pairs). \(p_{\text{edge}}\) is treated as a training hyperparameter and was determined through experimentation. While the distribution of training samples generated in this way is quite different from the evaluation distribution, which consists of ER graphs, the results show that the basic model still generalizes well. Note also that the final graph may have Hamiltonian cycles other than the original one. All such cycles are ignored during training.
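A sketch of this training-example generator (helper names are ours):

```python
import numpy as np
import networkx as nx

def make_training_example(n=25, p_edge=0.125, rng=None):
    """Plant a Hamiltonian cycle through a random permutation of the nodes,
    then add every remaining edge independently with probability p_edge."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(n)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from((perm[i], perm[(i + 1) % n]) for i in range(n))  # planted HC
    for i in range(n):
        for j in range(i + 1, n):
            if not G.has_edge(i, j) and rng.random() < p_edge:
                G.add_edge(i, j)
    return G, list(perm)  # graph + supervision signal (cycle as node order)
```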
The training procedure is summarized in Algorithm 2. A single training example consists of a graph \(G\) and a Hamiltonian cycle \(v_{1}v_{2}\ldots v_{n}v_{1}\) on \(G\). The network is trained using _teacher forcing_ along this Hamiltonian cycle on the conditional cross-entropy loss \(\mathcal{L}\) defined by
\[\mathcal{L}\left(v_{1}\ldots v_{n}v_{1}\right)=-\sum_{i=2}^{n+1}\ln\left( \mathcal{P}(v_{i}|v_{1}\ldots v_{i-1})\right),\]
where \(v_{n+1}:=v_{1}\) for notational convenience. Remark that the summation index starts from \(2\) because the choice of the first node in a cycle is completely arbitrary. The loss \(\mathcal{L}\) is minimized over minibatches of 8 training examples using the Adam optimizer with a learning rate of \(10^{-4}\) for 2000 epochs of 100 gradient updates each. The final model checkpoint was selected based on the fraction of solved instances on a validation set generated in the same way as the training set. The whole training was performed on a single NVIDIA GeForce RTX 3080 GPU and took about 2.5 hours. Weight initialization and other optimizer hyperparameters are kept at the default PyTorch 1.11.0 values, [21].
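For concreteness, the teacher-forced loss over one planted cycle can be computed as follows (a sketch built on the `ApplyGNN` interface of Algorithm 1; `init_walk_features` and `extend_walk_features`, which maintain the start/end/visited channels, are hypothetical helpers):

```python
import torch

def teacher_forcing_loss(apply_gnn, G, cycle, d_h=32):
    """cycle: planted Hamiltonian cycle [v_1, ..., v_n]; the loss sums
    -log P(v_i | v_1 ... v_{i-1}) for i = 2 .. n+1, with v_{n+1} := v_1."""
    n = len(cycle)
    x = init_walk_features(G, start=cycle[0])  # assumed helper
    h = torch.zeros(n, d_h)                    # persistent features
    loss = 0.0
    for i in range(1, n + 1):
        target = cycle[i % n]                  # i = n closes the cycle at v_1
        p, h = apply_gnn(G, x, h)
        loss = loss - torch.log(p[target])
        x = extend_walk_features(x, target)    # assumed helper: move walk end
    return loss
```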
## V Results and discussion
We evaluate the performance of our models by measuring the fraction of successfully solved problems on the test dataset described in Section III and compare it with the following heuristics:
1. _Concorde TSP solver_ - the state-of-the-art exact TSP solver from [1],
2. _HybridHam_ - an HCP heuristic from [25],
3. _Ant-inspired heuristic_ - an HCP heuristic presented in [33],
4. _Least degree first heuristic_ - simple greedy heuristic always selecting the neighbor with the lowest degree.
Let us remark that the ant-inspired heuristic is a convergence procedure which we terminate after \(5n^{2}\ln n\) steps. This bound matches the theoretical complexity of the basic model, leading to a relatively fair comparison. In [33], the authors suggest terminating after \(\mathcal{O}(n^{3})\) iterations, but this is very time consuming. We list evaluation results in Table II and average inference times in Table III. Keeping in mind that testing can be performed on a different sample of \(5000\) graphs, the 95% confidence interval for all values in Table II is below \(\pm 0.02\). Models were run on a single NVIDIA GeForce RTX 3080 GPU while all other solvers were run on a single core of an Intel Core i7-12700 processor. Note also that HybridHam, least degree first and the ant-inspired heuristic were reimplemented in Python 3.8 and could be optimized for better performance.
Our HCP setup makes it impossible for a solver to produce a false positive prediction. Consequently, all solvers have perfect precision, and metrics such as \(F_{1}\) and \(F_{2}\) are unnecessarily complicated. As the number of true positives (solvable HCPs) is stable by construction of the evaluation set (0.8 in the limit), accuracy, recall and the fraction of solved instances have similar qualitative behavior. Thus we only report the fraction of solved instances for each model.
In conclusion, after only a few hours of training our basic model clearly outperformed existing heuristic solvers without using any pre-solved HCP instances. We believe that techniques similar to the ones presented here can be used to quickly develop heuristics for variations or generalizations of the HCP, for example, the task of finding the longest cycle in a graph, or the task of finding the route of minimal length which covers all the nodes in the graph (some of them possibly more than once). The class of Erdos-Renyi random graphs is used for simplicity and evaluation convenience since it allows for a rough estimate of the difficulty of the HCP with respect to its size. Another class of graphs could be used just as well, provided that it is specific enough so that the neural network can exploit its statistical or structural peculiarities. But this typically happens with graph instances coming from practical problems. Moreover, the polynomial complexity of \(\mathcal{O}(n^{2}\log n)\) for our basic model is superior to the exponential complexity of exact solvers. For example, the Concorde TSP solver on RUE 2d-TSP instances was experimentally found to have complexity \(\mathcal{O}(1.24^{\sqrt{n}})\) in [11], although it is not clear how this translates to the critical regime HCP. Nevertheless, neural network solvers are yet to achieve reasonable performance on large input graphs and the Concorde TSP solver remains the best-performing HCP solver. This comes as no surprise since Concorde also outperforms all existing neural network solvers for the 2d-TSP problem.
## VI Ablation study & training stability
The neural network component from Section IV is enhanced with persistent features and vectors of randomized features but can function without either of them. To estimate their importance, we separately removed each one and trained the corresponding reduced model 5 times from scratch. Average performances and confidence intervals of 2 standard deviations are shown in Fig. VI.1.
As shown in Fig. VI.1, persistent features play a crucial role in our model. Without them, the model can fail to converge during training. This is probably because persistent features allow the model to update its internal node representations throughout the decoding process, which results in RNN-like behavior and consequently increases the range of the message passing neural network layers. The use of randomized features is not as significant but becomes noticeable when generalizing to large graphs. Note also that Fig. VI.1 shows the standard deviation of the training procedure for the main model to be around 5% of graphs solved.
|
2310.05900 | Learning to Decode the Surface Code with a Recurrent, Transformer-Based
Neural Network | Quantum error-correction is a prerequisite for reliable quantum computation.
Towards this goal, we present a recurrent, transformer-based neural network
which learns to decode the surface code, the leading quantum error-correction
code. Our decoder outperforms state-of-the-art algorithmic decoders on
real-world data from Google's Sycamore quantum processor for distance 3 and 5
surface codes. On distances up to 11, the decoder maintains its advantage on
simulated data with realistic noise including cross-talk, leakage, and analog
readout signals, and sustains its accuracy far beyond the 25 cycles it was
trained on. Our work illustrates the ability of machine learning to go beyond
human-designed algorithms by learning from data directly, highlighting machine
learning as a strong contender for decoding in quantum computers. | Johannes Bausch, Andrew W Senior, Francisco J H Heras, Thomas Edlich, Alex Davies, Michael Newman, Cody Jones, Kevin Satzinger, Murphy Yuezhen Niu, Sam Blackwell, George Holland, Dvir Kafri, Juan Atalaya, Craig Gidney, Demis Hassabis, Sergio Boixo, Hartmut Neven, Pushmeet Kohli | 2023-10-09T17:41:37Z | http://arxiv.org/abs/2310.05900v1 | # Learning to Decode the Surface Code
###### Abstract
Quantum error-correction is a prerequisite for reliable quantum computation. Towards this goal, we present a recurrent, transformer-based neural network which learns to decode the surface code, the leading quantum error-correction code. Our decoder outperforms state-of-the-art algorithmic decoders on real-world data from Google's Sycamore quantum processor for distance 3 and 5 surface codes. On distances up to 11, the decoder maintains its advantage on simulated data with realistic noise including cross-talk, leakage, and analog readout signals, and sustains its accuracy far beyond the 25 cycles it was trained on. Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers.
## 1 Quantum error correction
### Context and background
The idea that quantum computation has the potential for computational advantages over classical computation, both in terms of speed and resource consumption, dates all the way back to Feynman [26]. Beyond Shor's well-known prime factoring algorithm [83] and Grover's quadratic speedup for unstructured search [39], many potential applications in fields such as material science [59, 3, 4, 75], machine learning [8, 47, 46], and optimization [25, 86], have been proposed.
Yet, for practical quantum computation to become a reality, errors on the physical level of the device need to be corrected so that deep circuits can be run with high confidence in their result. Such fault-tolerant quantum computation can be achieved through redundancy introduced by grouping multiple physical qubits into one logical qubit [82, 52].
One of the most promising strategies for fault-tolerant computation is based on the surface code, which has the highest-known tolerance for errors of any codes with a 2D nearest-neighbor connectivity [15, 29, 53]. In the surface code, a logical qubit is formed by a \(d\times d\) grid of physical qubits, called _data qubits_, such that errors can be detected by periodically measuring \(X\) and \(Z\) stabilizer checks on groups of adjacent data qubits, using \(d^{2}-1\)_stabilizer qubits_ located among the data qubits (Fig. 1A). A _detection event_ occurs when two consecutive measurements of the same stabilizer give different parity outcomes. A pair of observables \(X_{L}\) and \(Z_{L}\), which commute with the stabilizers but anti-commute with each other, define the logical state of the surface code qubit. The minimum length of these observables is called the _code distance_, which represents the number of errors required to change the logical qubit without flipping a stabilizer check. In a square surface code, this is the side length \(d\) of the data qubit grid. The task of an error correction _decoder_ is to use the history of stabilizer measurements, the _error syndrome_, to apply a correction to the noisy logical measurement outcome in order to obtain the correct one.
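For concreteness, detection events are simply the XOR of consecutive measurements of the same stabilizer; a minimal sketch (ignoring the comparison of the first cycle against the expected initial parity):

```python
import numpy as np

def detection_events(stab_measurements):
    """stab_measurements: (n_cycles, n_stabilizers) binary array of stabilizer
    outcomes. An event marks two consecutive measurements with different parity."""
    m = np.asarray(stab_measurements)
    return np.bitwise_xor(m[1:], m[:-1])  # shape: (n_cycles - 1, n_stabilizers)
```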
However, decoding quantum codes is a hard problem. For instance, they exhibit _degeneracy_, whereby exponentially many configurations of errors may produce the same history of stabilizer measurements. They must also contend with rich noise models induced by quantum circuits that include _leakage_, qubit excitations beyond the computational states \(|0\rangle\) and \(|1\rangle\) that are long-lived and mobile [30]; and _crosstalk_, unwanted interactions between qubits inducing long-range and complicated patterns of detection events [93]. Degeneracy, circuit-level correlations, leakage, and the difficulty in modeling these errors produce decoding problems that resist many tried-and-true methods commonly utilized for classical codes [73, 70, 57, 43].
Despite significant progress on quantum error correction [100, 60, 37, 84, 88, 55, 23, 76, 104, 40], challenges remain. Ultimately, to perform fault-tolerant quantum computation such as the factorization of a \(2\,000\) bit number, the logical error rate needs to be reduced to less than \(10^{-10}\) per logical operation [29, 33]. Logical errors for the surface code are suppressed exponentially, \(\sim\Lambda^{-d/2}\), when increasing the code distance \(d\), where \(\Lambda\) is a 'quality factor' determined by the accuracy of the device and the performance of the decoder. This means that improving the inference accuracy of the decoder will reduce the required size or required gate fidelity of a quantum processor to run a quantum algorithm. Consequently, an accurate decoder is vital to realizing a fault-tolerant quantum computer using realistic noisy hardware and minimal resources. In addition, the decoder must be fast enough to keep up with the rate of syndrome information produced by the quantum computer, lest it create an exponentially increasing backlog of syndrome information to process [91].
### Quantum error correction with machine learning
In recent years, there has been an explosion of work applying machine-learning techniques to quantum computation, including decoding. Initial decoders used restricted Boltzmann machines [92], and many subsequent approaches use reinforcement learning [89, 27, 2, 62] or supervised learning [95, 61, 19, 22]. Several previous works have focused on the surface code, observing that machine-learning techniques could utilize correlations introduced by \(Y\)-type errors to outperform popular minimum-weight perfect matching (MWPM) decoding [54], and could even be used as a preprocessing step for graph-based decoders [64]. Other works have focused more on speed and scalability [68, 103, 66, 31], including the use of symmetries to improve performance [24, 99]. There have also been examples of machine-learning decoders applied to the fully fault-tolerant setting with more realistic noise models [20, 7, 56]. In particular, on distance-7 color codes, a recurrent neural network architecture has been used to demonstrate good performance over many error correction cycles [6]. However--unlike us--none of these works considered crosstalk or leakage.
More recently, [94] built on the architecture of [6], and assessed its performance on the Sycamore surface code experiment [37]. They trained their recurrent neural network-based decoder on a circuit-level depolarizing noise model (with noise parameters fitted to the experimental data) and evaluated on experimental data, demonstrating parity with the best previously-published decoder at code distance 3. Furthermore, the authors directly quantified the benefits of modelling correlations. They also explored the use of analog inputs (modelled by a symmetric Gaussian I/Q readout noise model), which allowed a slight increase in accuracy.
In this work, we push the boundary of both scale and accuracy of machine-learning decoding in the fault-tolerant setting of circuit-level noise. We present a novel recurrent architecture using transformers and convolutions, which learns to decode the surface code. We evaluate our machine-learning decoder on experimental data from a surface code logical qubit implemented on Google's Sycamore quantum computer [37], and observe markedly better error suppression than state-of-the-art correlated matching and tensor network decoders [37, 43, 14, 35] by training on data directly.
Furthermore, for larger code distances up to 11, we benchmark our ML decoder against a noise model which realistically simulates the effects of leakage, crosstalk, and analog readout as present in a real-world quantum device. The ML decoder outperforms state-of-the-art correlated matching decoders, and sustains its accuracy over at least 100 000 error correction cycles (when only trained on up to 25 cycles). Our recurrent transformer model achieves a further accuracy boost by being able to seamlessly incorporate raw measurement signals from the device (in the form of "in-phase and quadrature" readout signals [48]), and leakage information from the syndrome.
These capabilities, emerging from the ML decoder's ability to learn from raw data--as well as the decoder's provision of calibrated error probabilities--come without computational overhead. In short, our decoder expands the scope of fault-tolerant machine-learning decoders while offering better accuracy in realistic settings than the most competitive alternatives.
## 2 A recurrent syndrome transformer
### Model architecture
Our machine-learning decoder is a neural network with a structure designed to match the structure of the error correction problem, which learns to decode the surface code by training on experimental and simulated data. Its design mirrors the time-invariance of the stabilizer readout by repeated iteration of a fixed computational block (Fig. 1B). This _recurrent_ architecture constructs a fixed-size _decoder state_ representation which stores information about the stabilizers observed up to the current cycle.
Since the data received at each cycle correspond to individual stabilizers, our model maintains the decoder state as a vector representation _per stabilizer_. The model applies a neural network block (Fig. 1D) at each cycle to update the decoder state by incorporating the current cycle's stabilizers (Fig. 1C). While decoders like MWPM typically accept sparse binary _detection events_, a neural network allows for less restrictive inputs. We find better results and more stable training when providing both measurements and events, rather than events alone
(see materials and methods, Appendix B.4). In Section 2.3 we will show that these inputs can be extended further to include probabilistic representations of I/Q readouts and leakage information. Each stabilizer \(i\) at cycle \(n\) is represented by a stabilizer embedding vector \(S_{ni}\), created by combining the inputs in the network of Fig. 1C, and directly added into the decoder state (Fig. 1D).
The key processing operation is the _Syndrome Transformer_ (Fig. 1E) which updates the decoder state by passing information between stabilizer representations in a learned, structured manner. The Syndrome Transformer augments the multi-headed attention of a conventional Transformer [97] with an _attention bias_ (Fig. S6) and spatial convolutions, each of which learn to modulate the information flow between stabilizer representations based on their physical relationship and type. At any stage in the computation, the decoder state can be processed by a readout network (Fig. 1F) to predict whether a logical error has occurred. In the readout network the decoder state is scattered to a 2D representation and projected to a per-data-qubit representation before pooling into a representation per-row or column of data qubits (representing logical \(X_{L}\) or \(Z_{L}\) observables, respectively), which a final residual network processes before predicting the probability of a logical error. We show the effectiveness of these architecture design decisions by ablation (Fig. S15).
Figure 1: **The neural network architecture designed for surface code decoding.****(A)**\(5\times 5\) rotated surface code layout, with data qubits (dark grey dots), \(X\) and \(Z\) stabilizer qubits (labelled light grey dots, or highlighted in blue/red when they detect a parity violation) interspersed in a checkerboard pattern. Logical observables \(Z_{\text{L}}\) and \(X_{\text{L}}\) are shown as bold lines on the left and bottom grid edges respectively. **(B)** The recurrent network iterates over time, updating a representation of the decoder state and incorporating the new stabilizers at each cycle. **(C)** Creation of an embedding vector \(S_{ni}\) for each new stabilizer. **(D)** Each block of the recurrent network combines the decoder state (scaled down by a factor of 0.7) and the stabilizers \(S_{n}\) for one cycle. The decoder state is updated through three Syndrome Transformer layers. **(E)** Each Syndrome Transformer layer updates the stabilizer representations through multi-headed attention modulated by a learned attention bias followed by a dense block and dilated 2D convolutions. **(F)** Logical errors are predicted from the final decoder state.
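Schematically, the recurrence of panels B-D amounts to the following loop (a sketch with placeholder modules; the 0.7 scaling is the factor noted in the caption):

```python
def decode_memory_experiment(per_cycle_inputs, embed, transformer_stack, readout):
    """per_cycle_inputs: one entry per error-correction cycle, holding the
    per-stabilizer input features for that cycle. Returns P(logical error)."""
    state = None  # decoder state: one vector per stabilizer
    for inputs_n in per_cycle_inputs:
        s_n = embed(inputs_n)  # stabilizer embeddings S_n (Fig. 1C)
        state = s_n if state is None else 0.7 * state + s_n
        state = transformer_stack(state)  # three Syndrome Transformer layers
    return readout(state)  # readout network (Fig. 1F)
```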
### Decoding the Sycamore memory experiment
As a first demonstration, we apply our ML decoder to the _Sycamore memory experiment dataset_, experimental data for surface code logical qubits on Google's Sycamore device [37]. The experiment comprised both distance 3 and 5 surface codes; the \(3\times 3\) code block executed at four separate locations (labelled _north_, _east_, _south_, _west_, 'NESW') within the Sycamore chip, and the \(5\times 5\) code block executed at a single location. Both \(X\) and \(Z\) memory experiments were performed for up to 25 error correction cycles. Within each experiment, stabilizer syndromes were measured over each cycle, followed by a final cycle of data qubit measurements, from which a final set of stabilizers in the experiment basis as well as the logical readout were computed. \(50\,000\) experiments were performed for each total cycle count \(n\in\{1,3,\ldots,25\}\), and the resulting data split into _even_ and _odd_ subsets for 2-fold cross-validation.
Decoder performance is quantified by the _logical error per round_ (LER; here, "round" means the same as "cycle"), a measure of the fraction of experiments in which the decoder fails for each additional error correction cycle. As in [37], we calculate the LER by a linear regression of the log-fidelities for experiments of different numbers of cycles \(n\in\{3,5,\ldots,25\}\) (see materials and methods, Appendix A.2.1, Fig. 2A).
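The fit itself is elementary: with \(F(n)\propto(1-2\epsilon)^{n}\), the slope of \(\ln F\) against \(n\) gives \(\epsilon=(1-e^{\text{slope}})/2\) (a sketch of this reconstruction, not the exact evaluation code):

```python
import numpy as np

def logical_error_per_round(cycles, fidelities):
    """cycles: e.g. [3, 5, ..., 25]; fidelities: 2*accuracy - 1 per cycle count.
    Fit ln F(n) = n * ln(1 - 2*eps) + const and solve for eps."""
    slope, _ = np.polyfit(np.asarray(cycles), np.log(np.asarray(fidelities)), 1)
    return (1.0 - np.exp(slope)) / 2.0
```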
We trained our decoder in two stages: pre-training and fine-tuning. In the pre-training stage, and e.g. for the even fold, we train on \(2\times 10^{9}\) samples drawn from detector error noise models (DEMs, [32]) fitted to the even experimental detection event distributions, and choose the model with the best validation LER, computed on the even experimental samples. For the fine-tuning stage we randomly partition the even experimental samples, training on \(19\,880\) with weight decay relative to the pre-trained model to regularize the parameters. We choose the parameters giving the lowest LER on the remaining \(5\,120\) samples (after \(\approx 120\) passes through the data). This procedure allows us to train a decoder to high accuracy with limited access to experimental data, while holding back the other (odd) fold as test set. When the pre-training stage uses a more'standard' noise model such as circuit depolarizing noise, fine-tuning compensates for most of the mismatch (see materials and methods, Appendix B.5).
Our ML decoder achieves a lower LER (\(2.901\pm 0.023\%\) at distance 3, and \(2.748\pm 0.015\%\) at distance 5, for a \(\Lambda=1.056\pm 0.010\)) than the tensor network (TN) decoder (\(3.028\pm 0.023\%\) resp. \(2.915\pm 0.016\%\), \(\Lambda=1.039\pm 0.010\)), the most accurate decoder hitherto reported for this experiment [37, 14]. The accuracy gain becomes even more pronounced in comparison to state-of-the-art MWPM-based decoders, such as correlated matching (MWPM-Corr), matching with belief propagation (MWPM-BP), as well as PyMatching, an open-source implementation of MWPM [43, 37, 42] (Fig. 2A). Fine-tuning on experimental data, as well as _ensembling_ (training multiple models and combining their outputs, see materials and methods, Appendix A.7), take our technique from parity with the TN decoder to a clear advantage (Fig. 2B).
Figure 2: **Accuracy of our machine-learning decoder and other leading decoders on the Sycamore experimental data.** All results are averaged across bases, even/odd cross-validation splits and, for the \(3\times 3\) experiments, the location (NESW). (**A**) Fidelity (\(2\times\text{accuracy}-1\)) vs. error correction cycle for code distance 3 and 5 memory experiments in the Sycamore experimental dataset for the baseline tensor network decoder (black), our decoder (blue) and three variants of MWPM (shades of gray). In the legend, we show the logical error per round (LER), calculated from the slope of the fitted lines. (**B**) Fitted logical error per round for different decoders for the \(3\times 3\) and \(5\times 5\) datasets. Variants of our ML decoder without ensembling or fine-tuning are also compared to show the advantage of the complete training procedure.
### Beyond Pauli noise: leakage, cross-talk, I/Q noise
To achieve reliable quantum computation, the decoder must scale to higher code distances. Since hardware implementations at the time of our study do not reach scales capable of running a surface-code experiment beyond distance 5, we explore the performance of our decoder for larger code distances using simulated data at error rates both comparable to and significantly lower than the Sycamore experimental data (see materials and methods, Appendix A.6, Fig. S8). The conventional circuit-noise model [69, 7] is limited, failing to account for several crucial real-world effects, such as cross-talk, leakage and analog readout information. A decoder that recognizes these patterns can be especially valuable, since correlated errors are highly damaging to fault-tolerance [1]. In this section, we use circuit simulators that can model and modulate the above effects independently at code distances \(3,5,7,9\), and \(11\).
#### 2.3.1 Analog readouts
Projective measurement is essential to extract information about errors from the surface code. Typically, each measurement is classified as a discrete outcome \(|0\rangle\) or \(|1\rangle\). However, in some quantum computing architectures, there is underlying analog data with information about uncertainty and leakage to non-computational states. This is the case for standard dispersive measurement of superconducting qubits, where a microwave pulse probes the frequency of a superconducting resonator that is coupled to a qubit in order to infer the qubit's state [101, 9, 48]. The returned microwave signal contains information in its amplitude and phase, traditionally represented in a two-dimensional space of in-phase ("I") and quadrature ("Q") amplitudes. It is simple to provide this information to our ML decoder: for each stabilizer readout, instead of a binary value (0 if the most likely state is \(|0\rangle\), 1 if \(|1\rangle\)), we provide the probability of being in state \(|1\rangle\), based on a decay model, parameterized by the signal-to-noise ratio (SNR) and normalised measurement time \(t=t_{\rm meas}/T_{1}\). This is provided as the network input at cycle \(n\) along with a probabilistic analog to detection events (see materials and methods, Appendix A.1.7, Fig. S4).
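As a toy illustration of such probabilistic inputs, a symmetric two-Gaussian I/Q model (equal priors and no decay tail, so simpler than the decay model actually used) yields:

```python
import numpy as np

def prob_one_from_iq(z, mu0, mu1, sigma):
    """Posterior P(|1>) for a complex I/Q sample z under isotropic Gaussian
    likelihoods centered at mu0 (state |0>) and mu1 (state |1>)."""
    log_l0 = -np.abs(z - mu0) ** 2 / (2.0 * sigma**2)
    log_l1 = -np.abs(z - mu1) ** 2 / (2.0 * sigma**2)
    return 1.0 / (1.0 + np.exp(log_l0 - log_l1))
```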
Figure 3A shows that ML can outperform correlated matching for a superconducting-inspired circuit depolarizing noise model (SI1000 noise, see materials and methods, Appendices A.1.5 and A.6.2, [34]), augmented with I/Q measurements (where we always threshold the final data qubits to prevent the logical measurement from leaking into the training inputs, see materials and methods, Appendix B.2). If the network is provided with the richer probabilistic inputs derived from I/Q readouts (which our ML decoder can process without runtime penalty), the performance improves further, for both \(3\times 3\) and \(5\times 5\) codes. Incorporating probabilistic measurement inputs into MWPM-Corr [71] also leads to an improvement (though this comes with a runtime penalty).
#### 2.3.2 Leakage and crosstalk
Training and evaluating on Pauli+ simulated data (see materials and methods, Appendix A.1.9), we show that our ML decoder is robust to crosstalk and leakage (Fig. 3B). For SNR = 10, characteristic decay rate \(t_{\mathrm{meas}}/T_{1}=t=0.01\), and 0.1% leakage (meaning 0.1% leakage chance for stabilizer readouts and \(\approx 0.3\%\) on the data qubit measurements, where leakage is removed after each round [63]), our ML-based decoder is more accurate than MWPM-Corr for all code distances 3-9 when trained on a consistent \(2\times 10^{9}\) samples, and at distance 11 when training for \(10^{10}\) samples. We highlight that the number of network parameters is kept almost constant across code distances (see materials and methods, Appendix A.8, Table S5).
At distance 11, the ML decoder achieves a logical error per round of \((5.37\pm 0.01)\times 10^{-6}\) (resp. \((6.74\pm 0.02)\times 10^{-6}\) for MWPM-Corr). While the shape of the ML decoder curves suggests a decrease in relative error suppression \(\Lambda\) at larger code distances, training to convergence takes longer (cf. Fig. S18); the additional point at distance 11 illustrates that with sufficient training samples an average error suppression factor from distance 3 to 11 of \(\Lambda=4.28\pm 0.02\) is possible, on par with MWPM-Corr (\(\Lambda=4.33\pm 0.04\), see Table S6). \(\Lambda\) for matching is determined by the distribution of errors and the decoder's noise model. For an ML decoder it also becomes a function of training: improved training improves the logical error per round, and this improvement versus the code distance is what determines \(\Lambda\).
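As a concrete illustration of how \(\Lambda\) relates to the reported per-distance error rates, the sketch below fits an exponential suppression law \(\mathrm{LER}(d)\propto\Lambda^{-d/2}\) to logical error rates. The LER values are hypothetical placeholders in roughly the regime discussed above, and the exact fitting procedure used in the paper may differ.

```python
import numpy as np

# Hypothetical per-distance logical errors per round (each distance-2
# step suppresses error by roughly 4x, as in the regime above).
distances = np.array([3, 5, 7, 9, 11])
ler = np.array([2.1e-3, 4.9e-4, 1.1e-4, 2.5e-5, 5.4e-6])

# LER(d) ~ A * Lambda**(-d/2), so log(LER) is linear in d with
# slope equal to -log(Lambda)/2.
slope, _ = np.polyfit(distances, np.log(ler), 1)
lam = np.exp(-2.0 * slope)
print(f"average error suppression factor Lambda = {lam:.2f}")
```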
Hitherto, leakage states were usually ignored and classified as either \(|0\rangle\) or \(|1\rangle\), but can sometimes be distinguished in the I/Q representation [78, 51]. With traditional binary inputs and without leakage information, our machine-learning decoder outperforms MWPM-Corr at code distances 3-9, and achieves parity at distance 11. Providing leakage information in the form of an additional binary input (leaked/not leaked) slightly increases the lead (Fig. 3B, blue dashed vs. dotted lines) beyond MWPM-Corr (dotted magenta line). Yet, if we provide the model with richer soft information (a probability of being leaked, and a conditional probability of being in \(|1\rangle\), given the measurement was not in a leaked state), the advantage against correlated matching becomes greater (Fig. 3B, solid blue line), even when MWPM-Corr incorporates probabilistic measurement inputs (Fig. 3B, solid magenta line).
For intermediate code distances, the relative penalty incurred from increased leakage as compared to the case of no leakage is much less pronounced for our machine learning decoder (Fig. 3C for code distance 7 and Fig. S13 for 3, 5, 9 and 11). For example, compared to no leakage, when adding 0.1% leakage, the ML decoder's logical error per round at distance 7 increases by \(\approx 30\%\) (blue bar), whereas leakage-unaware MWPM-Corr suffers a larger penalty of \(\approx 60\%-80\%\) (magenta bar). While there have been proposals for incorporating leakage information into
matching in idealized models [87], our ML decoder architecture learns to mitigate the effects of leakage without the difficulty of directly characterizing them.
We demonstrate that the advantage of our ML decoder against MWPM decoders persists even in the longest experiments we tried. We trained our decoder using Pauli+ simulated experiments of up to 25 error correcting cycles (soft inputs, SNR = 10, \(t\) = 0.01, and leakage 0.1%), and found its performance to generalize to Pauli+ experiments of up to at least 100 000 cycles (Fig. 4, see materials and methods, Appendix B.2).
Ultimately, our ML decoder's flexibility in seamlessly learning from leakage readout, I/Q information, and cross-talk induced error patterns is an advantage over other state-of-the-art decoders, as evidenced by its improved accuracy.
Figure 3: **Scaling the decoder and the effect of noise levels and input modality on decoder accuracy.****(A)** Decoder performance for SI1000 data generated with different I/Q noise parameter values. Line styles denote whether the model is provided with the richer, continuous input representation for I/Q readouts (“soft inputs”, continuous lines) or with thresholded binary values (“hard inputs”, dotted lines). Both the ML decoder (blue) and MWPM-Corr (magenta) benefit from the soft inputs but our ML-based technique is more accurate for a range of readout noise levels. **(B)** Logical error per round for different decoders at different code distances, for the same Pauli+ noise model (SNR 10, \(t=0.01\), \(0.1\%\) leakage). Line styles represent whether decoder inputs are binary (hard) or soft and whether they contain information about leakage. Our ML decoder (blue), trained on \(2\times 10^{9}\) samples, leads MWPM-Corr (magenta) by a margin that further improves when augmented with leakage information and probabilistic inputs. For the additional point at distance 11, training was extended to \(10^{10}\) samples. **(C)** The relative impact of leakage on the performance of different decoders and input modalities at distance 7. Our ML decoder is more robust to leakage than the MWPM-based decoders.
### Postselection: improving accuracy with confidence
We trained the neural network with a logistic output, minimizing cross-entropy against binary labels. As a result, its output can be interpreted as the probability that the true label is \(|1\rangle\), a probability we found to be well-calibrated (Fig. 5A, S10). For example, of samples with prediction probability 0.8, approximately 80% indeed have the true label \(|1\rangle\). Samples with a probability close to 0.5 are more likely to have been misclassified than samples with a probability closer to 0 or 1 (Fig. S11).
The probabilistic output can be used as a confidence measure to discard the least confident samples (Fig. 5B). On the same Pauli+ simulation (SNR=10, \(t=0.01\), 0.1% leakage) as in Section 2.3.2, and by rejecting only 0.2% of the 25-cycle experiments at distance 11, we can achieve a postselected error rate a factor of \(\approx 10\) lower (10% rejection gives a factor \(\approx 250\)), which could prove useful in repeat-until-success protocols [13, 12, 18, 33, 74, 72].
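A sketch of this rejection procedure is given below, assuming arrays of calibrated predicted probabilities and true binary labels; the names are illustrative and the paper's exact scoring procedure may differ.

```python
import numpy as np

def postselected_error_rate(probs, labels, reject_fraction):
    """Discard the least confident predictions, then score the rest.

    `probs` holds the network outputs, interpreted as P(label = 1);
    confidence is measured as distance from the 0.5 decision boundary.
    """
    confidence = np.abs(probs - 0.5)
    keep = confidence >= np.quantile(confidence, reject_fraction)
    predictions = probs[keep] > 0.5
    return np.mean(predictions != labels[keep].astype(bool))

# e.g. discarding the 0.2% least confident 25-cycle experiments:
# err = postselected_error_rate(probs, labels, reject_fraction=0.002)
```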
Figure 4: **Generalization to larger number of error correction cycles at code distance 11.** Fidelity (2 \(\times\) accuracy - 1) after up to 100,000 error-correction cycles (**A**) and the corresponding logical error per round (**B**) for PyMatching (green), MWPM-Corr (magenta) and for our ML decoder (blue) trained only on Pauli+ simulated experiments of up to 25 cycles, with 2B training samples (solid line) and 10B training samples (dashed line). Both training and test samples are Pauli+ (SNR=10, t = 0.01, 0.1% leakage). We only plot LER values where the corresponding fidelity is greater than 0.1. The data is generated from the same simulated experiments, stopped at different numbers of cycles.
### Discussion
#### 2.5.1 Advantages
In addition to improved error-suppression, our ML decoder inherently provides features desirable for a quantum error correction decoder.
**Trainable on raw data.** The network can be directly trained on examples from real hardware. As such, its performance is less dependent on the availability or accuracy of a noise model characterizing the device, whose noise profile is often difficult to obtain [37, 69].
**Ability to use rich inputs.** The network can process rich inputs, such as analog readout signals from the hardware [48], I/Q points, signatures of leakage [65], or potentially other side information such as temperature or calibration parameters [50, 37]. This leads to improved accuracy compared to processing stabilizer syndrome inputs alone (Fig. 1C, Fig. 3).
Figure 5: **The use of the network’s output as a confidence measure for post-selection.** For the ensembled model performance of our ML decoder, trained using 25-cycle Pauli+ simulated experiments (\(\text{SNR}=10\), \(t=0.01\), \(0.1\%\) leakage), tested on \(10^{9}\) samples. **(A)** Example calibration plot at distances 5 (green continuous line) and 11 (purple continuous line), with bootstrapped errors (small but present error bars). Black dashed line represents a perfectly calibrated classifier. **(B)** Logical error rate vs. the fraction of experiments discarded when discarding experiments with low confidence, (error bars: standard error of mean from values in each bin, visible for a \(\text{LER}\lesssim 10^{-8}\)).
**Calibrated output.** Instead of a binary output, the decoder outputs a calibrated error probability that can be used as a confidence measure, e.g. for post-selection.
#### 2.5.2 Throughput considerations
To avoid backlog, any quantum error correction decoder needs to achieve a throughput (processing time per cycle) commensurate with the syndrome accumulation speed of the quantum chip--currently \(1\,\mu\mathrm{s}\) for superconducting qubits [35, 37], and \(>1\,\mathrm{ms}\) for trapped ion devices [80, 77]. Latency (the delay between the final input to the decoder and its output) plays a secondary role in dictating the quantum computer's logical clock speed [91, 68]. Improving throughput and latency remains an important goal for both machine learning and matching-based decoders [48, 58, 85, 90].
While the model has not yet been optimized for inference speed, a host of techniques can be applied to speed it up (see materials and methods, Appendix A.8); yet even without those, when running our ML decoder on an accelerator for code distances up to 25 (untrained for distances \(>11\)), its throughput is already within about one to two orders of magnitude of the target rate of \(1\,\mu\mathrm{s}\) per cycle (see Fig. S9). Note that, in contrast to graph-based decoders, our decoder throughput is independent of the noise rate by design [95], avoiding slowdowns caused by spikes in noise rate. The recurrent architecture allows our decoder to process an indefinite number of error correction cycles, and benefits from parallel processing (particularly GPUs and TPUs). Consequently, the ML decoder's inference time scales efficiently across code distance with a single, fixed-size architecture.
#### 2.5.3 Training data requirements
Fig. S18 shows the number of training samples needed to achieve LER parity with MWPM (PyMatching) and MWPM-Corr. As observed before [95, 67], the data requirements grow exponentially: a hundredfold increase to go from distance 3 to distance 11. Indeed, the trend suggests that one might expect to pre-train a model for code distance 25 to reach parity with MWPM-Corr using \(10^{13}-10^{14}\) training examples. However, this one-off pre-training on a generic noise model can be followed by fast fine-tuning on hardware-specific datasets to achieve the best performance, without training a completely new model from scratch.
## 3 Conclusions and outlook
We have shown that a neural network decoder with inductive biases motivated by the quantum error correction problem can learn to decode the surface code with state-of-the-art error
suppression. On experimental data, it outperforms the previous best-in-class tensor network decoder, which takes orders of magnitude longer to run, although achieving the real-time throughput rates of a superconducting architecture remains a challenge.
Our ML decoder's advantage in accuracy persists at scale, as we continue to outperform correlated matching performance at distances up to 11. This performance is maintained over numbers of error correction cycles that far exceed the training regime. While the architecture itself can be executed on even larger code distances with only a moderate runtime increase, training the decoder to continue suppressing error beyond distance 11 is a further challenge.
As a machine learning model, our decoder's greatest strengths come from its ability to learn from real experimental data. It can seamlessly take advantage of rich inputs representing I/Q noise and leakage, without a human in the loop designing an explicit algorithm to use each novel feature. This ability to use available experimental information showcases the strength of machine learning in a scientific context.
While we anticipate other decoding techniques will continue to improve, we believe that our work provides evidence that machine learning decoders may achieve the necessary error suppression and speed to enable practical quantum computing.
|
2303.01486 | Understanding plasticity in neural networks | Plasticity, the ability of a neural network to quickly change its predictions
in response to new information, is essential for the adaptability and
robustness of deep reinforcement learning systems. Deep neural networks are
known to lose plasticity over the course of training even in relatively simple
learning problems, but the mechanisms driving this phenomenon are still poorly
understood. This paper conducts a systematic empirical analysis into plasticity
loss, with the goal of understanding the phenomenon mechanistically in order to
guide the future development of targeted solutions. We find that loss of
plasticity is deeply connected to changes in the curvature of the loss
landscape, but that it often occurs in the absence of saturated units. Based on
this insight, we identify a number of parameterization and optimization design
choices which enable networks to better preserve plasticity over the course of
training. We validate the utility of these findings on larger-scale RL
benchmarks in the Arcade Learning Environment. | Clare Lyle, Zeyu Zheng, Evgenii Nikishin, Bernardo Avila Pires, Razvan Pascanu, Will Dabney | 2023-03-02T18:47:51Z | http://arxiv.org/abs/2303.01486v4 | # Understanding plasticity in neural networks
###### Abstract
Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems. Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems, but the mechanisms driving this phenomenon are still poorly understood. This paper conducts a systematic empirical analysis into plasticity loss, with the goal of understanding the phenomenon mechanistically in order to guide the future development of targeted solutions. We find that loss of plasticity is deeply connected to changes in the curvature of the loss landscape, but that it often occurs in the absence of saturated units or divergent gradient norms. Based on this insight, we identify a number of parameterization and optimization design choices which enable networks to better preserve plasticity over the course of training. We validate the utility of these findings in larger-scale learning problems from the Arcade Learning Environment.
## 1 Introduction
It is a widely observed phenomenon that after training on a non-stationary objective, neural networks exhibit a reduced ability to solve new tasks (Lyle et al., 2021; Nikishin et al., 2022; Dohare et al., 2021). This loss of plasticity occurs most robustly when the relationship between inputs and prediction targets changes over time, and the network must learn to 'overwrite' its prior predictions (Lyle et al., 2021). While such scenarios are relatively rare in supervised learning, they are baked into the way that deep reinforcement learning (RL) agents are trained. Understanding how plasticity is lost, and whether this loss can be mitigated, is crucial if we wish to develop deep RL agents which can rise to the challenge of complex and constantly-changing environments. Existing methods to promote trainability act on a wide variety of potential mechanisms by which plasticity might be lost, including resetting of layers (Nikishin et al., 2022) and activation units (Dohare et al., 2021), and regularization of the features (Kumar et al., 2020; Lyle et al., 2021). While all of these works observe performance improvements, it is unlikely that they are all obtaining these improvements by the same mechanism. As a result, it is difficult to know how to improve on these interventions to further preserve plasticity.
This paper seeks to identify the mechanisms by which plasticity loss occurs. We begin with an analysis of two interpretable case studies, illustrating the mechanisms by which both adaptive optimizers and naive gradient descent can drive the loss of plasticity. Prior works have conjectured, implicitly or explicitly, that a variety of network properties might cause plasticity loss: we present a falsification framework inspired by the study of causally robust predictors of generalization (Dziugaite et al., 2020), and leverage this framework to show that loss of plasticity cannot be uniquely attributed to any of these properties. While difficult to characterize explicitly, we provide evidence that the curvature of the loss landscape induced by new tasks on trained parameters is a crucial factor determining a network's plasticity, particularly in value-based reinforcement learning algorithms.
We conclude by completing a broad empirical analysis of methods which aim to improve the ability of a network to navigate the loss landscape throughout training, ranging from architectural choices to regularization schemes. We find that architectural choices which have been conjectured to smooth out the loss landscape, such as categorical output representations and normalization layers, provide the greatest improvements to plasticity, while methods which perturb the parameters or provide other forms of regularization tend to see less benefit. To test the generality of these findings, we apply the best-performing intervention, layer normalization, to a standard DQN architecture and obtain significant improvements in performance on the Arcade Learning Environment benchmark. Our findings indicate that controlling loss landscape sharpness and optimizer stability are highly promising avenues to improve the robustness and usability of deep RL methods.
## 2 Background
It has long been observed that training a network first on one task and then a second will result in reduced performance on the first task (French, 1999). This phenomenon, known as catastrophic forgetting, has been widely studied in prior work. This paper concerns itself with a different phenomenon: in certain situations, training a neural network on a series of distinct tasks can result in worse performance on later tasks than what would be obtained by training a randomly initialized network of the same architecture.
### Preliminaries
**Temporal difference learning.** Plasticity loss naturally arises under non-stationarity; we will focus our analysis on temporal difference (TD) learning with neural networks, a setting known to induce significant non-stationarity. We assume the standard reinforcement learning problem of an agent interacting with an environment \(\mathcal{M}\), with observation space \(\mathcal{S}\), action space \(\mathcal{A}\), reward \(R\) and discount factor \(\gamma\), with the objective of maximizing cumulative reward. Networks trained via temporal difference learning receive as input sampled _transitions_ from an agent's interaction with the environment, of the form \(\tau_{t}=(s_{t-1},a_{t},r_{t},s_{t})\), where \(s_{t-1},s_{t}\in\mathcal{S}\), \(a_{t}\in\mathcal{A}\), and \(r_{t}=R(s_{t})\) some reward. We let \(\theta^{\prime}\) denote the _target parameters_; in practice, \(\theta^{\prime}\) is usually an outdated copy of \(\theta\) from a previous timestep, but other choices include setting it to be equal to the current parameters, or using a moving average of past values. The network \(f:\Theta\times\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is trained to minimize the temporal difference error
\[\ell(\theta,\tau_{t})=\left[f(\theta,s_{t-1},a_{t})-\Box(r_{t}+\gamma f(\theta ^{\prime},s_{t},a^{\prime}))\right]^{2} \tag{1}\]
where \(\Box\) denotes a stop-gradient, \(\gamma<1\) is the discount factor, and \(a^{\prime}\) is chosen based on the variant of TD learning used. Crucially, the regression target \(r_{t}+\gamma f(\theta^{\prime},s_{t},a^{\prime})\) depends on the parameters \(\theta^{\prime}\) and changes as learning progresses. This nonstationarity occurs even if the policy and input distribution are fixed, meaning that we can study the role of nonstationarity independent of the agent's exploration strategy. We will use the shorthand \(\ell(\theta)\) for the expectation of this loss over some input distribution.
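For concreteness, a minimal PyTorch sketch of the loss in Eq. (1) is shown below, with the stop-gradient \(\Box\) realized by evaluating the target under `torch.no_grad()`. The Q-learning-style choice of \(a^{\prime}\) (a max over next-state action values) is one common variant; the batch layout and function names are illustrative.

```python
import torch

def td_loss(net, target_net, batch, gamma=0.99):
    """Squared TD error of Eq. (1) for a batch of transitions.

    `batch` = (s_prev, a, r, s_next); `net` maps states to a vector of
    action values. The regression target is computed from the target
    parameters and held fixed, mirroring the stop-gradient in Eq. (1).
    """
    s_prev, a, r, s_next = batch
    q = net(s_prev).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values
    return ((q - target) ** 2).mean()
```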
**Loss landscape analysis.** We will be particularly interested in the study of the structure of the loss landscape traversed by an optimization algorithm. We will leverage two principal quantities in this analysis: the Hessian of the network with respect to some loss function, and the gradient covariance. The Hessian of a network \(f\) at parameters \(\theta\) with respect to some loss \(\ell(\theta)\) is a matrix defined as
\[H_{\ell}(\theta)=\nabla_{\theta}^{2}\ell(\theta)\in\mathbb{R}^{d\times d} \tag{2}\]
where \(d=|\theta|\) is the number of parameters. Of particular relevance to optimization is the eigenspectrum of the Hessian \(\Lambda(H_{\ell}(\theta))=(\lambda_{1}\geq\cdots\geq\lambda_{d})\). The maximal eigenvalue, \(\lambda_{1}\), can be interpreted as measuring the sharpness of the loss landscape (Dinh et al., 2017), and the condition number \(\lambda_{1}/\lambda_{d}\) has significant implications for convergence of gradient descent optimization in deep neural networks (Gilmer et al., 2022).
We will also take interest in the covariance structure of the gradients of different data points in the input distribution, a property relevant to both optimization and generalization (Fort et al., 2019; Lyle et al., 2022). We will estimate this covariance structure by sampling \(k\) training points \(\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\), and computing the matrix \(C_{k}\in\mathbb{R}^{k\times k}\) defined entrywise as
\[C_{k}[i,j]=\frac{\langle\nabla_{\theta}\ell(\theta,\mathbf{x}_{i}),\nabla_{ \theta}\ell(\theta,\mathbf{x}_{j})\rangle}{\|\nabla_{\theta}\ell(\theta, \mathbf{x}_{i})\|\|\nabla_{\theta}\ell(\theta,\mathbf{x}_{j})\|}. \tag{3}\]
If the off-diagonal entries of \(C_{k}\) contain many negative values, this indicates interference between inputs, wherein the network cannot reduce its loss on one region without increasing its loss on another. If the matrix \(C_{k}\) exhibits low rank (which, given a suitable ordering \(\sigma\) of the data points \(\mathbf{x}_{\sigma(1)},\ldots,\mathbf{x}_{\sigma(k)}\) will yield a block structure) then the gradients are degenerate and largely colinear, which can indicate either generalization when their dot product is positive, or interference when their dot product is negative.
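A sketch of how \(C_{k}\) can be estimated in PyTorch follows; it loops over examples to obtain per-example gradients, which is adequate for the modest values of \(k\) used for analysis. The function names are illustrative.

```python
import torch

def gradient_covariance(net, loss_fn, xs, ys):
    """Estimate the normalized gradient covariance C_k of Eq. (3).

    Computes one flattened, unit-norm parameter gradient per example,
    then returns the k x k matrix of pairwise cosine similarities.
    """
    grads = []
    for x, y in zip(xs, ys):
        net.zero_grad()
        loss_fn(net(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        g = torch.cat([p.grad.flatten() for p in net.parameters()
                       if p.grad is not None])
        grads.append(g / g.norm())
    G = torch.stack(grads)  # shape (k, num_params)
    return G @ G.T
```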
### Defining plasticity
The study of plasticity has concerned neuroscience for several decades (Mermillod et al., 2013; Abbott and Nelson, 2000), but has only recently emerged as a topic of interest in deep learning (Berariu et al., 2021; Ash and Adams, 2020). Classical notions of complexity from the computational learning theory literature (Vapnik, 1968; Bartlett and Mendelson, 2002) evaluate whether a hypothesis class contains functions that capture arbitrary patterns, but are agnostic to the ability of a particular search algorithm, such as gradient descent, to find them. A billion-parameter neural network architecture might have the _capacity_ to represent a rich class of functions, but if all of its activation units are saturated then it cannot be trained by gradient descent to realize this capacity.
Plasticity, broadly construed, refers to a network's ability to learn new things. Learning can be quantified as either the reduction in the model's training loss after optimization, or as the performance of the model on held-out test points. Studies of plasticity in both supervised and reinforcement learning have observed reduced generalization performance
as a result of overfitting to limited data early in training (Ash and Adams, 2020; Berariu et al., 2021; Igl et al., 2021). However, reinforcement learning tasks often lack a natural notion of a test set as the data gathered by the agent is not generally independent and identically distributed, and many works have identified impaired ability to even reduce the learning objective on the training distribution (Dohare et al., 2021; Lyle et al., 2021; Nikishin et al., 2022). This work will leverage the formulation of Lyle et al. (2021), who define plasticity as the ability of a network to update its predictions in response to a wide array of possible learning signals on the input distribution it has been trained on. This formulation is applicable to learning problems which do not admit a straightforward train-test split, as is the case in many deep RL environments.
Concretely, we consider an optimization algorithm \(\mathcal{O}:(\theta,\ell)\mapsto\theta^{*}\) which takes initial parameters \(\theta\in\Theta\) and some objective function \(\ell:\Theta\rightarrow\mathbb{R}\), and outputs a new set of parameters \(\theta^{*}\). The parameters \(\theta^{*}\) need not be an optimum: \(\mathcal{O}\) could, for example, run gradient descent for five steps. In order to measure the flexibility with which a network can update its predictions under this optimization algorithm, we consider a distribution over a set of loss functions \(\mathcal{L}\) each defined by some learning objective. For example, consider a distribution over regression losses
\[\ell_{f,\mathbf{X}}(\theta)=\mathbb{E}_{\mathbf{x}\sim\mathbf{X}}[(f(\theta,\mathbf{x})-g_{\omega}(\mathbf{x}))^{2}] \tag{4}\]
where \(g_{\omega}\) is induced by a random initialization \(\omega\) of a neural network. In order to match the intuition that more adaptable networks should have greater plasticity, we set a baseline value \(b\) to be the loss obtained by some baseline function (e.g. if \(\ell\) is a regression loss on some set of targets, we set \(b\) to be the variance of the targets), and then define plasticity as the expectation of the final loss obtained by the optimization process, started from an initial parameter value \(\theta_{t}\) on a sampled loss function \(\ell\), subtracted from the baseline \(b\).
\[\mathcal{P}(\theta_{t})=b-\mathbb{E}_{\ell\sim\mathcal{L}}[\ell(\theta_{t}^{* })]\text{ where }\theta_{t}^{*}=\mathcal{O}(\theta_{t},\ell) \tag{5}\]
We then define the loss of plasticity over the course of a trajectory \((\theta_{t})_{t=0}^{N}\) as the difference \(\mathcal{P}(\theta_{t})-\mathcal{P}(\theta_{0})\). We note that this definition of plasticity loss is independent of the value of the baseline \(b\), i.e. the difficulty of the probe task for the network, allowing us to measure the relative change in performance of checkpoints taken from a training trajectory.
## 3 Methodology and motivating questions
The following sections will present a series of experiments which tease apart different causal pathways by which plasticity loss occurs and evaluate the predictive power of a range of hypotheses concerning the root causes thereof. We now outline the experimental methodology and research questions underpinning this investigation.
### Measuring plasticity
In order to determine whether a candidate intervention preserves plasticity, we must first set out a consistent standard by which we will measure plasticity. Given a suitably generic class of target functions inducing the losses \(\ell\), equation 4 characterizes the adaptability of the network to arbitrary new learning signals from the unknown set of possible future tasks. We therefore construct a set of regression targets to sample over, corresponding to a uniform prior over possible future update directions. A different distribution over future target functions might give different numerical results; however, we believe that a uniform distribution captures a more universal notion of plasticity.
In our empirical evaluations, we will set \(\mathbf{X}\) to be the set of transitions gathered by an RL agent and stored in some replay buffer, and \(f\) to be the neural network architecture. Given some offset \(a\in\mathbb{R}\), we will apply the transformation \(g(x)=a+\sin(10^{5}f(\mathbf{x};\omega_{0}))\), with \(\omega_{0}\) sampled from the same distribution as \(\theta_{0}\), to construct a challenging prediction objective which measures the ability of the network to perturb its predictions in random directions sampled effectively uniformly over the input space. Because the mean prediction output by a deep RL network tends to evolve away from zero over time as the policy improves and the reward propagates through the value function, we will set \(a\) to be equal to the network's mean prediction in order not to bias the objective in favour of random initializations, which have mean much closer to zero. The optimizer \(\mathcal{O}\) will be identical to that used by the network on its primary learning objective, and we found running this optimizer for a budget of two thousand steps enabled reasonably efficient iteration time while also providing enough opportunity for most random initializations to solve the task.
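Putting the pieces together, the probe procedure can be sketched as follows (PyTorch, with illustrative names; the paper uses the same optimizer as the main task, stood in here by Adam):

```python
import copy
import torch

def probe_plasticity(net, make_net, xs, steps=2000, lr=1e-3):
    """Loss after fitting a random sine-transformed probe target.

    Targets follow g(x) = a + sin(1e5 * f(x; w0)), with w0 a fresh
    random initialization (`make_net()`) and offset `a` the current
    network's mean prediction. Lower final loss = higher plasticity.
    """
    with torch.no_grad():
        targets = net(xs).mean() + torch.sin(1e5 * make_net()(xs))
    probe = copy.deepcopy(net)  # leave the training network untouched
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((probe(xs) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((probe(xs) - targets) ** 2).mean().item()
```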
### Environments
The experimental framework we consider, and which will be touched on in each of the following sections is as follows. We construct a simple MDP analogue of image classification, i.e. the underlying transition dynamics are defined over a set of ten states and ten actions, and the reward and transition dynamics depend on whether or not the action taken by the agent is equal to the state's latent label. We construct three variants of a block MDP whose state space can be given by the discrete set \(\{0,\dots,9\}\) and whose observation space is given by either the CIFAR-10 or MNIST dataset.
**True-label:** each state \(s\) of the MDP produces an observation from that class in the underlying classification dataset. Given action \(a\), the reward is the indicator function \(\delta_{a=s}\). The MDP then randomly transitions to a new state.
**Random-label:** follows the same dynamics as the previous environment, but each image is assigned a random label
in \(\{0\dots 9\}\), and the observation from an MDP state \(i\) is sampled from images with (randomized) label \(i\).
**Sparse-reward:** exhibits the same observation mapping as _true-label_. The reward is equal to \(\delta_{a=s=9}\). The MDP transitions to a random state if \(a\neq s\) and otherwise to \(s+1\).
We design these environments to satisfy two principal desiderata: first, that they present visually interesting prediction challenges with varying degrees of reward smoothness and density, and second that they allow us to isolate non-stationarity due to policy and target network updates independent of a change in the state visitation distribution. In the true-label and random-label variants, the transition dynamics do not depend on the agent's action, whereas in the sparse environment the policy influences the state visitation distribution. The different reward functions allow us to compare tasks which are aligned with the network's inductive bias (in the true-label task) and those which are not (the random-label task).
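A minimal sketch of the true-label variant is given below (the class and method names are ours); the random-label and sparse-reward variants modify the label assignment and the reward/transition rules respectively.

```python
import numpy as np

class TrueLabelMDP:
    """Block MDP over 10 latent states with image observations.

    `images_by_class[i]` holds the dataset images labeled i (MNIST or
    CIFAR-10). Reward is 1 iff the action matches the latent state;
    transitions are uniformly random, so the policy does not affect
    the visitation distribution.
    """
    def __init__(self, images_by_class, seed=0):
        self.images = images_by_class
        self.rng = np.random.default_rng(seed)
        self.state = int(self.rng.integers(10))

    def observe(self):
        imgs = self.images[self.state]
        return imgs[self.rng.integers(len(imgs))]

    def step(self, action):
        reward = float(action == self.state)
        self.state = int(self.rng.integers(10))  # uniform random transition
        return self.observe(), reward
```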
### Outline of experiments
The experiments presented in Sections 4 and 5 aim to answer a fundamental question: **what happens when neural networks lose plasticity?** Section 4 constructs two experimental settings which illuminate phenomena driving two very different forms of plasticity loss. The first constructs a non-stationary learning problem that induces extreme forms of instability in adaptive optimizers, leading to plasticity loss. The second identifies a bias in the dynamics of gradient descent which leads to a progressive sharpening of the loss landscape of not just the current task, but also new tasks, by contrasting a gradient descent trajectory with a random walk in parameter space.
Section 5 asks **what properties _cause_ plasticity loss?** Disentangling cause from correlation is a notoriously difficult problem throughout science and economics. We put a number of quantities which have been conjectured to drive plasticity loss to the test, evaluating the consistency of quantities such as weight norm, feature rank, and the number of dead units in the network across the range of tasks outlined in Section 3.2. Crucial to this investigation is the notion of a causally robust predictor: a quantity which causally influences plasticity should exhibit consistency across a variety of experimental settings, such as different learning environments or network architectures. We follow up the largely negative results of these experiments with a qualitative analysis of learning curves on the probe tasks described in Section 3.1 that emphasizes the critical role of the loss landscape in plasticity.
Section 6.2 addresses the question: **how can we mitigate plasticity loss?** It evaluates the effectiveness of a range of interventions on the network architecture and on the optimization protocol, focusing on methods known to increase the smoothness of the loss landscape, applying the same evaluation protocol as described in this section in order to measure plasticity across our classification MDP testbed.
## 4 Two simple studies on plasticity
We begin with some interpretable examples of learning problems where plasticity loss occurs. These examples illustrate how the design of optimizers can interact with nonstationarity to produce instabilities that drive plasticity loss in one of the examples above, and explore how the dynamics of gradient-based optimizers might affect more subtle properties of the loss landscape.
### Optimizer instability and non-stationarity
The robustness of existing optimizers across a wide range of datasets and network architectures has played a key role in the widespread adoption of deep learning methods. For example, the Adam optimizer (Kingma & Ba, 2015) with a learning rate of \(10^{-3}\) will often yield reasonable initial results on a range of network architectures from which the practitioner can iterate. However, when the assumptions on stationarity underlying the design of this optimizer no longer hold, the optimization process can experience catastrophic divergence, killing off most of the network's ReLU units. We can see an example of this in a simple non-stationary task in Figure 1. A two-hidden-layer fully-connected neural network is trained to memorize random labels of MNIST images (full details provided in Appendix A.1). After a fixed training budget, the labels are re-randomized, and the network continues training from its current parameters. This process quickly leads a default Adam optimizer to diverge, saturating most of its ReLU units and resulting in trivial performance on the task that a freshly initialized network could solve perfectly.
Figure 1: Abrupt task changes can drive instability in optimizers which depend on second-order moment estimates for adaptive learning rate scaling. Setting these estimators to be more robust to small gradient norms and to update moment estimates more quickly mitigates this issue.
The mechanism of this phenomenon emerges when we consider the update rule for Adam, which tracks a second-order estimate \(\hat{v}_{t}\) along with a first-order moment estimate \(\hat{m}_{t}\) of the gradient via an exponential moving average
\[u_{t}=\alpha\frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}+\bar{\epsilon}}+\epsilon}. \tag{6}\]
Gradients tend to have norm proportional to the training loss. When the loss changes suddenly, as is the case when the perfectly-memorized MNIST labels are re-randomized (or when the target network is updated in an RL agent), \(\hat{m}_{t}\) and \(\hat{v}_{t}\) will no longer be accurate estimates of their moment distributions. Under the default hyperparameters for deep supervised learning, \(\hat{m}_{t}\) is updated more aggressively than \(\hat{v}_{t}\), and so the updates immediately after a task change will scale as a large number divided by a much smaller number, contributing to the instability we observe in Figure 1. In this instance, the solution is simple: we simply increase \(\epsilon\) and set a more aggressive decay rate for the second-moment estimate, and the network avoids catastrophic instability. Intriguingly, a large value of \(\epsilon\) is frequently used in deep RL algorithms such as DQN (Mnih et al., 2015) relative to the default provided by optimization libraries, suggesting that the community has implicitly converged towards optimizer hyperparameters which promote stability under nonstationarity.
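This instability can be reproduced without any network at all, by tracking Adam's moment estimates through an abrupt change in gradient scale. The sketch below (default \(\beta_1=0.9\), \(\beta_2=0.999\)) shows that after a long phase of near-zero gradients, a sudden return to unit-scale gradients makes the update magnitude overshoot the usual \(\lesssim 1\) bound several-fold for tens of steps, because \(\hat{m}_{t}\) adapts much faster than \(\hat{v}_{t}\).

```python
import numpy as np

b1, b2, eps = 0.9, 0.999, 1e-8  # standard Adam defaults
m, v, t = 0.0, 0.0, 0

def adam_update_scale(g):
    """One Adam moment update; returns |update| in units of the lr."""
    global m, v, t
    t += 1
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    return abs(m_hat) / (np.sqrt(v_hat) + eps)

for _ in range(2000):                 # converged phase: tiny gradients
    adam_update_scale(1e-6)
spikes = [adam_update_scale(1.0) for _ in range(100)]  # abrupt task change
print(max(spikes))                    # several times larger than 1
```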
### Loss landscape evolution under non-stationarity
Even when optimization is sufficiently stable to avoid saturated units, prior work still observes reductions in network plasticity (Lyle et al., 2021). The causes of this phenomenon are more difficult to tease apart; neural network initializations have been tuned over several decades to maximize trainability, and many properties of the network change during optimization which could be driving the loss of plasticity. A natural question we can ask in RL is whether the optimization dynamics followed by a network bias the parameters to become less trainable, or whether the loss of plasticity is a natural consequence of any perturbation away from a carefully chosen initialization distribution.
We frame this question as a controlled experiment, in which we compare the evolution of two coupled updating procedures: one follows gradient-based optimization on a non-stationary objective (full details in Appendix A.1); the second follows a random walk, where we add a Gaussian perturbation to the parameters with norm equal to the size of the gradient-based optimizer update. Both trajectories start from the same set of randomly initialized parameters and apply updates of equal norm; the only difference is the direction each step takes. We evaluate how the structure of the local loss landscape with respect to a probe task evolves in both networks by comparing the Hessian eigenvalue distribution, and by comparing the covariance structure \(C_{k}\) of gradients on sampled inputs, with \(k=512\) equal to the batch size used for training. We compute the Hessian matrix for a regression loss towards a perturbation \(\epsilon\sim\mathcal{N}(0,1)\) of the network's current output, i.e. \(\ell(\theta)=[f_{\theta}(\mathbf{X})-\Box f_{\theta}(\mathbf{X})+\epsilon]^{2}\) where \(\Box\) indicates a stop-gradient, to obtain a proxy for how easily the network can update its predictions in arbitrary directions; we do not evaluate the Hessian or gradient structure of the primary learning objective as these will trivially differ between the trajectories.
We observe that the spectral norm of the Hessian of both processes increases over time; however, the outliers of the spectrum grow significantly faster in the network trained with gradient descent. Additionally, the network trained with gradient descent begins to exhibit negative interference between gradients, a phenomenon not observed in the Brownian motion. In other words, the inductive bias induced by gradient descent can push the parameters towards regions of the parameter space where the local loss landscape is less friendly to optimization towards arbitrary new objectives than what would be obtained by blindly perturbing randomly initialized parameters.
## 5 Explaining plasticity loss
While in some instances it is straightforward to deduce the cause of plasticity loss, most learning problems induce
Figure 2: Evolution of the gradient and Hessian under gradient-based optimization compared to random perturbation of the parameters. Top: the density of the spectrum of the Hessian over different values of \(\lambda\) exhibits a larger outlier peak after gradient descent. Bottom: gradient descent induces more gradient interference between inputs and greater curvature of the loss landscape.
complex learning dynamics that make it difficult to determine root causes. This section will show that a number of plausible explanations of plasticity loss, including the rank of the network's features, the number of saturated units, the norm of its parameters, and the rank of the weight matrices, do not identify robust causal relationships. We provide some evidence supporting the hypothesis that plasticity loss arises due to changes in the network's loss landscape, and conclude with a discussion of the potential trade-offs that must be faced between preserving a trainable gradient structure and accurately predicting a value function.
### Experimental setting
We train a set of DQN agents on each environment-observation space combination in the classification MDP set described in Section 3.2, and evaluate the ability of each network to fit a randomly generated set of target functions as described in Section 2.2 after a fixed number of training steps. In the experiments shown here, we run the DQN agents with a target network update period of 1,000 steps; as mentioned previously, this is the principal source of non-stationarity in the true-label and random-label tasks. Every 5,000 steps, we pause training, and from a copy of the current parameters \(\theta_{t}\) we train the network on a set of new regression problems to probe its plasticity. We log the loss at the end of 2,000 steps of optimization, sampling 10 different random functions, then resume training of the RL task from the saved parameters \(\theta_{t}\). We consider two network architectures: a fully-connected network (MLP) and a convolutional network architecture (CNN). Full details of the environments are included in Appendix A.2.
### Falsification of prior hypotheses
Prior work has proposed a number of plausible explanations of why neural networks may exhibit reduced ability to fit new targets over time. Increased weight norm (Nikishin et al., 2022), low rank of the features or weights (Kumar et al., 2020; Gulcehre et al., 2022), and inactive features (Lyle et al., 2021; Dohare et al., 2021) have all been discussed as plausible mechanisms by which plasticity loss may occur. However, the explanatory power of these hypotheses has not been rigorously tested. While a correlation between a particular variable and plasticity loss can be useful for diagnosis, only a causal relationship indicates that intervening on that variable will necessarily increase plasticity.
This section will seek to answer whether the above candidate explanations capture causal pathways. Our analysis is based on a simple premise: that for a quantity to exhibit explanatory power over plasticity loss, it should exhibit a consistent correlation across different experimental interventions (Bühlmann, 2020). If, for example, parameter norm is positively correlated with plasticity in one observation space and negatively correlated in another, then it can be ruled out as a causal factor in plasticity loss. To construct this experiment, we train 128 DQN agents under a range of tasks, observation spaces, optimizers, and seeds. Over the course of training, we log several statistics of the parameters and activations, along with the plasticity of the parameters at each logging iteration.
In Figure 3, we show scatterplots illustrating the relationship between plasticity and each statistic, where each point in the scatterplot corresponds to a single training run. We see that for each of four quantities, there exists a learning problem where the quantity positively correlates with plasticity, and one in which it exhibits a negative correlation. In many learning problems the correlation between plasticity loss and the quantity of interest is nonexistent. In all cases we note that the correlation with plasticity is already quite weak; even so, the ability to reverse the sign of this correlation is a further mark against the utility of these simple statistics as causal explanations of plasticity. For example, we see a positive correlation between weight norm and plasticity loss in environments which use CIFAR-10 observations, but a slight negative correlation in environments which sample observations from MNIST. A similar reversal happens with respect to feature rank across environments.
### Loss landscape evolution during training
If the simple statistics we have considered thus far lack explanatory power, how should we characterize plasticity loss? One open question is whether the reduced ability to fit arbitrary new targets arises because the optimization process gets caught in local optima, or whether it arises due to overall slow or inconsistent optimization progress. To answer this question, we turn our attention towards the learning curves obtained by networks when we ask them to fit new target functions. We study these learning curves primarily because they convey precisely the ease or difficulty of navigating the loss landscape. In particular, the learning curve tells us whether optimization is getting trapped in bad minima (in which case the learning curve would hit an early plateau at a large loss value), or whether the network has greater difficulty reducing the loss enough to find a minimum in the first place (corresponding to a flatter slope).
We show in Figure 4 the learning curves obtained by an optimization trajectory from parameters \(\theta_{t}\) on the probe task from different timesteps \(t\) of training on the RL task. We see that parameters from early training checkpoints quickly attain low losses, but that the slopes of these learning curves are monotonically increasing with the parameter age \(t\). Of particular note is the increasing variance of the curves: in the full-batch case, this non-monotonicity is associated with increasing loss landscape sharpness (Cohen et al., 2021). In the mini-batch optimization setting, we observed both
increasing interference between minibatches as well as non-monotonicity in the loss even on the minibatch on which the gradient was computed. In short, we see that it is increasing difficulty of navigating the loss landscape, rather than poor local minima, that appears to drive plasticity loss.
## 6 Solutions
Thus far, we have demonstrated that neural networks can lose plasticity even in a task as simple as classifying MNIST digits, provided that a degree of non-stationarity is introduced into the optimization dynamics. We now turn our attention to means of reducing or reversing this loss of plasticity. Section 6.1 will evaluate whether scaling alone can eliminate plasticity loss. Section 6.2 will evaluate the effects of a variety of interventions on plasticity. We test the applicability of these findings to larger scale tasks in Section 6.3.
### The role of scaling on plasticity
Before considering sophisticated methods to address plasticity loss, we must first answer the question of whether this is simply a disease of small networks. In the context of the impressive successes of large models and the resultant 'scaling laws' phenomenon (Kaplan et al., 2020), it is entirely plausible that plasticity loss, like many other challenges, vanishes in the limit of infinite computation. We find that while plasticity loss is easiest to induce in extreme forms in small networks, scaling a CNN to the limit of a single GPU's memory is insufficient to eliminate plasticity loss even in the simple classification tasks described in the previous section. We visualize the relationship between network width and plasticity loss in Figure 5.
These observations suggest that plasticity loss is unlikely to be the limiting factor for sufficiently large networks on sufficiently simple tasks. However, for tasks which do not align with the inductive bias of the network (as in the MLPs trained on CIFAR-10), or for which the network is not sufficiently expressive (as is the case for the small networks of any architecture), we see a reduction in the ability to fit new targets over time. Because we typically cannot guarantee a priori that a learning problem will fall in the first category, we therefore turn our attention to other design choices which might further insure networks against plasticity loss.
Figure 4: Plasticity loss corresponds to slower training progress, rather than higher plateaus, in the networks studied in this paper. We plot learning curves on a new target fitting task starting from network checkpoints at different points in training. This figure illustrates a CNN trained on the true-label MDP described in Section 3.2 with a CIFAR-10 observation space.
Figure 5: We observe a consistent decline in plasticity loss across different target update frequencies as a result of scaling in several architecture-dataset combinations; however, even when scaling the architecture to the point where it no longer fits on a single GPU, we are still unable to completely eliminate plasticity loss on these simple classification-inspired problems.
Figure 3: Results of our experimental falsification design: for any variable we consider, it is possible to construct a set of learning problems in which the variable exhibits either a positive or a negative correlation with plasticity. For example, weight norm and weight rank exhibit differing correlation signs depending on the observation space, while feature rank and sparsity depend on the reward structure of the environment.
### Interventions in toy problems
In this section we evaluate the effect of a variety of interventions on plasticity loss. We evaluate interventions on the same task used in Section 5.1, training for 100 iterations of 1000 steps. We consider four architectures: a multi-layer perceptron (MLP), a convolutional neural network (CNN) without skip connections, a ResNet-18 (He et al., 2016), and a small transformer based on the Vision Transformer (ViT) architecture (Dosovitskiy et al., 2020).
We consider the following interventions:

* **resetting** the last layer of the network at each target network update, a simplified variant of the scheme proposed by Nikishin et al. (2022);
* **resetting the network optimizer state** at each target network update;
* adding **layer normalization** (Ba et al., 2016) after each convolutional and fully-connected layer of the CNN and the MLP;
* performing **Shrink and Perturb** (Ash and Adams, 2020): multiplying the network weights by a small scalar and adding a perturbation equal to the weights of a randomly initialized network;
* leveraging a **two-hot** encoding, a distributional formulation of scalar regression wherein the network outputs a categorical probability distribution over fixed support and minimizes a cross-entropy loss with respect to an encoding of the regression target which distributes mass across two adjacent bins of the support (see the sketch after this list);
* **spectral normalization** of the initial linear layer of the CNN and the MLP (Gogianu et al., 2021);
* **weight decay**, setting the \(\ell_{2}\) penalty coefficient to \(10^{-5}\).
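As a concrete illustration of the two-hot encoding referenced above, the sketch below encodes a scalar regression target as a distribution over a fixed support; training then minimizes cross-entropy between the network's categorical output and this encoding. The support and function names are illustrative.

```python
import numpy as np

def two_hot(y, support):
    """Encode scalar y over sorted `support`, mass on two adjacent bins.

    The weights are chosen so the expectation of the encoding equals y
    (after clipping to the support range).
    """
    y = float(np.clip(y, support[0], support[-1]))
    hi = int(np.searchsorted(support, y))
    lo = max(hi - 1, 0)
    p = np.zeros(len(support))
    if hi == lo or support[hi] == support[lo]:
        p[hi] = 1.0
    else:
        w = (y - support[lo]) / (support[hi] - support[lo])
        p[lo], p[hi] = 1.0 - w, w
    return p

support = np.linspace(-10.0, 10.0, 51)
assert np.isclose(two_hot(3.3, support) @ support, 3.3)
```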
These methods were chosen to be representative samples of a number of approaches to mitigating plasticity loss: resetting the optimizer state and last layer temporarily remove a source of poor conditioning from the optimization process; layer normalization and residual connections tend to make networks more robust to optimizer choices; weight decay and spectral normalization both regularize the parameters of the network in different ways; shrink and perturb applies a perturbation to the current parameters without significantly changing the decision boundary (though we note that for regression tasks this will still influence the scale of the network outputs, and so may not be suitable).
We visualize our key takeaways in Figure 6, which compares plasticity loss after 100 iterations of training on each of the architecture-intervention combinations. Overall, explicitly constructing a network parameterization which smooths out the loss landscape is the most effective means of preserving plasticity of all approaches we have considered, and has a greater effect on plasticity than resetting the final layer of the network. We visualize some learning curves of networks with and without layer normalization in Figure 17 in the supplementary material.
We note that while the two-hot encoding does demonstrate significant reductions in plasticity loss, it does so at the cost of stability of the learned policy in several instances we considered. Additionally, this intervention required significantly different optimizer hyperparameters from the regression parameterization, suggesting that while it can be a powerful tool to stabilize optimization, it might not be suitable as a plug-in solution to mitigate plasticity loss in an existing protocol.
### Application to larger benchmarks
We now evaluate whether the benefits of layer normalization on plasticity in toy classification tasks translate to larger-scale benchmarks. We use the standard implementation of double DQN (Van Hasselt et al., 2016) provided by Quan and Ostrovski (2020), and evaluate three seeds on each of the 57 games in the Arcade Learning Environment benchmark (Bellemare et al., 2013). We use the RMSProp optimizer, \(\epsilon\)-greedy exploration, and frame stacking (Mnih et al., 2015). Full implementation details can be found in Appendix A.3. The only difference between the baseline implementation and our modification is the incorporation of layer normalization after each hidden layer in the network.
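A sketch of what this modification might look like in PyTorch is shown below, using the standard Nature-DQN convolutional torso with a LayerNorm inserted after each hidden layer; the exact placement relative to activations in the paper's implementation may differ, and the shapes assume 84x84x4 stacked frames.

```python
import torch.nn as nn

class DQNWithLayerNorm(nn.Module):
    """Nature-DQN torso (Mnih et al., 2015) plus LayerNorm after each
    hidden layer; the normalization layers are the only change from
    the baseline architecture."""
    def __init__(self, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 84x84 -> 20x20
            nn.LayerNorm([32, 20, 20]), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),  # 20x20 -> 9x9
            nn.LayerNorm([64, 9, 9]), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),  # 9x9 -> 7x7
            nn.LayerNorm([64, 7, 7]), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),
            nn.LayerNorm(512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        return self.net(x)
```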
We see in Figure 7 that the introduction of layer normalization robustly improves performance across the benchmark. We emphasize that we did not perform any optimizer or other hyperparameter tuning. While this improvement cannot be definitively attributed to a reduction in plasticity loss from the evidence provided, it points towards the regularization of the optimization landscape as a fruitful direction towards more robust RL agents. We further observe that many of the environments where layer normalization offers a significant boost to performance are those where the gradient covariance structure of the default architecture is degenerate or where the Hessian is ill-conditioned, and the LN networks which obtain performance improvements tend to have correspondingly better behaved gradient covariance. We provide a hint into this phenomenon in Figure 7, and defer the complete evaluation over all 57 games to Appendix B.3.
Figure 6: Effect of architectural and optimization interventions on plasticity loss. Colour indicates change in loss on challenge targets between initial and final epoch of training on RL task. Darker shading indicates less plasticity loss.
## 7 Related Work
**Trainability:** the problem of finding suitable initializations for neural networks to enable training has a long history (Glorot and Bengio, 2010; He et al., 2015; Sutskever et al., 2013). Without careful initialization and architecture design, it is common to run into the issue that gradients will either explode or vanish as the depth of the network grows (Yang and Schoenholz, 2017). ResNets (He et al., 2016) in particular are known to resolve many of these pathologies by biasing each layer's mapping towards the identity function, leading to better-behaved gradients (Balduzzi et al., 2017). Mean-field analysis (Yang and Schoenholz, 2017; Schoenholz et al., 2017; Yang et al., 2019), information propagation (Poole et al., 2016), and deep kernel shaping (Zhang et al., 2021b; Martens et al., 2021) have all been applied to study trainability in neural networks. A wealth of prior work additionally studies the role of loss landscape smoothness in generalization and performance (Li et al., 2018; Santurkar et al., 2018; Ghorbani et al., 2019; Park and Kim, 2022). Other works highlight the chaotic behaviour of early training periods (Jastrzebski et al., 2020), in particular the 'edge of stability' phenomenon (Cohen et al., 2021) and the 'catapault mechanism' (Lewkowycz et al., 2020), and relate closely to the observations grounding 'linear mode connectivity' (Frankle et al., 2020) to explain generalization and trainability in deep neural networks; however, these approaches all focus on supervised learning with a stationary objective.
**Resetting + continual learning:** a broad literature studies how networks can continue to learn as their training distribution changes, including work on resetting and warm-starting (Zhang et al., 2021a; Berariu et al., 2021) and on continual learning more generally (Hadsell et al., 2020; Rolnick et al., 2019). Class-incremental learning (Ostapenko et al., 2019) differs from our setting because the input distribution changes, not the functional relationship between inputs and targets. Tangarasa et al. (2020) propose a modified Hebbian learning rule. Studies of plasticity in task-shift continual learning usually focus on the ability to learn under new input distributions (Rolnick et al., 2019), rather than new targets. Most related to our study is the identification of the _loss_ of plasticity as a potentially limiting factor in deep reinforcement learning (Lyle et al., 2021; Dohare et al., 2021). Our study is further motivated by the rich literature studying the effect of resetting and distillation on performance (Fedus et al., 2020; Nikishin et al., 2022; Igl et al., 2021; Schmitt et al., 2018).
## 8 Conclusions
The findings of this paper highlight a divide between the study of curriculum learning and foundation models, which identify suitable early training objectives to accelerate learning and improve generalization on later tasks, and the phenomena we have identified concerning the loss of plasticity in non-stationary prediction problems. However, as reinforcement learning algorithms scale up to more complex tasks, the divide between these regimes shrinks. While it is possible that in many settings, plasticity loss is not a limiting factor in network performance and so need not be a concern for many of the relatively small environments used to benchmark algorithms today, we conjecture that as the complexity of the tasks to which we apply RL grows, so will the importance of preserving plasticity.
The findings of this paper point towards stabilizing the loss landscape as a crucial step towards promoting plasticity. This approach is likely to have many ancillary benefits, presenting an exciting direction for future investigation. A smoother loss landscape is both easier to optimize and tends to exhibit better generalization, and it is an exciting direction for future work to better disentangle the complementary roles of memorization and generalization in plasticity.
Figure 7: Layer normalization improves performance and changes the gradient covariance structure in DDQN agents. Top: Human-normalized improvement score (Wang et al., 2016) of adding layer normalization over the default double DQN agent. Bottom: Gradient covariance matrices for Freeway (left) and Kangaroo (right). In environments where layer normalization significantly improves performance, it also induces weaker gradient correlation. |
2306.09189 | High-Resolution Convolutional Neural Networks on Homomorphically
Encrypted Data via Sharding Ciphertexts | Recently, Deep Convolutional Neural Networks (DCNNs) including the ResNet-20
architecture have been privately evaluated on encrypted, low-resolution data
with the Residue-Number-System Cheon-Kim-Kim-Song (RNS-CKKS) homomorphic
encryption scheme. We extend methods for evaluating DCNNs on images with larger
dimensions and many channels, beyond what can be stored in single ciphertexts.
Additionally, we simplify and improve the efficiency of the recently introduced
multiplexed image format, demonstrating that homomorphic evaluation can work
with standard, row-major matrix packing and results in encrypted inference time
speedups by $4.6-6.5\times$. We also show how existing DCNN models can be
regularized during the training process to further improve efficiency and
accuracy. These techniques are applied to homomorphically evaluate a DCNN with
high accuracy on the high-resolution ImageNet dataset, achieving $80.2\%$ top-1
accuracy. We also achieve an accuracy of homomorphically evaluated CNNs on the
CIFAR-10 dataset of $98.3\%$. | Vivian Maloney, Richard F. Obrecht, Vikram Saraph, Prathibha Rama, Kate Tallaksen | 2023-06-15T15:16:16Z | http://arxiv.org/abs/2306.09189v2 | High-Resolution Convolutional Neural Networks on Homomorphically Encrypted Data via Sharding Ciphertexts
###### Abstract
Recently, Deep Convolutional Neural Networks (DCNNs) including the ResNet-20 architecture have been privately evaluated on encrypted, low-resolution data with the Residue-Number-System Cheon-Kim-Kim-Song (RNS-CKKS) homomorphic encryption scheme. We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts. Additionally, we simplify and improve the efficiency of the recently introduced multiplexed image format, demonstrating that homomorphic evaluation can work with standard, row-major matrix packing and results in encrypted inference time speedups by \(4.6-6.5\times\). We also show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy. These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset for the first time, achieving \(80.2\%\) top-1 accuracy. We also achieve the highest reported accuracy of homomorphically evaluated CNNs on the CIFAR-10 dataset of \(98.3\%\).
## 1 Introduction
Deep learning has emerged as a powerful tool for solving image processing tasks due to its ability to automatically learn relevant features from raw data. Convolutional Neural Networks (CNNs), which are a type of deep learning model specifically designed for image processing, have achieved state-of-the-art performance on a variety of image processing tasks such as image classification [13], object detection [17], and segmentation [20].
Fully homomorphic encryption (FHE) [9; 19] is a technique enabling computation directly on encrypted data, and in particular, enabling Privacy Preserving Machine Learning (PPML). FHE has potential societal impact in applications where user and data privacy are critical, such as in cloud computing, healthcare analytics, and defense applications. However, adoption of FHE has been limited due to the speed of existing FHE neural network inference algorithms, and limitations of FHE itself. Previous work uses narrow or shallow DCNNs on low-resolution data, often using nonstandard activation functions, since FHE can only evaluate polynomials. Furthermore, it is challenging to ensure that polynomial approximations of activation functions are suitably accurate.
Key contributions of this work are summarized as follows:
* We design and implement efficient homomorphic convolution and pooling algorithms, which have been parallelized and handle large inputs and channels via sharding techniques.
* We apply these algorithms to construct three families of ResNet architectures, achieving the highest homomorphically evaluated accuracy on CIFAR-10 and ImageNet-1k while reducing the inference latency relative to the previous state-of-the-art. We also do not observe any degradation of encrypted model accuracy relative to its unencrypted counterpart.
* We propose a training technique to reduce the input range to our activation functions by penalizing the kurtosis of the distributions of BatchNorm outputs, allowing efficient homomorphic polynomial approximation of the GELU activation function.
## 2 Background
**Homomorphic encryption.** RNS-CKKS [6; 7] is an FHE scheme that supports arithmetic over encrypted vectors of fixed-point numbers. Ciphertexts in this scheme are elements in the ring \(R_{Q}^{2}\), where \(R_{Q}=\mathbb{Z}_{Q}[x]/(x^{2N}+1)\) and \(Q\) is a large integer, and \(2N\) is called the _ring dimension_. Each such ciphertext has \(N\)_slots_, each of which stores a single real number, so it is useful to conceive of a ciphertext as a vector. Ciphertext vectors support vectorized addition and multiplication operations, as well as cyclic rotations. We pack images into RNS-CKKS ciphertexts.
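To make the packing algorithms in Section 4 concrete, the following is a plaintext stand-in for the three ciphertext primitives used throughout this paper (slot-wise addition, slot-wise multiplication, and cyclic rotation). This is purely illustrative numpy code, not OpenFHE API; the slot count is an arbitrary assumption.

```python
import numpy as np

# A plaintext stand-in for an RNS-CKKS ciphertext: a fixed-length vector of
# reals supporting slot-wise add/multiply and cyclic rotation. Masking is
# just a slot-wise multiply by a 0/1 vector.
N = 16  # slot count; real schemes use 2**14 to 2**16

def rot(v, r):
    """Cyclic left-rotation by r slots (negative r rotates right)."""
    return np.roll(v, -r)

a, b = np.arange(N, dtype=float), np.ones(N)
c = a * b + rot(a, 2)   # slot-wise multiply, add, and one rotation
```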
Each ciphertext has a _level_, or maximum number of multiplications that can be applied before decryption error becomes too high; each multiplication reduces the level by one. The ciphertext level is restored through _bootstrapping_, though this is a time-consuming operation to be used sparingly.
**Threat model.** The threat model assumed is similar to previous PPMLs [5; 15]. We encrypt the input image but not the model weights. A client homomorphically encrypts data it wishes to send, which is then sent to a server for processing. The server performs inference on the encrypted data directly, sending back the encrypted inference result to the client. Since it is assumed that only the client holds the secret key, only they can decrypt the result, which guarantees privacy from the server. Because the server does not see the decrypted inference result, the Li–Micciancio attack [16] is not applicable and we do not need to take noise flooding into account in our parameter selection.
## 3 Related Work
Early work on encrypted machine learning evaluated narrow and shallow CNNs with nonstandard activation functions on low-resolution data [5; 10]. Recent papers have begun evaluating larger CNNs with standard design features on encrypted data. The prior works on PPML most similar to ours are Multiplexed Parallel Convolutions [15] and TileTensors [1]. Multiplexed Parallel Convolutions homomorphically evaluates deep but narrow CNNs with standard activation functions on low-resolution data. TileTensors homomorphically evaluates shallow CNNs with nonstandard activation functions on high-resolution data. In this work, we homomorphically evaluate wide and deep CNNs with standard activation functions on high-resolution data.
TileTensors uses concepts similar to our sharding approach to perform inference on \(224\times 224\) images using a modified AlexNet. They rely on shallow CNNs and do not perform the bootstrapping necessary to incorporate standard activation functions, instead relying on the same nonstandard activation function used in CryptoNets [10] and LoLa Nets [5], which is unsuited for DCNNs.
We improve on Multiplexed Parallel Convolutions, hereafter referred to as the multiplexed ResNet family, by supporting high-resolution images and wide channels that do not fit into a single ciphertext, as well as by simplifying the packing. We also introduce a novel training regularization technique, enabling more efficient homomorphic evaluation of non-linear activations. Our implementation performs encrypted inference on a multiplexed ResNet-20 architecture \(4.6\times\) faster than Ref. [15]. We homomorphically evaluate wide ResNet architectures not supported by the multiplexed algorithms, and achieve significantly higher accuracy than multiplexed architectures on standard datasets.
## 4 Homomorphic Neural Network Operators
Algorithms have been carefully designed to minimize the number of encrypted multiplication and rotation operations, and hence latency. An _image_ consists of many _channels_. All dimensions are assumed to be powers of two, and each channel is assumed to be square in shape. The approach is adaptable to dimensions that are not powers of two with appropriate rescaling or zero padding. Given an image with \(c\) channels of size \(m\times m\), we homomorphically encrypt and represent it with RNS-CKKS vectors. To encrypt an image into a ciphertext vector of size \(m^{2}c\), each channel \(M^{i}\) is represented in row-major order, and the channels are concatenated to obtain a single plaintext vector.
**Sharding and encrypting an image.** In RNS-CKKS, the storage capacity of a single ciphertext is determined by the ring dimension of the scheme, and is typically in the range \(2^{14}\) to \(2^{16}\). If a \(c\times m\times m\) tensor does not fit into a single ciphertext, channels are spread across _multiple_ ciphertexts, such that each ciphertext stores a subset of channels. Here, each ciphertext vector is called a _shard_, and the maximum amount of data storable in a shard is called the _shard size_. The performance of the scheme degrades with increasing ring dimension, so increasing the ring dimension to avoid sharding would negatively impact the efficiency of encrypted inference.
We distinguish the two cases of _image shards_ and _channel shards_. For _image shards_, a shard is large enough to hold at least one channel (\(m^{2}\leq s\)), but multiple shards are needed to store all channels (\(m^{2}c>s\)). See Figure 1(a) for an example of image shards. For _channel shards_ each channel must be split up across multiple shards (\(m^{2}>s\)), so that each shard contains a set of consecutive rows from a single channel. See Figure 1(b).
**Duplicating and permuting channels.** If an image does not fill a shard, its channels are _duplicated_. When \(s>m^{2}c\), we define a _duplication factor_ given by \(d=s/(m^{2}c)\), and place \(d\) copies of each channel when concatenating them together. \(d\) is tracked with the encrypted image as metadata. Our implementation of average pooling can _permute_ input channels. If one tracks the channels' order with a permutation defining the correct order, subsequent convolution operations can also be computed correctly. Therefore, we attach a channel permutation as metadata to an encrypted image.
### Convolution
We describe how to homomorphically convolve a single matrix with a single kernel, using same padding and a stride of \(1\); this does not change the channel's dimensions. Convolution is typically thought of as sliding a kernel over a matrix. However, one may also think of convolution as fixing the kernel, and sliding the matrix, which is a more useful visual in what follows. We formalize this observation and use it to compute convolutions. Let \(\mathcal{S}_{k,\ell}\) denote the operator that shifts the rows of a matrix \(M\) up by \(k\) and its columns left by \(\ell\). \(\mathcal{S}_{k,\ell}\) fills in zeros when elements are shifted off the matrix. Then, for a \(\kappa\times\kappa\) kernel \(K\):
\[M*K=\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{\kappa/2}K_{k,\ell} \cdot\mathcal{S}_{k,\ell}(M). \tag{1}\]
See Figure 2(b). \(\mathcal{S}_{k,\ell}\) is implemented homomorphically: shifting a row-major matrix by one column is done by rotating the ciphertext vector by \(1\), while shifting by a row is done by rotating by \(m\). Wrap-around elements are zeroed out by multiplying the ciphertext vector with an appropriate binary mask. This allows us to homomorphically compute \(\mathcal{S}_{k,\ell}(M)\) for any shifts \(k\) and \(\ell\). To multiply \(\mathcal{S}_{k,\ell}(M)\) by the scalar \(K_{k,\ell}\), we create a vector of size \(m^{2}\) and multiply \(\mathcal{S}_{k,\ell}(M)\) elementwise with this vector. In practice, the multiplications for shift masking and those for kernel element multiplication are combined non-homomorphically before being applied homomorphically.
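The following plaintext sketch mirrors the shift operator and Equation (1) on a flattened row-major channel, assuming an odd kernel size; it uses only the two ciphertext primitives described above (rotation and mask-multiplication), so the structure maps one-to-one onto the homomorphic version.

```python
import numpy as np

def shift(flat, k, l, m):
    """S_{k,l}: shift an m-by-m row-major channel up by k rows and left by
    l columns, zeroing wrapped-around entries. One rotation by k*m + l
    slots followed by one binary-mask multiplication."""
    v = np.roll(flat, -(k * m + l))
    mask = np.zeros((m, m))
    r0, r1 = max(0, -k), m - max(0, k)   # rows that survive the shift
    c0, c1 = max(0, -l), m - max(0, l)   # columns that survive the shift
    mask[r0:r1, c0:c1] = 1.0
    return v * mask.ravel()

def conv_plain(flat, K, m):
    """Equation (1): sum of kernel-weighted shifts (odd kernel size)."""
    kappa = K.shape[0]
    out = np.zeros_like(flat)
    for k in range(-(kappa // 2), kappa // 2 + 1):
        for l in range(-(kappa // 2), kappa // 2 + 1):
            out += K[k + kappa // 2, l + kappa // 2] * shift(flat, k, l, m)
    return out
```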
**With a single shard.** Recall that to convolve a \(c\)-channel image with a single filter, \(c\) matrix convolutions are individually computed, and the results are summed. An image is typically convolved with multiple filters to produce multiple channels. Convolutions are computed in parallel all at once.
Figure 1: Illustrations of image sharding and channel sharding.
Given an image \(M\), denote \(M^{f}_{ij}\) as the \((i,j)\)-th element in the \(f\)-th channel of \(M\). Filters \(K\) ordinarily have dimensions \(c_{i}\times c_{o}\times\kappa\times\kappa\), so that \(K^{fg}_{ij}\) is the \((i,j)\)-th element in the kernel convolved with the \(f\)-th input channel used to compute the \(g\)-th output channel. We begin with a \(1\times 1\) kernel size, in which case \(K^{fg}\) is the single-element kernel applied to the \(f\)-th input channel, to compute the \(g\)-th output channel. We further assume that \(M\) fits in exactly one shard, and that \(c_{i}=c_{o}=c\), so that \(M*K\) also occupies one shard. Then the \(g\)-th channel of \(M*K\) is given by Equation 2:
\[(M*K)^{g}=\sum_{r=0}^{c-1}K^{r+g,g}\cdot M^{r+g} \tag{2}\] \[\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot M^{r+g} \tag{3}\]
where index arithmetic above is modulo \(c\). We compute all \(c\) output channels simultaneously. Given \(0\leq r<c\), the \(r\)-th _partial convolution_ is defined in Equation 3. The full convolution is obtained by summing over partial convolutions:
\[M*K=\sum_{r=0}^{c-1}\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot M^{r+g}=\sum_{r=0} ^{c-1}\left(\bigparallel_{g=0}^{c-1}K^{r+g,g}\cdot\bigparallel_{g=0}^{c-1}M^ {r+g}\right). \tag{4}\]
See Figure 2(a) for a simple illustration of summing partial convolutions. Each summand corresponds to a single rotation of the ciphertext \(M\) by \(r\cdot m^{2}\) positions.
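A plaintext sketch of Equations (2)-(4) for \(1\times 1\) kernels follows, reusing the numpy stand-in from above; each loop iteration is one ciphertext rotation plus one plaintext-mask multiplication, exactly as in the homomorphic algorithm.

```python
import numpy as np

def conv_1x1_single_shard(v, K, c, m):
    """Equation (4) for 1x1 kernels: c output channels from one shard using
    c rotations. v holds c row-major channels of size m*m concatenated;
    K[f, g] is the scalar kernel from input channel f to output channel g."""
    s = m * m
    out = np.zeros_like(v)
    for r in range(c):
        # Plaintext vector of kernel scalars, aligned with the rotated shard:
        # output channel g sees input channel (r + g) mod c after rotation.
        kvec = np.concatenate(
            [np.full(s, K[(r + g) % c, g]) for g in range(c)])
        out += kvec * np.roll(v, -r * s)   # rotate by r*m^2 slots, multiply
    return out
```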
When working with larger kernels, the prior approaches combine to compute the \(g\)-th output channel:
\[(M*K)^{g}=\sum_{r=0}^{c-1}\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{ \kappa/2}K^{r+g,g}_{k,\ell}\cdot\mathcal{S}_{k,\ell}(M^{r+g}). \tag{5}\]
Rotations \(\mathcal{S}_{k,\ell}(M^{r+g})\) are computed once and cached. As with \(1\times 1\) kernels, we use partial convolutions to compute all \(c\) channels at once.
Rather than directly implement strided convolution as in Ref. [15], we instead compose an unstrided convolution with downsampling described in Section 4.2. This preserves the row-major order format and avoids multiplexed packing, and increases efficiency, as the multiplexed convolution algorithm of Ref. [15] has a multiplicative depth of 2, while we only use a single multiplicative level.
Figure 2: (a) Partial convolution computation for a \(4\)-channel image convolved with a \(1\times 1\) kernel. (b) A single convolution computed by shifting the matrix. (c) Shifting rows from channel shards into adjacent ones.
**With image shards.** Let \(M\) be an image of dimension \(c_{i}\times m\times m\), split across \(t\) shards, denoted as \([M]_{0},\dots,[M]_{t-1}\), implying a shard size \(s=\frac{m^{2}c_{i}}{t}\). Suppose we want to convolve \(M\) with filters \(K\) with dimensions \(c_{i}\times c_{o}\times\kappa\times\kappa\). Then the \(v\)-th output shard, \([M*K]_{v}\), is computed as:
\[[M*K]_{v}=\sum_{u=0}^{t-1}[M]_{u}*K^{\iota(u),\iota(v)}, \tag{6}\]
where \(\iota(u)\) is the index interval \(\iota(u)=[z\cdot u:z\cdot(u+1)]\), and \(z=s/m^{2}\), or the number of channels per shard. Intuitively, each single convolution in the summand above is computed using the single-shard approach of Section 4.1, slicing \(K\) accordingly, and summing up the results. With a shard size of \(s\), \(M*K\) is packed into \(c_{o}m^{2}/s\) shards, and \(v\) ranges over these shards.
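A compact sketch of the double loop in Equation (6) is given below; `conv_single_shard` is a placeholder for any single-shard convolution routine (e.g., the \(1\times 1\) version above specialized to \(z\) channels), and the shapes are our assumptions for illustration only.

```python
import numpy as np

def conv_image_shards(shards, K, z, conv_single_shard):
    """Equation (6): each output shard is the sum over input shards, each
    convolved with the matching (z x z)-channel slice of the filter bank K.
    `shards` is a list of t vectors each holding z channels; K has shape
    (c_in, c_out) of per-channel kernels."""
    t = len(shards)
    c_out = K.shape[1]
    out = []
    for v in range(c_out // z):                       # output shard index
        acc = np.zeros_like(shards[0])
        for u in range(t):                            # input shard index
            acc += conv_single_shard(shards[u],
                                     K[z * u:z * (u + 1), z * v:z * (v + 1)])
        out.append(acc)
    return out
```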
**Single shard with duplication and permutation.** Convolution must also work with a shard containing \(d\)-duplicated channels. Filters \(K\) could be duplicated accordingly, but we instead index into \(K\) \(d\) times when computing \(M*K\). Channels can also be permuted by pooling (see Section 4.2). In this case, the image passed from the previous layer is assumed to carry a permutation \(\tau\) defining the correct channel order. To compute a convolution using this permutation, any time we were to index into the filter \(K\) at input channel \(i\) (so \(K^{i}\)), we instead index into \(K\) at \(\tau(i)\) (so \(K^{\tau(i)}\)).
**With channel shards.** Convolving a channel-sharded image results in a channel-sharded image. Output channels are computed independently from one another, so we initially focus on convolving a shard of a single channel with a single kernel. Let \(M^{f}\) be the \(f\)-th input channel of image \(M\), which we convolve with a single kernel \(K\). Let \([M^{f}]_{u}\) be the \(u\)-th shard. We cache all _cyclic_ rotations \(\mathcal{S}_{k,\ell}([M^{f}]_{u})\), for \(k,\ell\) ranging over the indices of \(K\). \([M^{f}*K]_{v}\) is computed from the cached rotations of the input shards.
Shifting channels requires shifting all associated shards simultaneously. Shifting columns is accomplished by shifting each shard independently. When shifting rows, one needs to shift rows of one shard into an adjacent shard. Each row shift is constructed from two cached rotations (with the exception of the first and last shards). See Figure 2(c), which shows how rows are shifted between shards.
Each output channel is computed by summing over row and column shifts, and each summand is itself a sum of two kernel-masked shards. That is:
\[[M^{f}*K]_{v}=\sum_{k=-\kappa/2}^{\kappa/2}\sum_{\ell=-\kappa/2}^{\kappa/2} \mathfrak{m}_{k,\ell}(K_{k,\ell})\cdot\mathcal{S}_{k,\ell}([M^{f}]_{v})+ \overline{\mathfrak{m}_{k,\ell}}(K_{k,\ell})\cdot\mathcal{S}_{k,\ell}([M^{f}] _{v+\operatorname{sign}k}) \tag{7}\]
where \(\mathfrak{m}_{k,\ell}(x)\) is the vector of shard-size-many copies of \(x\), multiplied by the binary mask used in the shift operator \(\mathcal{S}_{k,\ell}\), and \(\overline{\mathfrak{m}_{k,\ell}}(x)\) is its complement. Then, to compute one shard \([M*K]_{v}\) of a single channel, we simply sum the shards \([M^{f}*K]_{v}\) over the input channels \(f\). Each such shard is computed independently and in parallel, concluding channel-sharded convolution.
### Average Pooling
We implement an average pooling operation with a \(2\times 2\) window; this increases the channel capacity of each shard by a factor of four. Our implementation preserves the format described previously, avoiding multiplexed packing used in Ref. [15], which does not rearrange pixels after downsampling.
**With image shards.** There are up to three steps involved with pooling: _downsample_, which computes the average pool but leaves the original number of shards intact; _consolidate_, which reduces the number of shards; and _duplicate_, which duplicates channels if there is a single shard remaining.
In the _downsampling_ step, we convolve each channel with a \(2\times 2\) kernel of \(1\)s (as we would with homomorphic convolutions). This replaces each \(2\times 2\) window in each channel with the sum of the elements in the window. Next, we want to select only one of four elements in the new \(2\times 2\) windows; we choose the top-left element. In the following we describe the operations on an individual channel \(M\); they generalize directly to all channels within each shard, applied simultaneously.
We first _horizontally reduce_ the elements in the channels of each shard, which is done by masking, rotating, and summing within each channel, as in Equation 8:
\[M^{\prime}=\sum_{i=0}^{m/2-1}(M\cdot\mathfrak{m}_{i})\ll i\qquad\text{(8)}\qquad M^{\prime\prime}=\sum_{j=0}^{m/2-1}(M^{\prime}\cdot\mathfrak{m}_{j})\ll 3j\cdot m/2 \tag{9}\]

where \(\mathfrak{m}_{i}\) is the binary mask that selects the elements in the \(2i\)-th column of each channel \(M\), and \(\ll\) (\(\gg\)) denotes ciphertext rotation to the left (right) by the indicated number of slots. Then, we _vertically reduce_ each \(M^{\prime}\), as in Equation 9, where \(\mathfrak{m}_{j}\) is the binary mask that selects the left half of the \(2j\)-th row in \(M^{\prime}\). See Figure 3.
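A plaintext sketch of the two reductions, as reconstructed above, is shown below. It operates on a single flattened channel after the \(2\times 2\) box-sum convolution; each loop iteration is one mask-multiplication and one rotation, matching the homomorphic cost model.

```python
import numpy as np

def downsample_pack(flat, m):
    """Equations (8)-(9): keep the top-left element of every 2x2 window of an
    m-by-m row-major channel and pack the m^2/4 survivors contiguously at the
    start of the channel, using only masks and rotations (m even)."""
    col = np.zeros(m * m); col[::m] = 1.0          # mask template: column 0
    Mp = np.zeros(m * m)
    for i in range(m // 2):                        # horizontal reduce, Eq (8)
        Mp += np.roll(flat * np.roll(col, 2 * i), -i)
    row = np.zeros(m * m); row[:m // 2] = 1.0      # mask: left half of row 0
    Mpp = np.zeros(m * m)
    for j in range(m // 2):                        # vertical reduce, Eq (9)
        Mpp += np.roll(Mp * np.roll(row, 2 * j * m), -(3 * j * m // 2))
    return Mpp
```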
After downsampling, each \(m\times m\) channel of the resulting shards contains only \(m/2\times m/2\) non-zero elements, all packed on the left-hand side. If we started with four or more shards, then we _consolidate_ the remaining non-zero elements into a quarter as many shards. This is done by rotating the shards from the previous step, and summing each group of four consecutive shards.
\[S=S_{0}+(S_{1}\gg m^{2}/4)+(S_{2}\gg 2m^{2}/4)+(S_{3}\gg 3m^{2}/4). \tag{10}\]
Starting with two image shards, we only have two summands in the above, and with one shard, there is no consolidation step. Consolidating shards results in channels out-of-order. See Figure 3. If we downsampled from two shards or fewer, then the resulting non-zero elements in the (single) consolidated shard would not fill up the entire shard, so we _duplicate_ the shard's channels:
\[S^{\prime}=S+(S\gg m^{2}/4)+(S\gg 2m^{2}/4)+(S\gg 3m^{2}/4). \tag{11}\]
With two shards, we duplicate by a factor of two, so the above would only have two summands.
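The consolidate and duplicate steps reduce to a handful of rotations and additions. The sketch below assumes, per the reconstruction of Equations (8)-(10) above, that each \(m^{2}\)-slot channel block of a downsampled shard holds its \(m^{2}/4\) valid elements at its start.

```python
import numpy as np

def consolidate(shards, m):
    """Equation (10): merge up to four downsampled shards into one. Shard u
    is rotated right by u*(m^2/4) slots, so its channels fill the u-th
    quarter of every channel block of the result."""
    q = m * m // 4
    return sum(np.roll(s, u * q) for u, s in enumerate(shards))

def duplicate(shard, m, d):
    """Equation (11): fill a shard whose channel blocks are only 1/d full by
    summing d copies rotated in steps of m^2/d slots (d = 4 gives the four
    summands of Eq. 11; d = 2 gives the two-summand case)."""
    step = m * m // d
    return sum(np.roll(shard, u * step) for u in range(d))
```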
**With channel shards.** Channel shards are downsampled individually, and every set of four consecutive shards is consolidated into one. In the edge case where the input image has one channel with two shards, we need to duplicate the resulting single shard by a factor of two. Pooling a channel-sharded image never results in an output with permuted channels.
### Other Layers
**Batch normalization.** At inference time, batch normalization is an affine transformation, which is expressible as additions and multiplications, so it can be implemented homomorphically. The multiplication and addition are folded into the kernel element multiplication and the bias addition of the preceding convolution, respectively.
**Linear.** Evaluation of a linear layer is a matrix multiplication of the previous layer's output with the weights of the linear layer. Each element of a matrix multiplication is computed as a dot product. The dot product of one vector with another is computed by first taking their elementwise product, then summing the elements of the resulting vector. Elements of vector \(v\) are summed by rotating over its slots, and adding the rotated vector to the original one. The result is a vector whose elements are all \(\Sigma_{i}v_{i}\), and is obtained in logarithmically many rotations.
Figure 3: Steps involved in a pooling operation. Duplication is not depicted.
This yields a single activation of the linear layer's output; we repeat the procedure for each activation. The activations are then masked and summed into a single ciphertext.
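The rotate-and-sum reduction at the heart of this step is sketched below on plain vectors; for \(N\) slots (a power of two) it uses \(\log_{2}N\) rotations, as stated above.

```python
import numpy as np

def dot_all_slots(u, w):
    """Dot product of two packed vectors: slot-wise multiply, then a
    rotate-and-sum tree so that every slot ends up holding sum_i u_i*w_i.
    Requires the slot count to be a power of two."""
    v = u * w
    shift = 1
    while shift < len(v):
        v = v + np.roll(v, -shift)   # one ciphertext rotation + addition
        shift *= 2
    return v                         # every slot equals the dot product

u, w = np.arange(8.0), np.ones(8)
assert np.allclose(dot_all_slots(u, w), u.sum())
```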
ResNets often pool each \(m\times m\) input channel to a single pixel, and apply a linear layer at the end. In general, the pool could use a window size larger than \(2\times 2\), which we have not implemented directly. We fuse pool and linear into a _pool-linear_. The linear layer's weights are duplicated as though it were operating on channels of size \(m\times m\), and we divide by a normalization factor of \(m^{2}\).
**Gaussian Error Linear Unit (GELU).** Non-linear activation functions are computed in RNS-CKKS through polynomial approximation. The polynomial degree, and hence the latency, increases when the approximation must be accurate over a wide range. We introduce novel terms to the loss function during training to encourage hidden layer outputs to match the mean, variance, and kurtosis statistical moments of a Gaussian distribution, constraining the range over which the activation needs to be accurately computed. This allows more efficient low-degree polynomial approximation while minimally impacting model accuracy.
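One plausible form of such a moment-matching penalty is sketched below in PyTorch; the paper defers the exact coefficients and targets to its Appendix, so the weights `lam_*` and the overall form here are assumptions for illustration.

```python
import torch

def moment_penalty(h, lam_mean=1.0, lam_var=1.0, lam_kurt=1.0):
    """Auxiliary loss pushing a BatchNorm output `h` towards Gaussian
    moments: mean 0, variance 1, and fourth standardized moment 3
    (i.e. zero excess kurtosis). Weights are placeholder values."""
    mean = h.mean()
    var = h.var()
    z = (h - mean) / (var.sqrt() + 1e-8)
    kurt = (z ** 4).mean()
    return (lam_mean * mean ** 2
            + lam_var * (var - 1.0) ** 2
            + lam_kurt * (kurt - 3.0) ** 2)

# total_loss = task_loss + sum(moment_penalty(h) for h in bn_outputs)
```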
We use a GELU activation function since it is more amenable to polynomial approximations for fast homomorphic evaluation. We homomorphically compute a 59-degree polynomial approximation of GELU in a numerically stable way with a shallow arithmetic circuit by expanding the polynomial in a Chebyshev basis. Details on polynomial approximation of GELU and kurtosis regularization can be found in the Appendix.
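The approximation step can be sketched with numpy's Chebyshev utilities, as below; the input bound `B` is an assumption standing in for the range established by the kurtosis regularization, and this is a plaintext illustration rather than the paper's homomorphic evaluation code.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.special import erf

B = 8.0  # assumed bound on pre-activation inputs

def gelu(x):
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

# Degree-59 Chebyshev interpolant of GELU on [-B, B]: interpolate the
# rescaled function on [-1, 1]; chebval evaluates with the Clenshaw
# recurrence, the numerically stable order corresponding to a shallow
# arithmetic circuit.
coeffs = C.chebinterpolate(lambda t: gelu(B * t), 59)

x = np.linspace(-B, B, 1001)
max_err = np.max(np.abs(C.chebval(x / B, coeffs) - gelu(x)))
```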
## 5 Empirical Results
We use OpenFHE's implementation [3] of FHE with RNS-CKKS to implement the neural network operators described in Section 4 in C++, which are then thinly wrapped with Python bindings to build neural network architectures. Weights are loaded using PyTorch's API, though the approach is independent of deep learning framework. OpenMP is used to leverage parallelism from multicore CPUs. As our main focus is on fast encrypted inference of trained models rather than the unencrypted training process, we defer most of the details on the training configuration to the Appendix.
Experiments for ResNet-9 and multiplexed ResNets were run on a machine with a hyperthreaded AMD Ryzen Threadripper 3970X 32-core processor, 128 GB of memory, and an Ubuntu 22.04.2 operating system. Experiments for the encrypted ResNet-50 were run on a server with an AMD EPYC 7742 64-core processor, 800 GB of memory, and RHEL 7.9.
### Datasets
We perform image classification on CIFAR-10, CIFAR-100, and ImageNet, using various ResNets to evaluate the performance of our homomorphic neural network operators. CIFAR-10 and -100 contain \(32\times 32\) color images in 10 and 100 classes, respectively [12]. ImageNet-1k is a much larger scale dataset containing over 1.2 million high-resolution images with 1000 different classes [21], and is typically resized to \(224\times 224\) during inference, though this does not match our assumption that dimensions are powers of two. We evaluate two different models on ImageNet-1k resized to resolutions of both \(128\times 128\) and \(256\times 256\).
### Architectures
We modify DCNN architectures to decrease encrypted inference latency without adversely affecting model accuracy. We use \(2\times 2\) average pooling with stride \((2,2)\), and the GELU activation function. We train models with kurtosis regularization as described in the previous section, and more extensively in the Appendix.1.
Footnote 1: If using kurtosis-regularized GELU is not an option, such as when evaluating pre-existing models, our algorithms are compatible with any approach for computing ReLU over a wider range, such as higher-degree polynomial approximation or the approach in Ref. [14].
We homomorphically evaluate three classes of ResNets on CIFAR-10 and -100. We first evaluate the narrow deep multiplexed ResNet family used in the previous state-of-the-art for homomorphic DCNNs Ref. [15], as well as a wide ResNet-9 architecture taken from DAWNBench [8], and finally a fine-tuned version of the wide and deep ImageNet-1k ResNet-50 v1.5 [11]. The wide ResNet-9
and -50 achieve substantially higher accuracy than the multiplexed family, achieving a best accuracy of 94.7% and 98.3% on CIFAR-10, respectively, surpassing the 92.95% reported in Ref. [15] for a multiplexed ResNet-110 and the 92.8% we achieved for a multiplexed ResNet-56.
Our ImageNet-1k architecture is a modified ResNet-50 v1.5 [11] with GELU and average pooling. This is a wide architecture, using between 64 and 2048 channels. On ImageNet-1k, we train and evaluate ResNet-50 models at evaluation resolutions of \(128\times 128\) and \(256\times 256\). The \(256\times 256\) model requires both channel shards and image shards, while the \(128\times 128\) model only requires image shards. As such, this illustrates the trade-off between model accuracy and inference time as a function of image resolution. The resolution during training was set according to the FixRes [22] optimization, where the training resolution is \(3/4\) of the evaluation resolution to account for data augmentation.
### Encrypted Inference Discussion
For the encrypted ResNet-50, we used a RNS-CKKS ring dimension of \(2^{16}\) and shard size of \(2^{15}\) with 59-bit scaling factors and a multiplicative depth of 34. When evaluating the multiplexed ResNets and ResNet-9, we used a lower shard size of \(2^{14}\). This lower shard size trades slower initial layers for faster later layers and bootstrapping operations, and improved the encrypted latency for these narrower architectures. These parameters suffice for a standard 128-bit security level [2]. The distributions prior to GELU are analyzed in order to determine a safe bound for our polynomial approximations; see the Appendix for details. For each model, runtime experiments are collected for 25 inferences; for each run, the runtimes for each algorithm are summed, and then the average is displayed in Tables 2 and 3, where the quoted error is the standard deviation in the total runtime. ResNet-9 and -50 models, which allow the channel dimension to substantially grow, spend less relative time bootstrapping when compared to the multiplexed ResNet family.
During inference on ImageNet-1k, ResNet-50 at 128 resolution uses a maximum of 32 shards, and at 256 resolution uses a maximum of 128 shards. On CIFAR-10, ResNet-50 uses a maximum of 16 shards. Due to channel size, inference on 256 resolution requires the use of channel shards, and has a \(2.9\times\) slower latency. However, note that ResNet-50 on 256 resolution has a \(6.1\%\) higher accuracy, so in this case, using higher resolution images produces a better classifier.
The _logit residual_, which is the difference between the decrypted and unencrypted logits, generally forms a tight Gaussian distribution centered at zero. By using GELU and a small input range, we decreased the noise from bootstrapping and the polynomial approximation. This is reflected in the increased precision of the logit residual distributions, which have standard deviations at the \(10^{-4}-10^{-2}\) level when fit to a Gaussian; see Table 1 in the Appendix for more details. We ran 1000 inferences with ResNet-20 on CIFAR-10, and all encrypted predictions match the respective unencrypted predictions; this is an improvement over Ref. [15], where the encrypted classification accuracy is \(0.1-0.5\)% lower than the unencrypted accuracy. Furthermore, we examined the differences in the top-2 logits between the encrypted and unencrypted ResNet-20.
\begin{table}
\begin{tabular}{l l r r} \hline \hline Dataset & Model & Average Accuracy (\%) & Best Accuracy (\%) \\ \hline CIFAR-10 & ResNet-9 & \(94.5\pm 0.1\) & 94.7 \\ & ResNet-50 & \(98.3\) & 98.3 \\ & ResNet-20* & \(90.6\pm 0.3\) & 91.0 \\ & ResNet-32* & \(92.2\pm 0.2\) & 92.5 \\ & ResNet-44* & \(92.2\pm 0.1\) & 92.3 \\ & ResNet-56* & \(92.8\pm 0.2\) & 93.0 \\ & ResNet-110* & \(92.7\pm 0.2\) & 92.8 \\ \hline CIFAR-100 & ResNet-9 & \(74.9\pm 0.2\) & 75.3 \\ & ResNet-32* & \(66.6\pm 0.4\) & 67.0 \\ \hline ImageNet-1k & ResNet-50 @ 128 & \(74.1\) & 74.1 \\ & ResNet-50 @ 256 & \(80.2\) & 80.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model accuracy is averaged over five runs for all architectures except ResNet-50, and the quoted error is the standard deviation. The (*) represents our implementation of the multiplexed architectures found in Ref. [15]. Due to long training times, ResNet-50s are only trained once.
These differences show Gaussian standard deviations at the \(10^{-4}\) level. Thus, using kurtosis regularization and GELU allows us to perform faster and more reliable encrypted inference.
As further discussed in the Appendix, we determined that the logit error is mainly due to bootstrapping noise. By applying MetaBTS [4] to reduce bootstrapping noise we further increased logit precision by a factor of \(20\times\) at the expense of a \(1.7\times\) increase in latency.
## 6 Conclusion and Future Work
We have successfully constructed three families of ResNet architectures that may be evaluated homomorphically: 1) the multiplexed family of architectures [15], 2) the ResNet-9 bag-of-tricks architectures [8], and 3) the popular ResNet-50 architecture [11]. Models have been homomorphically evaluated on a variety of standard datasets, including CIFAR-10, CIFAR-100, and ImageNet-1k. We proposed a training time technique to regularize the range of inputs to the GELU activation function by penalizing the fourth order statistical moment of the outputs of the BatchNorm distributions; this technique allows us to efficiently approximate the GELU function with polynomials under homomorphic constraints. When runtimes are compared to the previously reported runtimes of the multiplexed family, we observe a speedup on the previous state-of-the-art by approximately \(4.6-6.5\times\) without any classification accuracy degradation. We also report the highest homomorphically encrypted accuracy on CIFAR-10 and ImageNet-1k of \(98.3\%\) and \(80.2\%\), respectively.
Future work includes extending our models to more advanced tasks, such as encrypted object detection with the YOLO [18] family of architectures and sensitive document analysis with encrypted transformers [23]. Parallelization in this work was achieved using multicore CPUs, but vectorized addition and multiplication operations on ciphertext vectors could be ported to GPUs (or other hardware accelerators) to further accelerate computation and minimize latency.
|
2301.09648 | Identification of galaxy shreds in large photometric catalogs using
Convolutional Neural Networks | Contamination from galaxy fragments, identified as sources, is a major issue
in large photometric galaxy catalogs. In this paper, we prove that this problem
can be easily addressed with computer vision techniques. We use image cutouts
to train a convolutional neural network (CNN) to identify catalogued sources
that are in reality just star formation regions and/or shreds of larger
galaxies. The CNN reaches an accuracy ~98% on our testing datasets. We apply
this CNN to galaxy catalogs from three amongst the largest surveys available
today: the Sloan Digital Sky Survey (SDSS), the DESI Legacy Imaging Surveys and
the Panoramic Survey Telescope and Rapid Response System Survey (Pan-STARSS).
We find that, even when strict selection criteria are used, all catalogs still
show a ~5% level of contamination from galaxy shreds. Our CNN gives a simple
yet effective solution to clean galaxy catalogs from these contaminants. | Enrico M. Di Teodoro, Josh E. G. Peek, John F. Wu | 2023-01-23T19:00:00Z | http://arxiv.org/abs/2301.09648v1 | # Identification of galaxy shreds in large photometric catalogs using Convolutional Neural Networks
###### Abstract
Contamination from galaxy fragments, identified as sources, is a major issue in large photometric galaxy catalogs. In this paper, we prove that this problem can be easily addressed with computer vision techniques. We use image cutouts to train a convolutional neural network (CNN) to identify catalogued sources that are in reality just star formation regions and/or shreds of larger galaxies. The CNN reaches an accuracy \(\sim 98\%\) on our testing datasets. We apply this CNN to galaxy catalogs from three amongst the largest surveys available today: the Sloan Digital Sky Survey (SDSS), the DESI Legacy Imaging Surveys and the Panoramic Survey Telescope and Rapid Response System Survey (Pan-STARSS). We find that, even when strict selection criteria are used, all catalogs still show a \(\sim 5\%\) level of contamination from galaxy shreds. Our CNN gives a simple yet effective solution to clean galaxy catalogs from these contaminants.
Sky surveys (1464) -- Catalogs (205) -- Astronomical techniques (1684) -- Convolutional neural networks (1938) -- Galaxy evolution (594)
## 1 Introduction
In the past thirty years, large blind surveys of the sky with modern telescopes have revolutionized our understanding of galaxy formation and evolution. From early optical surveys, like the Digitized Sky Survey (DSS), to the latest surveys, like the Sloan Digital Sky Survey (SDSS, Eisenstein et al., 2011), the Panoramic Survey Telescope and Rapid Response System Survey (Pan-STARRS, Chambers et al., 2016) or the Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys (Dey et al., 2019), a huge number of astronomical images in multiple photometric bands and in multiple epochs have been produced throughout the years and have allowed astronomers worldwide to study the properties of galaxies on large statistical samples.
A significant fraction of the science done with these surveys is carried out using large catalogs of objects extracted from the astronomical images, rather than with the images themselves. For this reason, over the years, a lot of effort has been put into developing reliable, efficient and fully-automated source finding algorithms, able to identify robustly sources and estimating their most important photometric parameters, like magnitudes, colors, redshifts and sizes. Well-tested codes, like the Source Extractor (SExtractor, Bertin & Arnouts, 1996), the Tractor (Lang et al., 2016) and ProFound (Robotham et al., 2018), are routinely used to extract source catalogs from astronomical images (see Masias et al., 2012, 2013, for a review of different source finding techniques tested in the years).
One of the problems that still affects modern photometric catalogs is the considerable presence of galaxy shreds, i.e. spatially extended galaxies are often broken down into multiple objects. It is clear that this issue is more prominent in relatively low-redshift galaxies, which can cover larger angular regions and can show several resolved star-formation regions in their disks. An example of this is illustrated in Figure 1 for the nearby galaxy UGCA 021. The leftmost panel shows all sources (white circles) in the field that are listed in the photometric catalog of the Legacy survey, based on source extraction with the Tractor. The vast majority of sources in the catalog are in reality star-formation regions within the disk of the galaxy. Catalogs can be partially cleaned by using more aggressive selection criteria, for example based on colors and sizes of detected sources. The central and right panels of Figure 1 show objects left after more stringent criteria are applied to Legacy and SDSS catalogs, respectively (see Section 3.1). Although many spurious detections have been removed, there are still more shreds of the galaxy (yellow circles) than genuine sources (red circles). Shreds can be more or less numerous in different photometric catalogs, as shown in the right panel for the SDSS catalog.
In this paper, we present and make available a new method to clean catalogs from galaxy shreds. A convolutional neural network (CNN) is trained on three-band optical images in order to identify whether objects in catalogs are galaxy fragments or genuine sources. The remainder of this paper is structured as follows. Section 2 describes the data used in this work and the machine-learning algorithm used for identifying galaxy shreds. We apply our CNN to some of the most complete photometric catalogs available nowadays and discuss our findings in Section 3. We summarize and conclude in Section 4. The code used in our analysis and the trained CNN model are publicly available on Github: [https://github.com/editeodoro/CNN_shreds](https://github.com/editeodoro/CNN_shreds).
## 2 Methods
### Data
We used three-band images from the Legacy Surveys to train and test our machine learning algorithm. The DESI Legacy Imaging Surveys (or Legacy, for simplicity, Dey et al., 2019) is a combination of three public projects with DESI, i.e. the DECam Legacy Survey (DECaLS), the Beijing-Arizona Sky Survey (BASS, Zou et al., 2017) and Mayall \(z\)-band Legacy Survey (MzLS). Legacy provides imaging of 14,000 deg\({}^{2}\) of the northern sky in three optical or near-infrared filters, i.e. \(g\)-\(r\)-\(z\) bands, covering most of the SDSS footprint with a comparable spatial resolution but significantly deeper integration times than SDSS.
Footnote 2: Available at data.sdss.org/sas/dr17/sdss/atlas/v1.
We put together a dataset for training/testing by starting from the full Legacy DR9 photometric catalog1. Because galaxy shreds will be dominant in low-redshift systems, we used the NASA-Sloan Atlas (NSA) to reduce the pool of candidate systems. The NSA is a catalog of spectroscopically confirmed galaxies at redshift \(z<0.15\)(Blanton et al., 2011). In particular, we used v1.0.1 of the NSA catalog2. We only kept objects in the Legacy catalog that are within 100 kpc from a NSA source, leaving us with \(\sim\)2,000,000 possible targets. Amongst these, we created a training set of 5,000 sources through visual inspection of their images.
Figure 1: Photometric sources around the spiral galaxy UGCA 021. In all panels, we show a color image in \(g\)-\(r\)-\(z\) band from the Legacy surveys. In the left panel, we show all sources in the Legacy DR9 catalog, the middle and right panels show sources in Legacy and SDSS DR17 catalogs, respectively, after some cleaning using stricter selection criteria, similar to those described in Section 3.1. In the middle and right panels, galaxy shreds are highlighted in yellow, while genuine galaxy sources in red. The only NSA spectroscopically-confirmed source in this field is the galaxy UGCA 021. Images have \(512\times 512\) pixels with a pixel size of \(0.262^{\prime\prime}\). The white-dashed square in the right panel is an example of a \(128\times 128\) pixel image used by our CNN.
We manually labelled each source as either a galaxy or a shred. The visual classification of sources took into account a number of features that allow a human brain to distinguish between a genuine galaxy and a shred; this includes, for example, the color, shape, size and coherence of a source with respect to the surrounding region of an image. We selected sources such that, in the final training set, objects are split approximately \(60\%-40\%\) between real galaxies and galaxy shreds.
To generate the images used by the deep learning algorithms, we downloaded image cutouts of our training objects from the Legacy survey viewer website3. We fetched images in the \(g\)-\(r\)-\(z\) filters in the Portable Network Graphics (PNG) format at a resolution of 0.262 arcsec pix\({}^{-1}\). In particular, we used \(128\times 128\) pixel images, corresponding to about \(34^{\prime\prime}\times 34^{\prime\prime}\) per side. Image cutouts were centered on the position centroid of each object in our training set.
Footnote 3: www.legacysurvey.org/viewer
### Convolutional Neural Networks
We adopted a machine-learning algorithm to identify galaxy shreds from multi-band images. Individual star-formation regions within a galactic disk have very distinctive morphological features with respect to a galaxy as a whole. Convolutional Neural Networks (Lecun et al., 1998; Lecun et al., 2015) are ideal to identify these features and classify objects based on their appearance. A CNN is a particular type of neural network that includes a number of layers performing convolution, which can be used to extract weighted features from input images for a given problem. In particular, our task is a simple binary classification problem, i.e. whether a target is either a proper galaxy or just a shred of a galaxy.
The architecture of the CNN used in this paper is illustrated in Figure 2. Our design is inspired by the VGG19 architecture (Simonyan and Zisserman, 2014), a commonly used CNN for large-scale image recognition, but with fewer convolutional layers and parameters. Our network includes 8 convolutional layers (red boxes) and 2 fully-connected layers (blue tall boxes), for a total of 10 layers and approximately 10.6 million trainable parameters. The dimension of inputs is \(128\times 128\times 3\), with the depth denoting the three image bands (\(g\)-\(r\)-\(z\)). Images go through four main stages of convolution with kernel sizes 3, 3, 3 and 5, and 32, 64, 128 and 512 filters, respectively. Multiple convolution layers with small convolution kernels increase the effective receptive field and add more representational flexibility to the model. A max-pooling layer (green boxes) follows convolution to downsample the matrices and to reduce the spatial size and the number of parameters. Sizes of max-pooling are 2, 2, 2 and 4, respectively. Dropouts (\(\rho=0.2\), blue boxes) are applied in each stage to discard unnecessary parameters and to prevent overfitting during the training of the CNN. Finally, the two fully-connected dense layers with 2048 and 1024 hidden units produce the binary classification into galaxy or shred.
For all convolutional layers and the first dense layer, we used a Rectified Linear Unit (ReLU, Nair and Hinton, 2010) activation function, i.e. \(f(x)=\max(0,\mathrm{x})\). The last output layer instead uses a "softmax" function
Figure 2: A schematic of the architecture of the convolutional neural network used in this work. Our CNN has 10 layers, including 8 2D convolutional layers (red boxes) and 2 fully-connected layers (blue tall boxes), with approximately 10.6M parameters. Max pooling and dropout operations are denoted as green and blue boxes, respectively. In convolutional layers, we indicate the size of the 2D windows and the number of output filters. In pooling stages, we also write the size of downsampling. Activation functions are indicated in the circles (R for ReLU and S for softmax function).
(Bishop, 2006), \(f(x_{i})=\exp(x_{i})/\sum_{j}\exp(x_{j})\), which provides a probability-like score for each class \(i\). Therefore, for each target, our CNN predicts an estimate (\(p_{\rm CNN}\)) of how likely the object is to be a shred. For our binary problem, we classify objects with \(p_{\rm CNN}\geq 0.5\) as shreds and objects with \(p_{\rm CNN}<0.5\) as galaxies. We optimize the CNN by minimizing the binary cross entropy loss.
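A hedged Keras sketch of this architecture is given below. The stage widths, kernel sizes, pooling sizes, dropout rate, and dense-layer sizes follow the description above; how exactly the 2048- and 1024-unit dense layers feed the softmax output is not spelled out in the text, so the final 2-unit head here is our assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn():
    # Four convolutional stages: (filters, kernel size, pooling size),
    # each with two Conv2D layers, max pooling, and 0.2 dropout.
    stages = [(32, 3, 2), (64, 3, 2), (128, 3, 2), (512, 5, 4)]
    model = models.Sequential()
    model.add(tf.keras.Input(shape=(128, 128, 3)))   # g-r-z cutouts
    for filters, ksize, pool in stages:
        model.add(layers.Conv2D(filters, ksize, padding="same",
                                activation="relu"))
        model.add(layers.Conv2D(filters, ksize, padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D(pool))
        model.add(layers.Dropout(0.2))
    model.add(layers.Flatten())
    model.add(layers.Dense(2048, activation="relu"))
    model.add(layers.Dense(1024, activation="relu"))
    # Assumed 2-way softmax head for the galaxy/shred decision.
    model.add(layers.Dense(2, activation="softmax"))
    return model
```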
We trained the CNN illustrated in Figure 2 using the dataset of 5,000 objects described in the previous section. During the training phase, we used a RMSprop optimizer with plain momentum and a learning rate of 0.001. A \(k\)-fold cross-validation with \(k=5\) was used to better evaluate how well the CNN performs on independent datasets. We first reserved 20% of the data for testing. The remaining 80% was split into five random subsets, each including a training set (80%) and a validation set (20%), which was used to benchmark the predictions of the CNN. Moreover, we used image augmentation to artificially increase the number of data in the training set: new input images were created by randomly rotating and/or flipping the original dataset, which also helps the CNN to learn translational and rotational symmetry (Dieleman et al., 2015). During training, we set a maximum number of 100 epochs, with an early stopping mechanism based on the trend of the loss function of the validation set, i.e. training stops when the validation loss does not decrease over 5 consecutive epochs. The CNN typically achieves convergence after 20-25 epochs.
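Continuing the sketch above, the training setup could look as follows. The momentum value, rotation range, batch size, and placeholder arrays `x_train`/`y_train`/`x_val`/`y_val` (one of the five cross-validation folds) are assumptions not fixed by the text; the learning rate, epoch limit, and patience match the description.

```python
from tensorflow.keras import callbacks, optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = build_cnn()
# The paper minimizes binary cross entropy; with a 2-unit softmax and
# one-hot labels this is the categorical form of the same loss.
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-3, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation by random rotations and flips of the training images.
augment = ImageDataGenerator(rotation_range=180,
                             horizontal_flip=True, vertical_flip=True)

stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                               restore_best_weights=True)

model.fit(augment.flow(x_train, y_train, batch_size=64),
          validation_data=(x_val, y_val), epochs=100, callbacks=[stop])
```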
Figure 3 shows the receiver operating characteristic (ROC) plot for our model, i.e. a curve of the true positive rate against the false positive rate for various CNN probability thresholds \(p_{\rm CNN}\). The true positive rate, also referred to as sensitivity or completeness, is \(\mathcal{C}\equiv\rm TP/(TP+FN)\), where TP is the number of correctly identified shreds and FN is the number of missed shreds. The false positive rate is \((1-\mathcal{S})\), related to the specificity \(\mathcal{S}=\rm TN/(TN+FP)\), where TN is the number of correctly identified galaxies and FP is the number of galaxies misclassified as shreds. The ROC can be used to evaluate the performance of a model: a good model will have a large area under the ROC curve (AUC), i.e. it will be able to maximize the true positive rate and, at the same time, to minimize the false positive rate. A perfect, ideal model would have an AUC of 1, while random guesses (grey dashed line in Figure 3) have \(\rm AUC=0.5\). Our CNN model, shown as a red thick line, has an \(\rm AUC=0.985\), indicating that our CNN has a high efficiency in discriminating between real galaxies and galaxy shreds. The accuracy \(\mathcal{A}=(\rm TP+TN)/(TP+TN+FP+FN)\) of our CNN model is \(0.97-0.98\).
## 3 Applications
In this section, we use our trained CNN to identify galaxy shreds in photometric catalogs from three well-known surveys: SDSS DR174, Legacy DR9 and Pan-STARSS PS1 DR25. We build galaxy catalogs from each survey using appropriate selection criteria and we investigate the level of contamination from galaxy shreds.
Footnote 4: Available at skyserver.sdss.org/dr17.
Footnote 5: Available at mastweb.stsci.edu/ps1casjobs.
### Example catalog selection
Photometric catalogs created with automated software usually include sky objects that can be either galaxies or stars. In addition to these real objects, they can include fragments of galaxies and other bad sources, for example image artifacts that are mistakenly catalogued as sources. Depending on the science goals, a list of galaxy candidates can usually be extracted from the entire catalog by imposing appropriate photometric and quality cuts. To test our CNN, we built a source sample by querying photometric catalogs from the SDSS, Legacy and Pan-STARSS surveys with a number of standard parameters.
Figure 3: The receiver operating characteristic (ROC) curve for our CNN model. Black thin lines denote the five cross-validation folds used, while the mean result is shown as a thick red line. The AUC for our model is 0.985. The dashed grey line shows the baseline for a random guess (AUC = 0.5).
As an example of a possible scientific case, we used selection criteria inspired by the cuts applied to select host galaxy candidates for the Satellites Around Galactic Analogs survey (SAGA, Geha et al., 2017). SAGA is searching for dwarf galaxies around Milky-Way-like hosts, thus it is especially susceptible to contamination by host shredding. We note that this is just a pedagogical example to show how our CNN could be usefully applied to real case studies. Our main sample selection was done by means of simple cuts in the surface brightness-magnitude and color-magnitude planes (Mao et al., 2021; Wu et al., 2022):
\[r\leq 21.0,\]
\[\mu_{\rm eff}+\sigma_{\mu}-0.7\left(r-14\right)>18.5,\]
\[(g-r)-\sigma_{gr}+0.06\left(r-14\right)<0.90,\]
where \(\mu_{\rm eff}\equiv r+2.5\log(2\pi R_{r,\rm eff}^{2})\) is the effective surface brightness in \(r\)-band, calculated from the extinction-corrected magnitude \(r\) and the half-light radius \(R_{r,\rm eff}\). In the above cuts, \((g-r)\) is the extinction-corrected color, while \(\sigma_{\mu}\) and \(\sigma_{gr}\) are the errors on the effective surface brightness and on the color, respectively.
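The three cuts translate directly into boolean masks over catalog columns, as in the numpy sketch below; the function name and argument conventions (magnitudes in mag, radius in arcsec) are our own illustrative choices.

```python
import numpy as np

def saga_like_cuts(r, g, R_eff, sigma_mu, sigma_gr):
    """Boolean mask implementing the three SAGA-inspired cuts above.
    Inputs are extinction-corrected magnitudes r and g, the r-band
    half-light radius R_eff, and the quoted uncertainties."""
    mu_eff = r + 2.5 * np.log10(2.0 * np.pi * R_eff**2)
    keep = (r <= 21.0)
    keep &= (mu_eff + sigma_mu - 0.7 * (r - 14.0) > 18.5)
    keep &= ((g - r) - sigma_gr + 0.06 * (r - 14.0) < 0.90)
    return keep
```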
Besides these magnitude, color, and surface brightness cuts, we also applied a number of selection and quality flags to start cleaning our catalogs. Here, we only describe the parameters for the Legacy catalog, but analogous cuts were applied to all three surveys. For the Legacy survey, we first used the morphological flags TYPE\(\neq\)PSF and TYPE\(\neq\)DUP to select only galaxies and to reject stars6. For the same reason, we also required that objects have a measured half-light radius in the \(r\) band (SHAPE_R \(>\) 0). A series of quality flags was then applied to define a sample of "good" galaxy targets, i.e. objects that can be well described in terms of a simple galaxy model (exponential, de Vaucouleurs or Sérsic profiles):
Footnote 6: See www.legacysurvey.org/dr9/catalogs for details on the catalog quantities.
NOBS \(\geq 1\), DERED_MAG \(\neq\)nan, ALLMASK \(=0\), FRACMASK\(<0.35\), FRACFLUX \(<4\), RCHISQ \(<10\), RCHISQ \(<4\) (any one band), FRACIN \(>0.7\) (any one band), and \(\sigma(\rm magnitude)<0.2\).
where the above criteria are applied to all three bands (\(g\)-\(r\)-\(z\)), unless otherwise noted. The first two criteria require the presence of good measurements in all bands, the third criterion is a standard quality mask for Legacy catalogs. The other criteria reject sources that are not well described by the model due, for example, to bad fits and/or considerable source blending. We note that these criteria for selecting "good" galaxy targets can be considered quite aggressive and should already remove many spurious detections.
Because we know that galaxy shreds will be found within a certain distance from the center of relatively low-redshift galaxies, in each catalog we only kept sources within a projected distance of 100 kpc from a spectroscopically-confirmed galaxy in the NSA catalog, similarly to what we did for the training dataset. Finally, because our CNN is trained with \(g\)-\(r\)-\(z\) band images from the Legacy surveys, we discard all sources that have either corrupted images or that are not covered in the Legacy footprint (for the SDSS and Pan-STARSS catalogs). This leaves us with three catalogs containing approximately 800,000 (SDSS and Pan-STARSS) or 700,000 (Legacy) galaxy candidates.
### Shreds in catalogs
We downloaded \(128\times 128\) color images from the Legacy survey centered on each source in the three catalogs. We fed our trained CNN with these images and obtained a binary galaxy-shred classification for each object. Figure 4 illustrates 15 randomly-selected examples of objects in each class from the Legacy catalog: the left group of panels shows sources classified as genuine galaxies, while objects identified as galaxy shreds are shown in the right panels. Red crosses in all images indicate the centroid position of each source, according to our galaxy catalogs. Images in Figure 4 confirm that our CNN is extremely efficient and powerful in this particular classification problem, being able to disentangle real galaxies from shreds in most cases.
The only questionable choice, from a human eye perspective, is the source highlighted with a red frame amongst the galaxy shreds. This is a flocculent star-forming galaxy that does not have a concentration of light in the central regions, which likely led the CNN to classify it as a simple shred. All other objects identified as shreds are, as expected, star-forming regions within larger discs that are misclassified as galaxies in the photometric catalogs. We note that our CNN in some cases is also able to deblend background galaxies that overlap with much closer foreground galaxies. An example of this can be seen in the first row of images amongst the real galaxy column in Figure 4. However, we stress that
our CNN is not purposefully trained for this and that several more advanced algorithms for galaxy deblending have been developed in recent years (e.g., Reiman and Gohre, 2019; Arcelin et al., 2021; Hausen and Robertson, 2022). As a matter of fact, from a visual inspection of several hundred sources classified by our CNN, we realized that most of the misclassified objects are actually compact high-redshift galaxies overlapping with low-redshift galaxies that our CNN labels as shreds.
In summary, we found that shreds make up approximately 5% of sources in the SDSS and Pan-STARRS catalogs, and 4% of sources in the Legacy catalog. Therefore, despite the strict SAGA-like criteria applied to build these catalogs, we still observed a non-negligible contamination from galaxy shreds. We stress that the percentage of these contaminants can significantly increase if more relaxed criteria are used to build galaxy catalogs. For instance, simply removing the quality cut on RCHISQ for the Legacy photometric selection (see Section 3.1) makes the fraction of shreds rise from \(\sim 4\%\) to \(\sim 7\%\).
## 4 Summary and Conclusions
Figure 4: Binary classification of sources into either a galaxy or a shred using the CNN of Figure 2. The left group of panels shows a random sample of targets identified as genuine galaxies, the right group of panels a random sample of targets classified as galaxy shreds. The source in a red frame highlights a (possibly) wrong classification, i.e. a galaxy likely mislabelled as a shred by the CNN.

In this work, we proposed a simple solution to a well-known problem that affects all photometric catalogs, i.e. the fact that extended galaxies are often shredded into multiple objects. The ability to quickly recognize galaxy shreds is fundamental to clean up large galaxy catalogs with millions of sources. To this end, we trained a 10-layer convolutional neural network (CNN) to classify objects as either genuine galaxies or shreds, starting from three-band (\(g\)-\(r\)-\(z\)) color images from the Legacy surveys. The CNN was able to reliably identify galaxy shreds, reaching an accuracy of \(\sim 98\%\) on the testing dataset. Our trained CNN model is made available to the community at [https://github.com/editeodoro/CNN_shreds](https://github.com/editeodoro/CNN_shreds). We stress that our CNN can be easily modified to be trained and to work with images from any other optical/infrared survey, for example with five-band images from the SDSS or Pan-STARRS surveys.
Such a CNN model can be useful for several scientific applications. We demonstrate the potential of this approach by applying our CNN to galaxy catalogs built with selection criteria analogous to those used for choosing targets for the recent SAGA survey. These criteria are particularly aggressive and should in theory already dismiss many contaminants. In particular, we built three galaxy catalogs, each containing some \(\sim 800\)K objects, starting from general photometric catalogs of the Legacy, SDSS and Pan-STARRS surveys. We then used the trained CNN to classify these photometric sources based on their color images. Our CNN returned a fraction of shreds of \(\simeq 5\%\) in each catalog, highlighting how a relatively large number of spurious detections still affects these galaxy catalogs. In conclusion, our work demonstrates that CNNs are a powerful and efficient tool to identify contaminants and remove them easily from galaxy catalogs.
EDT was supported by the US National Science Foundation under grant 1616177 and by the European Research Council (ERC) under grant agreement no. 101040751. This work made use of data from the Legacy Surveys, from the Sloan Digital Sky Survey IV (SDSS) and from the Panoramic Survey Telescope and Rapid Response System Survey (Pan-STARRS). The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404), the Beijing-Arizona Sky Survey (BASS; NOAO ID 2015A-0801), and the Mayall z-band Legacy Survey (MzLS; ID 2016A-0453). Funding for the SDSS IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eötvös Loránd University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. AstroPy (Astropy Collaboration et al., 2013, 2018), matplotlib (Hunter, 2007), TensorFlow (Abadi et al., 2015).
|
2305.03686 | Provable Preimage Under-Approximation for Neural Networks (Full Version) | Neural network verification mainly focuses on local robustness properties,
which can be checked by bounding the image (set of outputs) of a given input
set. However, often it is important to know whether a given property holds
globally for the input domain, and if not then for what proportion of the input
the property is true. To analyze such properties requires computing preimage
abstractions of neural networks. In this work, we propose an efficient anytime
algorithm for generating symbolic under-approximations of the preimage of any
polyhedron output set for neural networks. Our algorithm combines a novel
technique for cheaply computing polytope preimage under-approximations using
linear relaxation, with a carefully-designed refinement procedure that
iteratively partitions the input region into subregions using input and ReLU
splitting in order to improve the approximation. Empirically, we validate the
efficacy of our method across a range of domains, including a high-dimensional
MNIST classification task beyond the reach of existing preimage computation
methods. Finally, as use cases, we showcase the application to quantitative
verification and robustness analysis. We present a sound and complete algorithm
for the former, which exploits our disjoint union of polytopes representation
to provide formal guarantees. For the latter, we find that our method can
provide useful quantitative information even when standard verifiers cannot
verify a robustness property. | Xiyue Zhang, Benjie Wang, Marta Kwiatkowska | 2023-05-05T16:55:27Z | http://arxiv.org/abs/2305.03686v4 | # On Preimage Approximation for Neural Networks
###### Abstract
Neural network verification mainly focuses on local robustness properties. However, often it is important to know whether a given property holds globally for the whole input domain, and if not then for what proportion of the input the property is true. While exact preimage generation can construct an equivalent representation of neural networks that can aid such (quantitative) global robustness verification, it is intractable at scale. In this work, we propose an efficient and practical anytime algorithm for generating symbolic under-approximations of the preimage of neural networks based on linear relaxation. Our algorithm iteratively minimizes the volume approximation error by partitioning the input region into subregions, where the neural network relaxation bounds become tighter. We further employ sampling and differentiable approximations to the volume in order to prioritize regions to split and optimize the parameters of the relaxation, leading to faster improvement and more compact under-approximations. Evaluation results demonstrate that our approach is able to generate preimage approximations significantly faster than exact methods and scales to neural network controllers for which exact preimage generation is intractable. We also demonstrate an application of our approach to quantitative global verification.
Neural networks, abstraction, verification, linear relaxation
## I Introduction
Despite the remarkable empirical success of neural networks, guaranteeing their correctness, especially when using them as decision-making components in safety-critical autonomous systems [1, 2, 3], is an important and challenging task. Towards this aim, various approaches have been developed for the verification of neural networks, with extensive effort devoted to local robustness verification [4, 5, 6, 7, 8, 9, 10, 11, 12]. While local robustness verification focuses on deciding the absence of adversarial examples within an \(\epsilon\)-perturbation neighbourhood, an alternative approach for neural network analysis is to construct the preimage of its predictions [13, 14]. By characterizing the preimage symbolically in an abstract representation, e.g., polyhedra, one can perform more complex analysis for a wider class of properties beyond local robustness.
However, the exact preimage generation method of [13] takes time exponential in the number of neurons in a network. Meanwhile, the approximate preimage generation method proposed in [14] bypasses the intractability of exact preimage generation by leveraging symbolic interpolants [15, 16] for abstraction of neural network layers. However, due to the complexity of interpolation, the time to compute the abstraction also scales exponentially with the number of neurons in hidden layers. Therefore, more efficient computation methods for (symbolic abstraction of) preimages of neural networks are needed.
This paper makes the following novel contributions. We propose an efficient and practical _anytime_ algorithm for generating symbolic under-approximations of the preimage of piecewise linear neural networks as a union of disjoint polytopes. The algorithm assumes a hyperrectangle input domain and relies on linear relaxation based perturbation analysis (LiRPA) algorithms [9, 10, 11], applied backward from a polyhedron output set. Our algorithm partitions the input region into disjoint subregions, which can be approximated independently in parallel in a divide-and-conquer approach. To assess and optimize the volume, our method bypasses the computational cost of exact volume computation by making use of statistical estimation and differentiable approximations. As an application, we show how to soundly and effectively verify _global quantitative_ properties of neural networks. We take advantage of the efficiency of our algorithm to iteratively generate a high-quality under-approximation to the property, while invoking expensive exact computation only at the end of the algorithm. Finally, we conduct an empirical analysis of our method on a range of control and reinforcement learning tasks, showing significant gains in efficiency compared to exact preimage generation, and demonstrating verification of quantitative properties of vehicle parking and aircraft collision avoidance systems.
## II Related Work
Our work is related to a series of works on local robustness verification of neural networks. To address the scalability issues with _complete_ verifiers [4, 5, 8] based on constraint solving, convex relaxation [17] has been used for developing highly efficient _incomplete_ verification methods [6, 9, 10, 18]. Later works employed the branch-and-bound (BaB) framework [7, 19] to achieve completeness, using convex relaxation for the bounding procedure [11, 12, 20]. We adapt convex relaxation for efficient preimage approximation. There are also works that have sought to define a weaker notion of local robustness known as _statistical robustness_[21, 22], which requires that a certain proportion of points under some perturbation distribution around an input point are classified in the same way. This has the advantage of providing quantitative information about when adversarial robustness does not hold, or cannot be proven. Verification of statistical robustness is typically achieved by sampling and statistical guarantees [23, 24, 21, 25]. In this work, we apply our symbolic approximation approach to quantitative analysis of neural networks, while providing exact quantitative rather than statistical guarantees [26].
While neural network verification has primarily focused on _local_ robustness properties in the vicinity of a specific input point, there has been recent interest in _global_ robustness properties [27, 28, 29], which measure the robustness of a neural network over the entire input space/data distribution, rather than the vicinity of a specific point. Our work is within the scope of global property analysis, aiming at analyzing neural network behaviour in a larger input subspace which may contain semantically distinct inputs.
Another line of related works considers neural network abstraction to derive exact or approximate representations of neural networks, which are more amenable to analysis. Abstraction techniques have been widely applied for explanations [30], verification [31, 32], reachability analysis [33], and preimage approximation [14, 34]. [14] leverages symbolic interpolants [16] to compute preimage approximations; however, checking interpolation condition suffers from exponential complexity w.r.t. the number of hidden neurons. [34] uses mixed integer linear programming (MILP) to compute a preimage over-approximation with the intersection of a finite set of (cutting-plane) constraints. In contrast, we construct an under-approximation abstraction of the preimage using a disjoint set of polytopes by employing efficient linear lower/upper bounds on nonlinear neurons of the neural network, and the derived symbolic abstraction can be further applied to provide provable guarantees on quantitative properties.
## III Preliminaries
We use \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) to denote a feedforward neural network. For layer \(i\), we use \(\textbf{W}^{(i)}\) to denote the weight matrix for the layer, \(\textbf{b}^{(i)}\) the bias, \(h^{(i)}\) the pre-activation neurons, and \(a^{(i)}\) the post-activation neurons, such that we have \(h^{(i)}=\textbf{W}^{(i)}a^{(i-1)}+\textbf{b}^{(i)}\). We focus on ReLU neural networks where \(\text{ReLU}(h):=\max(h,0)\).
**Linear Relaxation of Neural Networks.** Nonlinear activation functions lead to the NP-completeness of the neural network verification problem [5]. To address such intractability, linear relaxation is used to transform the nonconvex constraints into linear programs. As shown in Figure 1, given _concrete_ lower and upper bounds \(\textbf{l}^{(i)}\leq h^{(i)}(x)\leq\textbf{u}^{(i)}\) on the pre-activation values of layer \(i\), the post-activation neurons of layer \(i\), \(a^{(i)}_{j}(x)=ReLU(h^{(i)}_{j}(x))\) can be bounded by the following linear lower and upper bounding functions w.r.t. \(h^{(i)}_{j}(x)\):
\[\begin{cases}0\cdot h^{(i)}_{j}(x)\leq a^{(i)}_{j}(x)\leq 0\cdot h^{(i)}_{j}(x),&\text{if }\textbf{u}^{(i)}_{j}\leq 0\text{ (inactive)},\\ h^{(i)}_{j}(x)\leq a^{(i)}_{j}(x)\leq h^{(i)}_{j}(x),&\text{if }\textbf{l}^{(i)}_{j}\geq 0\text{ (active)},\\ \alpha^{(i)}_{j}h^{(i)}_{j}(x)\leq a^{(i)}_{j}(x)\leq\frac{\textbf{u}^{(i)}_{j}}{\textbf{u}^{(i)}_{j}-\textbf{l}^{(i)}_{j}}\left(h^{(i)}_{j}(x)-\textbf{l}^{(i)}_{j}\right),&\text{otherwise (unstable)},\end{cases}\tag{1}\]

where \(\alpha^{(i)}_{j}\in[0,1]\) is a free parameter of the lower bound for unstable neurons. A neuron is _stable_ when \(\textbf{l}^{(i)}_{j}\geq 0\) or \(\textbf{u}^{(i)}_{j}\leq 0\), and _unstable_ otherwise.
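The relaxation in Equation (1) is straightforward to implement. The following numpy sketch (our own helper, not the paper's code) returns the per-neuron slopes and intercepts that are used later to assemble polytope constraints:

```python
import numpy as np

def relu_relaxation(l, u, alpha=None):
    """Per-neuron linear bounds for a = ReLU(h) given concrete bounds l <= h <= u.

    Returns (s_lo, s_up, t_up) with s_lo*h <= ReLU(h) <= s_up*h + t_up on [l, u];
    the lower bound has zero intercept in all three cases of Eq. (1).
    """
    l, u = np.asarray(l, dtype=float), np.asarray(u, dtype=float)
    active, inactive = l >= 0, u <= 0
    unstable = ~active & ~inactive
    # upper bound: identity if active, zero if inactive, else the chord
    # through (l, 0) and (u, u)
    s_up = np.where(active, 1.0,
                    np.where(unstable, u / np.maximum(u - l, 1e-12), 0.0))
    t_up = np.where(unstable, -s_up * l, 0.0)
    # lower bound: alpha * h with alpha in [0, 1] on unstable neurons
    if alpha is None:
        alpha = (u >= -l).astype(float)   # a common initialization heuristic
    s_lo = np.where(active, 1.0, np.where(unstable, alpha, 0.0))
    return s_lo, s_up, t_up
```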
**Polyhedra and Polytopes.** A polyhedron is defined as \(T=\{x\in\mathbb{R}^{d}\,|\,x\models\alpha(x)\}\), where \(T\) consists of all values of \(x\) satisfying the first-order logic (FOL) formula \(\alpha(x):=\bigwedge_{i=1}^{K}\psi_{i}(x)\), and each \(\psi_{i}\) is a linear half-space constraint. We use the term polytope to refer to a bounded polyhedron, that is, a polyhedron \(T\) such that \(\exists R\in\mathbb{R}^{>0}:\forall x_{1},x_{2}\in T\), \(\left\|x_{1}-x_{2}\right\|_{2}\leq R\) holds. The abstract domain of polyhedra [35, 9, 36] has been widely used for the verification of neural networks and conventional programs. An important type of polytope is the hyperrectangle (box), which is a polytope defined by a closed and bounded interval \([\underline{x}_{i},\overline{x}_{i}]\) for each dimension, where \(\underline{x}_{i},\overline{x}_{i}\in\mathbb{Q}\). More formally, using the linear constraints \(\phi_{i}:=(x_{i}\geq\underline{x}_{i})\land(x_{i}\leq\overline{x}_{i})\) for each dimension, the hyperrectangle takes the form \(\mathcal{C}=\{x\in\mathbb{R}^{d}|x\models\bigwedge_{i=1}^{d}\phi_{i}\}\).
## IV Problem Formulation
### _Approximate Preimage Generation_
We are interested in the problem of computing preimage abstraction for neural networks. Given a subset \(O\subset\mathbb{R}^{m}\) of the codomain, the preimage of a function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) is defined to be the set of all inputs \(x\in\mathbb{R}^{d}\) that are mapped to an element of \(O\) by \(f\). For neural networks in particular, the input is typically restricted to some bounded input region \(\mathcal{C}\subset\mathbb{R}^{d}\). In this work, we restrict the output set \(O\) to be a polyhedron, and the input set \(\mathcal{C}\) to be an axis-aligned hyperrectangle region \(\mathcal{C}\subset\mathbb{R}^{d}\). We now define the notion of a restricted preimage:
**Definition 1** (Restricted Preimage).: _Given a neural network \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), and an input set \(\mathcal{C}\subset\mathbb{R}^{d}\), the restricted preimage of an output set \(O\subset\mathbb{R}^{m}\) is defined to be the set \(f_{\mathcal{C}}^{-1}(O):=\{x\in\mathbb{R}^{d}|f(x)\in O\wedge x\in\mathcal{C}\}\)._
**Example 1**.: _To illustrate our problem formulation and approach, we introduce a vehicle parking task [37] as a running example. In this task, there are four parking lots, located in each quadrant of a \(2\times 2\) grid \([0,2]^{2}\), and a neural network with two hidden layers of 10 ReLU neurons \(f:\mathbb{R}^{2}\rightarrow\mathbb{R}^{4}\) is trained to classify which parking lot an input point belongs to. To analyze the behaviour of the neural network in the input region \([0,1]\times[0,1]\) corresponding to parking lot 1, we set \(\mathcal{C}=\{x\in\mathbb{R}^{2}|(0\leq x_{1}\leq 1)\land(0\leq x_{2}\leq 1)\}\). The restricted preimage \(f_{\mathcal{C}}^{-1}(O)\) of the set \(O=\{\boldsymbol{y}\in\mathbb{R}^{4}|\bigwedge_{i\in\{2,3,4\}}y_{1}-y_{i}\geq 0\}\) is then the subspace of the region \([0,1]\times[0,1]\) that is labelled as parking lot \(1\) by the network._
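For intuition, the relative size of this restricted preimage can be estimated by simple sampling; the sketch below assumes a callable `f` implementing the trained classifier (applied row-wise), which we do not reproduce here:

```python
import numpy as np

# `f` is assumed to be the trained 2-input, 4-output parking classifier.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=(10_000, 2))    # uniform samples of C = [0,1]^2
ys = f(xs)                                      # shape (10_000, 4)
in_O = np.all(ys[:, :1] >= ys[:, 1:], axis=1)   # y1 >= y2, y3, y4
print("fraction of C inside the restricted preimage:", in_O.mean())
```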
Unfortunately, the (restricted) preimage of an output set of a neural network is expensive to compute or represent exactly. Thus, we resort to deriving approximations, or _abstractions_ of the preimage, that we can efficiently manipulate and analyze. In particular, we focus on _provable under-approximations_ of the preimage. Given a first-order formula \(A\), \(\alpha\) is an under-approximation of \(A\) if it holds that \(\forall x.\alpha(x)\implies A(x)\). In our context, the restricted preimage is defined by the formula \(A(x)=(f(x)\in O)\land(x\in\mathcal{C})\), and we restrict to under-approximations \(\alpha\) that take the form of a disjoint union of polytopes.
**Definition 2** (Disjoint Union of Polytopes).: _A disjoint union of polytopes (DUP) is a FOL formula \(\alpha\) of the form \(\alpha(x):=\bigvee_{i=1}^{D}\alpha_{i}(x)\) where each \(\alpha_{i}\) is a polytope formula (conjunction of linear half-space constraints), with the property that \(\alpha_{i}\land\alpha_{j}\) is unsatisfiable for any \(i\neq j\)._
We will also refer to the set of points \(\mathcal{T}=\{x\in\mathbb{R}^{d}|\alpha(x)\}\) satisfying \(\alpha\) as a disjoint union of polytopes. The goal of our method is to generate a DUP under-approximation \(\mathcal{T}\) that maximizes the _volume_\(\text{vol}(\mathcal{T})\) covered by the under-approximation. In our method, we use the volume both to optimize the under-approximation, and as a stopping criterion when the under-approximation reaches a sufficient coverage. Disjoint unions of polytopes have the advantage that we can often analyze the polytopes independently, allowing us to leverage parallel computation. For example, to compute the volume of a DUP \(\mathcal{T}\), it suffices to compute the sum of volumes of each individual polytope \(T_{i}=\{x\in\mathbb{R}^{d}|\alpha_{i}(x)\}\), which is easier to compute.
### _Quantitative Properties_
One of the most important verification problems for neural networks is that of proving guarantees on the output of a network for a given input set [38, 39, 40].
**Definition 3** (Network Properties).: _Given a neural network \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), a precondition (input set) \(I\subseteq\mathbb{R}^{d}\) and a postcondition (output set) \(O\subseteq\mathbb{R}^{m}\), we say that the neural network satisfies the property \((I,O)\) if \(x\in I\implies f(x)\in O\)._
This definition is commonly used to encode _safety_ properties, where the "unsafe region" is the complement of \(O\). A notable example of a safety problem is that of _local robustness_, where \(I\) is a perturbation region \(\left\|x^{\prime}-x\right\|_{p}\leq\epsilon\) around a fixed input \(x\), and \(O\) is the output region corresponding to a particular class label. The goal is then to prove whether the property holds, or else find a sufficiently small \(\epsilon\in\mathbb{R}^{>0}\) such that the property holds. This is usually achieved by _forward_ computation methods, such as abstract interpretation [38], which compute over-approximations of the _output_ set corresponding to the given input set \(I\); if the over-approximation is contained within \(O\), then the property is verified. Alternatively, we could verify such a property via _backward_ computation, by computing an _under-approximation_ of the pre-image of \(O\)[14]; if the input set \(I\) is contained within the under-approximation, then the property is verified.
When we consider _global_ properties that cover a larger input region and may consist of semantically distinct inputs, this property formulation often becomes inadequate. Firstly, if we cannot completely verify safety of the whole region, it is often preferable to obtain a quantitative guarantee of what proportion of the inputs satisfy the output condition, rather than restricting the size of the input region [23]. Secondly, many global properties are naturally expressed in a quantitative manner. For example, for an aircraft collision avoidance system, we might expect that the vast majority, but not all of the typical input space (say, \(>90\%\)) should result in a clear of conflict (COC) output.
**Definition 4** (Quantitative Property).: _Given a neural network \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\), a measurable input set with non-zero measure (volume) \(I\subseteq\mathbb{R}^{d}\), a measurable output set \(O\subseteq\mathbb{R}^{m}\), and a rational proportion \(p\in[0,1]\) we say that the neural network satisfies the property \((I,O,p)\) if \(\frac{\text{vol}(f_{I}^{-1}(O))}{\text{vol}(I)}\geq p\). 1_
Footnote 1: In particular, the restricted preimage of a polyhedron under a neural network is Lebesgue measurable since polyhedra (intersection of half-spaces) are Borel measurable and NNs are continuous functions.
If the property \((I,O)\) holds, then the quantitative property \((I,O,1)\) holds with proportion \(1\), while quantitative properties provide more information when \((I,O)\) does not hold. Notice that forward over-approximations of the output cannot be used to directly verify quantitative properties, as the approximation is in the output space. On the other hand, backward under-approximations allow us to approximate \(\text{vol}(f_{I}^{-1}(O))\) using our preimage under-approximation. We will later show how to use our preimage approximation method to verify quantitative properties when the input set is a polytope and the output set is a polyhedron.
Borrowing from the definitions for non-quantitative neural network properties [41], we now define soundness and completeness of verification algorithms for quantitative properties:
**Definition 5** (Soundness).: _A verification algorithm \(QV\) is sound if, whenever \(QV\) outputs \(\text{True}\), the property \((I,O,p)\) holds._
**Definition 6** (Completeness).: _A verification algorithm \(QV\) is complete if (i) \(QV\) never returns \(\text{Unknown}\), and (ii) whenever \(QV\) outputs \(\text{False}\), the property \((I,O,p)\) does not hold._
## V Methodology
**Overview.** In this section, we present our _anytime_ algorithm for approximate preimage generation. The algorithm maintains and iteratively refines/grows a set of polytopes \(\mathcal{T}\) as a guaranteed under-approximation to the restricted preimage \(f_{\mathcal{C}}^{-1}(O)\), such that the volume of the approximation improves after each iteration. This is achieved by iteratively splitting the original input hyperrectangle \(\mathcal{C}\) via axis-aligned cuts into a set of hyperrectangle subregions, and approximating the preimage restricted to each subregion with a single polytope. As the subregions are disjoint, we obtain a DUP approximation after each iteration. The overall method is shown in Algorithm 1.
Our method consists of three main components. Firstly, in Section V-A, we show how to cheaply compute single polytope under-approximations to the restricted preimage, by adapting efficient linear relaxation methods. Next, in Section V-B, we introduce our refinement procedure that improves the approximation by partitioning a (sub)region into two subregions, which can then be approximated more accurately. We show how to choose the region to split and the input feature to cut on to maximize coverage (volume), leveraging parallel computation and Monte Carlo sampling for efficiency. In Section V-C, we further improve the method by optimizing the linear relaxation bounds to maximize volume coverage. Finally, in Section V-D, we show how our preimage generation method can be used to verify quantitative properties.
```
Input: Input region \(\mathcal{C}\), Output region \(O\), Volume threshold \(v\), Maximum iterations \(R\)
Output: Disjoint union of polytopes \(\mathcal{T}\)
1  \(T\leftarrow\text{GenUnderApprox}(\mathcal{C},O)\);
2  \(\widetilde{\text{vol}}_{T},\widetilde{\text{vol}}_{f_{\mathcal{C}}^{-1}(O)}\leftarrow\text{EstimateVol}(T),\text{EstimateVol}(f_{\mathcal{C}}^{-1}(O))\);
3  \(\text{Dom}\leftarrow\{(\mathcal{C},T,\widetilde{\text{vol}}_{f_{\mathcal{C}}^{-1}(O)}-\widetilde{\text{vol}}_{T})\}\);  // Priority queue; \(\mathcal{T}_{\text{Dom}}\) is the union of polytopes in Dom
4  while EstimateVol\((\mathcal{T}_{\text{Dom}})<v\) and Iterations \(\leq R\) do
5      \(\mathcal{C}_{\text{sub}},T,\text{Priority}\leftarrow\text{Pop}(\text{Dom})\);  // Subregion with highest priority
6      \([\mathcal{C}_{\text{sub}}^{1,l},\mathcal{C}_{\text{sub}}^{1,u},\ldots,\mathcal{C}_{\text{sub}}^{d,l},\mathcal{C}_{\text{sub}}^{d,u}]\leftarrow\text{SplitOnFeature}(\mathcal{C}_{\text{sub}})\);  // Bisect the selected subregion w.r.t. each feature
7      \([T^{1,l},T^{1,u},\ldots,T^{d,l},T^{d,u}]\leftarrow\text{GenUnderApprox}([\mathcal{C}_{\text{sub}}^{1,l},\ldots,\mathcal{C}_{\text{sub}}^{d,u}],O)\);  // Generate preimages in parallel
8      \([\widetilde{\text{vol}}_{T^{1,l}},\ldots,\widetilde{\text{vol}}_{T^{d,u}}]\leftarrow\text{EstimateVol}([T^{1,l},\ldots,T^{d,u}])\);
9      \(id\leftarrow\arg\max_{i}\left(\widetilde{\text{vol}}_{T^{i,l}}+\widetilde{\text{vol}}_{T^{i,u}}\right)\);  // Select the splitting feature
10     \(\widetilde{\text{vol}}_{f^{-1}_{\mathcal{C}_{\text{sub}}^{id,l}}(O)},\widetilde{\text{vol}}_{f^{-1}_{\mathcal{C}_{\text{sub}}^{id,u}}(O)}\leftarrow\text{EstimateVol}(f^{-1}_{\mathcal{C}_{\text{sub}}^{id,l}}(O)),\text{EstimateVol}(f^{-1}_{\mathcal{C}_{\text{sub}}^{id,u}}(O))\);
11     \(\text{Dom}\leftarrow\text{Dom}\cup\{(\mathcal{C}_{\text{sub}}^{id,l},T^{id,l},\widetilde{\text{vol}}_{f^{-1}_{\mathcal{C}_{\text{sub}}^{id,l}}(O)}-\widetilde{\text{vol}}_{T^{id,l}})\}\cup\{(\mathcal{C}_{\text{sub}}^{id,u},T^{id,u},\widetilde{\text{vol}}_{f^{-1}_{\mathcal{C}_{\text{sub}}^{id,u}}(O)}-\widetilde{\text{vol}}_{T^{id,u}})\}\);  // Disjoint partition of the input
12 return \(\mathcal{T}_{\text{Dom}}\)
```
**Algorithm 1** Preimage Under-Approximation
### _Polytope Under-Approximation via Linear Relaxation_
We begin by adapting linear relaxation techniques in order to cheaply generate valid under-approximations to the restricted preimage for a given input region \(\mathcal{C}\). Recall that LiRPA methods enable us to obtain linear lower and upper bounds on the output of a neural network \(f\), that is, \(\underline{\boldsymbol{\Lambda}}x+\underline{\boldsymbol{\mathsf{b}}}\leq f(x) \leq\overline{\boldsymbol{\Lambda}}x+\overline{\boldsymbol{\mathsf{b}}}\), where the linear coefficients depend on the input region \(\mathcal{C}\).
Now, suppose that we are interested in computing an under-approximation to the restricted preimage, given the input hyperrectangle \(\mathcal{C}=\{x\in\mathbb{R}^{d}|x\models\bigwedge_{i=1}^{d}\phi_{i}\}\), and the output polytope specified using the half-space constraints \(\psi_{i}(y)=(c_{i}^{T}y+d_{i}\geq 0)\) for \(i=1,...,K\) over the output space. For each such constraint, we append an additional linear layer at the end of the network \(f\), which maps \(y\mapsto c_{i}^{T}y+d_{i}\), such that the function \(g_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) represented by the new network is \(g_{i}(x)=c_{i}^{T}f(x)+d_{i}\). Then, applying LiRPA bounding to each \(g_{i}\), we obtain lower bounds \(\underline{g_{i}}(x)=\underline{a_{i}^{T}}x+\underline{b_{i}}\) for each \(i\), such that \(\underline{g_{i}}(x)\geq 0\implies g_{i}(x)\geq 0\) for \(x\in\mathcal{C}\). Notice that, for each \(i=1,...,K\), \(\underline{a_{i}^{T}}x+\underline{b_{i}}\geq 0\) is a half-space constraint
in the input space. We conjoin these constraints, along with the restriction to the input region \(\mathcal{C}\), to obtain a polytope
\[T_{\mathcal{C}}(O):=\{x|\bigwedge_{i=1}^{K}(\underline{g_{i}}(x)\geq 0) \wedge\bigwedge_{i=1}^{d}\phi_{i}(x)\} \tag{5}\]
**Proposition 1**.: \(T_{\mathcal{C}}(O)\) _is an under-approximation to the restricted preimage \(f_{\mathcal{C}}^{-1}(O)\)._
In our algorithm, we use CROWN [6] to generate the LiRPA bounds in parallel over the output polytope constraints \(i=1,...,K\), and store the resulting input polytope \(T_{\mathcal{C}}(O)\) as a list of constraints. This procedure is highly efficient, enabling us to employ it as a sub-routine LinearLowerBound in our overall algorithm (Line 4 of Algorithm 2).
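To make the construction concrete, the following self-contained numpy sketch implements a CROWN-style LinearLowerBound for a one-hidden-layer ReLU network. It is only a sketch under our own naming; the actual procedure uses CROWN [6] and handles arbitrary depth.

```python
import numpy as np

def linear_lower_bound(c, d, W1, b1, W2, b2, x_lo, x_hi):
    """Lower-bound g(x) = c @ f(x) + d, with f(x) = W2 @ ReLU(W1 @ x + b1) + b2,
    by a single linear function a @ x + b0 valid on the box [x_lo, x_hi]."""
    # concrete pre-activation bounds via interval arithmetic
    mid, rad = (x_lo + x_hi) / 2.0, (x_hi - x_lo) / 2.0
    ctr, amp = W1 @ mid + b1, np.abs(W1) @ rad
    l, u = ctr - amp, ctr + amp
    # per-neuron relaxation of ReLU (Equation 1), heuristic lower slope
    unstable = (l < 0) & (u > 0)
    s_up = np.where(l >= 0, 1.0,
                    np.where(unstable, u / np.maximum(u - l, 1e-12), 0.0))
    t_up = np.where(unstable, -s_up * l, 0.0)
    s_lo = np.where(l >= 0, 1.0, np.where(unstable, (u >= -l).astype(float), 0.0))
    # g(x) = lam @ a(x) + c @ b2 + d with lam = W2.T @ c; take the lower
    # relaxation of a_j where lam_j >= 0 and the upper one where lam_j < 0
    lam = W2.T @ c
    slope = np.where(lam >= 0, s_lo, s_up)
    inter = np.where(lam >= 0, 0.0, t_up)
    a = W1.T @ (lam * slope)
    b0 = (lam * slope) @ b1 + lam @ inter + c @ b2 + d
    return a, b0   # guarantees g(x) >= a @ x + b0 for all x in the box
```

Each returned pair \((a_{i},b_{0,i})\) then contributes the half-space constraint \(a_{i}^{T}x+b_{0,i}\geq 0\) of the polytope in Equation (5).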
### _Global Branching and Refinement_
As LiRPA performs crude linear relaxation, the resulting bounds can be quite loose, meaning that the polytope approximation \(T_{\mathcal{C}}(O)\) is unlikely to constitute a tight under-approximation to the restricted preimage. To address this challenge, we employ a divide-and-conquer approach that iteratively refines our under-approximation of the preimage. Starting from the single initial region \(\mathcal{C}\) represented at the root, our method generates a tree by iteratively partitioning the subregion \(\mathcal{C}_{sub}\) represented at a leaf node into two smaller subregions \(\mathcal{C}_{sub}^{l},\mathcal{C}_{sub}^{u}\) (via an axis-aligned bisection), which are then attached as children to that leaf node. In this way, the subregions represented by all leaves of the tree are disjoint, such that their union is the initial region \(\mathcal{C}\).
After each iteration in Algorithm 1, each leaf subregion \(\mathcal{C}_{sub}\) has an associated polytope, computed using LiRPA bounds in Algorithm 2, that approximates the restricted preimage in \(\mathcal{C}_{sub}\). Thus, the union of the polytopes corresponding to all leaves forms our refined, anytime DUP under-approximation \(\mathcal{T}\) to the preimage in the original region \(\mathcal{C}\). The algorithm terminates after a desired volume threshold \(v\) for \(\text{vol}(\mathcal{T})\) has been reached, or after a fixed maximum number of iterations (polytopes) has been reached.
Unfortunately, even with a moderate number of input dimensions \(d\), naively splitting along all dimensions quickly becomes computationally infeasible. For example, splitting a \(d\)-dimensional hyperrectangle using bisections along each dimension results in \(2^{d}\) subdomains to approximate. It thus becomes crucial to identify the subregion splits that have the most impact on the quality of the under-approximation. At each iteration, given the subregion tree, our algorithm must first select which leaf subregion to split, and then which input feature/dimension to bisect on, both of which will impact the refinement quality.
**Subregion Selection.** Searching through all leaf subregions at each iteration is too computationally expensive. Thus, we propose a subregion selection strategy that prioritizes splitting subregions according to the difference in volume between the exact restricted preimage \(f_{C_{sub}}^{-1}(O)\) and the (already computed) polytope approximation \(T_{\mathcal{C}_{sub}}(O)\) on that subdomain, that is:
\[\text{Priority}(\mathcal{C}_{sub})=\text{vol}(f_{C_{sub}}^{-1}(O))-\text{ vol}(T_{\mathcal{C}_{sub}}(O)) \tag{6}\]
This measures the gap between the polytope under-approximation and the optimal approximation, namely, the preimage itself. A larger value suggests that, by refining the approximation of the preimage in that subregion, we can improve the volume of the under-approximation in this subregion significantly. Crucially, since the polytopes in separate subregions are all disjoint, this corresponds directly to improving the volume of the overall DUP under-approximation.
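In an implementation, this prioritization is naturally maintained with a heap. The following Python sketch uses the standard library `heapq`; helper names are ours, and `estimate_priority` stands for the Monte Carlo estimate of Equation (6), sketched after Equations (9)-(10) below.

```python
import heapq, itertools

# heapq is a min-heap, so priorities are negated; the counter breaks ties so
# that regions themselves never need to be comparable.
_tie = itertools.count()

def push(heap, region, polytope, estimate_priority):
    pri = estimate_priority(region, polytope)   # vol(preimage) - vol(polytope)
    heapq.heappush(heap, (-pri, next(_tie), region, polytope))

def pop_most_promising(heap):
    neg_pri, _, region, polytope = heapq.heappop(heap)
    return region, polytope, -neg_pri
```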
Suppose that a particular leaf subdomain attains the maximum of this metric among all leaves, and we split it into two subregions \(\mathcal{C}_{sub}^{l},\mathcal{C}_{sub}^{u}\), which we approximate with polytopes \(T_{\mathcal{C}_{sub}^{l}}(O),T_{\mathcal{C}_{sub}^{u}}(O)\). As tighter intermediate concrete bounds, and thus linear bounding functions, can be computed on the partitioned subregions, the polytope approximation on each subregion will be refined compared with the single polytope restricted to that subregion.
**Proposition 2**.: _Given any subregion \(\mathcal{C}_{sub}\) with polytope approximation \(T_{\mathcal{C}_{sub}}(O)\), and its children \(\mathcal{C}_{sub}^{l},\mathcal{C}_{sub}^{u}\) with polytope approximations \(T_{\mathcal{C}_{sub}^{l}}(O),T_{\mathcal{C}_{sub}^{u}}(O)\) respectively, it holds that:_
\[\text{vol}(T_{\mathcal{C}_{sub}^{l}}(O))+\text{vol}(T_{\mathcal{ C}_{sub}^{u}}(O))\geq\text{vol}(T_{\mathcal{C}_{sub}}(O)) \tag{7}\] \[\text{Priority}(\mathcal{C}_{sub})\geq\text{Priority}(\mathcal{ C}_{sub}^{l})+\text{Priority}(\mathcal{C}_{sub}^{u}) \tag{8}\]
That is, we replace the leaf with maximal priority with two leaves with lower priority, which sum to at most the priority of the original leaf. In the next iteration, we may choose either of these leaves to split, or another leaf in the tree which now has maximal priority. Further, given the volume guarantee in Equation 7, we have the following Corollary:
**Corollary 1**.: _In each iteration of Algorithm 1, the volume of the polytope approximation \(\mathcal{T}_{Dom}\) does not decrease._
In practice, we may not be able to compute the volumes in Equation 6 exactly. On the one hand, computing the volume of a polytope is a computationally expensive task
requiring specialized tools [42], and we do not want to add such runtime cost during each iteration; more pertinently, we cannot compute (the volume of) the exact restricted preimage corresponding to the output specifications. Thus, we resort to statistical estimation of the volumes. Since subdomains \(\mathcal{C}_{sub}\) are hyperrectangles, it is easy to sample \(N\) points \(x_{1},...,x_{N}\) uniformly from the subdomain. Then we can employ Monte Carlo estimation for both the preimage and the generated polytope approximation:
\[\widehat{\text{vol}}(f_{\mathcal{C}_{sub}}^{-1}(O))=\frac{\sum_{i=1}^{N}\mathds{1}_{x_{i}\in f_{\mathcal{C}_{sub}}^{-1}(O)}}{N} \tag{9}\]

\[\widehat{\text{vol}}(T_{\mathcal{C}_{sub}}(O))=\frac{\sum_{i=1}^{N}\mathds{1}_{x_{i}\in T_{\mathcal{C}_{sub}}(O)}}{N} \tag{10}\]
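A sketch of these estimators, with `f` applied row-wise, `in_O` testing output membership, and the polytope encoded as \(Ax+b\geq 0\) (all names are ours):

```python
import numpy as np

def mc_estimates(f, in_O, A, b, box_lo, box_hi, n=10_000, seed=0):
    """Monte Carlo estimates of Equations (9) and (10) on one subregion."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(box_lo, box_hi, size=(n, len(box_lo)))
    frac_preimage = in_O(f(xs)).mean()                        # Eq. (9)
    frac_polytope = np.all(xs @ A.T + b >= 0, axis=1).mean()  # Eq. (10)
    return frac_preimage, frac_polytope
```

Multiplying the difference of the two fractions by the box volume of \(\mathcal{C}_{sub}\) recovers the priority of Equation (6).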
**Splitting Feature.** Once we have selected a subregion to refine, we then decide how to split the subregion. Given a subregion (hyperrectangle) defined by lower and upper bounds \(x_{i}\in[\underline{z}_{i},\overline{z}_{i}]\) for all dimensions \(i=1,...,d\), we split it into two subregions by bisecting along some feature \(i\). This splitting procedure will produce two subregions which are similar to the original subregion, but have updated bounds \([\underline{z}_{i},\frac{\underline{z}_{i}+\overline{z}_{i}}{2}],[\frac{\underline{z}_{i}+\overline{z}_{i}}{2},\overline{z}_{i}]\) for feature \(i\) instead. In order to determine the best feature/dimension to split on, we propose a greedy strategy. Specifically, for each feature, we generate a pair of polytopes via LiRPA for the two subregions resulting from the split, and choose the feature that results in the greatest total volume of the polytope pair. In practice, another commonly-adopted splitting heuristic is to select the dimension with the longest edge; that is, to select the feature \(i\) with the largest range: \(\arg\max_{i}(\overline{z}_{i}-\underline{z}_{i})\). Performance evaluation of these two splitting methods is presented in Section VI-B.
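The greedy rule can be sketched as follows; all helper names are illustrative.

```python
import numpy as np

def greedy_split(box_lo, box_hi, gen_under_approx, estimate_vol):
    """Bisect each input dimension in turn and keep the dimension whose two
    polytope under-approximations cover the most estimated volume."""
    best = None
    for i in range(len(box_lo)):
        m = 0.5 * (box_lo[i] + box_hi[i])
        lo_hi = box_hi.copy(); lo_hi[i] = m      # lower half: bounds [lo, m]
        hi_lo = box_lo.copy(); hi_lo[i] = m      # upper half: bounds [m, hi]
        T_l = gen_under_approx(box_lo, lo_hi)
        T_u = gen_under_approx(hi_lo, box_hi)
        vol = estimate_vol(T_l) + estimate_vol(T_u)
        if best is None or vol > best[0]:
            best = (vol, i, T_l, T_u)
    return best  # (volume, split dimension, lower/upper polytopes)
```

By contrast, the longest-edge heuristic is a single line: `i = int(np.argmax(box_hi - box_lo))`.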
**Example 2**.: _We revisit the vehicle parking problem in Example 1. Figure 2(b) shows the polytope under-approximation computed on the input region \(\mathcal{C}\) before refinement, where each solid line represents the cutting plane for each output specification (\(y_{1}-y_{i}\geq 0\)). Figure 2(c) depicts the refined approximation by splitting the input region along the vertical axis, where the solid and dashed lines represent the cutting planes for the two resulting subregions. It can be seen that the total volume of the under-approximation has improved significantly._
### _Local Optimization_
One of the key components behind the effectiveness of LiRPA-based bounds is the ability to efficiently improve the tightness of the bounding function by optimizing the relaxation parameters \(\mathbf{\alpha}\), via projected gradient descent. In the context of local robustness verification, the goal is to optimize the concrete lower or upper bounds over the (sub)region \(\mathcal{C}\)[10], i.e., \(\min_{x\in\mathcal{C}}\underline{\mathbf{\Lambda}}(\mathbf{\alpha})x+\underline{ \textbf{b}}(\mathbf{\alpha})\), where we explicitly note the dependence of the linear coefficients on \(\mathbf{\alpha}\). In our case, we are instead interested in optimizing \(\mathbf{\alpha}\) to refine the polytope under-approximation, that is, increase its volume. Unfortunately, as before, computing the volume of a polytope exactly is computationally expensive, and further does not allow for computation of the gradients with respect to \(\mathbf{\alpha}\).
To address this challenge, we propose a loss function to encode a differentiable relaxation of the polytope volume. We have seen that we can estimate the volume of a polytope using a set of samples \(x_{1},...x_{N}\) as \(\widehat{\text{vol}}(T_{\mathcal{C}_{sub},\mathbf{\alpha}}(O))=\frac{1}{N}\sum_{i =1}^{N}\mathds{1}_{x_{i}\in T_{\mathcal{C}_{sub},\mathbf{\alpha}}}(O)\), where we have highlighted the dependence of the polytope \(T_{\mathcal{C}_{sub}}(O)=\{x|\bigwedge_{i=1}^{K}\underline{g}_{i}(x,\mathbf{\alpha} _{i})\geq 0\wedge\bigwedge_{i=1}^{d}\phi_{i}(x)\}\) on \(\mathbf{\alpha}=(\mathbf{\alpha}_{1},...,\mathbf{\alpha}_{K})\), and \(\mathbf{\alpha}_{i}\) are the \(\alpha\)-parameters for the linear relaxation of the neural network \(g_{i}\) corresponding to the \(i^{\text{th}}\) half-space constraint in \(O\). However, this is still non-differentiable w.r.t. \(\mathbf{\alpha}\) due to the identity function. We now show how to derive a differentiable relaxation which is amenable to gradient-based optimization:
\[\widehat{\text{vol}}(T_{\mathcal{C}_{sub}}(O))=\frac{1}{N}\sum_{j=1}^{N} \mathds{1}_{x_{j}\in T_{\mathcal{C}_{sub},\mathbf{\alpha}}(O)} \tag{11}\]
\[=\frac{1}{N}\sum_{j=1}^{N}\mathds{1}_{\min_{i=1,...K}\underline{g}_{i}(x_{j}, \mathbf{\alpha}_{i})\geq 0} \tag{12}\]
\[\approx\frac{1}{N}\sum_{j=1}^{N}\sigma\left(\min_{i=1,...K}\underline{g}_{i}(x_ {j},\mathbf{\alpha}_{i})\right) \tag{13}\]
\[\approx\frac{1}{N}\sum_{j=1}^{N}\sigma\left(-\text{LSE}(-\underline{g}_{1}(x_{j},\mathbf{\alpha}_{1}),\ldots,-\underline{g}_{K}(x_{j},\mathbf{\alpha}_{K}))\right) \tag{14}\]
Fig. 2: Refinement and optimization for preimage approximation.

The second equality follows from the definition of the polytope \(T_{\mathcal{C}_{sub},\mathbf{\alpha}}(O)\); namely, a point is in the polytope if it satisfies \(\underline{g}_{i}(x_{j},\mathbf{\alpha}_{i})\geq 0\) for all \(i=1,...,K\), or equivalently, \(\min_{i=1,...,K}\underline{g}_{i}(x_{j},\mathbf{\alpha}_{i})\geq 0\). After this, we approximate the identity function using a sigmoid relaxation, where \(\sigma(y):=\frac{1}{1+e^{-y}}\), as is commonly done in machine learning to define classification losses. Finally, we approximate the minimum over specifications using the log-sum-exp (LSE) function. The log-sum-exp function is defined by \(\text{LSE}(y_{1},...,y_{K}):=\log(\sum_{i=1,...,K}e^{y_{i}})\), and is a differentiable approximation to the maximum function; we employ it to approximate the minimization by adding the appropriate sign changes. The final expression is now a differentiable function of \(\mathbf{\alpha}\). We employ this as the loss function in Algorithm 2 (Line 6), and optimize using projected gradient descent.
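As a PyTorch sketch, with `g_lower` standing in for the LiRPA lower bounds (which we do not reproduce here), the loss and one projected-gradient step read:

```python
import torch

def soft_volume_loss(g_lower, xs, alpha):
    """Differentiable surrogate of Equation (14); g_lower(xs, alpha) returns
    an (N, K) tensor of lower-bound values for the K output constraints."""
    g = g_lower(xs, alpha)
    soft_min = -torch.logsumexp(-g, dim=1)   # smooth minimum over constraints
    return -torch.sigmoid(soft_min).mean()   # minimize loss = maximize coverage

# one projected-gradient step, assuming alpha.requires_grad is True:
#   soft_volume_loss(g_lower, xs, alpha).backward()
#   with torch.no_grad():
#       alpha -= lr * alpha.grad
#       alpha.clamp_(0.0, 1.0)    # project back onto [0, 1]
#       alpha.grad.zero_()
```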
It is worth noting that employing \(\mathbf{\alpha}\)-optimization breaks the guarantee in Proposition 2, for two reasons. Firstly, due to the sampling and differentiable relaxation, the optimization objective is not exact volume. Secondly, gradient-based optimization cannot guarantee improvement in its objective after each update. Nonetheless, \(\mathbf{\alpha}\)-optimization can significantly improve the approximation in practice, and so we employ it in our method.
**Example 3**.: _We revisit the vehicle parking problem in Example 1. Figure 2(a) and 2(b) show the computed under-approximations before and after local optimization. We can see that the cutting lines for all three specifications are optimized, which effectively improves the approximation quality._
```
Input: Property \((I,O,p)\), Maximum iterations \(R\)
Output: Verification Result \(\in\) {True, False, Unknown}
1  \(\text{vol}(I)\leftarrow\text{ExactVolume}(I)\);
2  \(\mathcal{C}\leftarrow\text{OuterBox}(I)\);  // For general polytopes \(I\)
3  \(\mathcal{T}\leftarrow\text{InitialRun}(\mathcal{C},O)\);
4  while Iterations \(\leq R\) do
5      \(\mathcal{T}\leftarrow\text{Refine}(\mathcal{T},\mathcal{C},O)\);
6      if \(\text{EstimateVolume}(\mathcal{T})\geq p\times\text{vol}(I)\) then
7          if \(\text{ExactVolume}(\mathcal{T})\geq p\times\text{vol}(I)\) then return True
8  return Unknown
```
**Algorithm 3** Quantitative Verification
### _Quantitative Verification_
Given a quantitative property \((I,O,p)\), where \(O\) is a polyhedron and \(I\) a polytope, we now show how to use our efficient preimage under-approximation method to verify the property. Assume for now that \(I\) is a hyperrectangle, so that we can take \(\mathcal{C}=I\) (the case of a general polytope is discussed in the supplementary material).
In order to verify the property, we can set the volume threshold for Algorithm 1 to be \(p\times\text{vol}(I)\), such that we have \(\frac{\widehat{\text{vol}}(\mathcal{T})}{\text{vol}(I)}\geq p\) if the algorithm terminates before reaching the maximum iterations. However, our Monte Carlo estimates of volume (Line 4) cannot provide a sound guarantee that \(\frac{\text{vol}(\mathcal{T})}{\text{vol}(I)}\geq p\). To resolve this problem, we propose to run exact volume computation [43] only when the Monte Carlo estimate reaches the threshold. If the exact volume \(\text{vol}(\mathcal{T})\geq p\times\text{vol}(I)\), then the property is verified. Otherwise, we continue running the preimage refinement. This procedure is shown in Algorithm 3, where InitialRun generates an initial approximation to the restricted preimage as in Lines 1-3 of Algorithm 1, and Refine performs one iteration of approximation refinement as in Lines 5-11 of Algorithm 1. Termination occurs when we have verified the quantitative property, or when the maximum number of iterations has been exceeded.
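A compact sketch of this loop, with all helper names ours and `exact_volume` standing for an exact polytope-volume tool as in [43]:

```python
def quantitative_verify(I, O, p, max_iters):
    """Sketch of Algorithm 3 under our own naming conventions."""
    vol_I = exact_volume(I)
    C = outer_box(I)                 # C = I when I is already a hyperrectangle
    T = initial_run(C, O)            # Lines 1-3 of Algorithm 1
    for _ in range(max_iters):
        T = refine(T, C, O)          # one refinement iteration (Lines 5-11)
        if estimate_volume(T) >= p * vol_I:      # cheap Monte Carlo screen
            if exact_volume(T) >= p * vol_I:     # sound exact check
                return "True"
    return "Unknown"
```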
We see that the algorithm outputs _True_ only if the _exact_ volume of the DUP approximation \(\mathcal{T}\) exceeds the threshold, even though sampling is used during the approximation refinement.
**Theorem 1**.: _Algorithm 3 is sound for quantitative verification._
Algorithm 3 is not complete as it outputs Unknown when the iterations budget is exceeded. A possible modification to the algorithm is to remove the threshold, and terminate when the subregions are "sufficiently small". By sufficiently small, we mean that, for a subregion, the derived concrete bounds for each neuron in the neural network are such that the neuron is stable (Equation 1); such "sufficiently small" subregions exist by continuity. If this is the case, the linear lower (and upper) bound \(\underline{\mathbf{\Delta}x}+\underline{\mathbf{b}}\) is exact over that subregion. This means the polytope approximation to the restricted preimage will also be exact. Thus, we can split our original region until all subregions satisfy the condition.
Unfortunately, this is still not enough to guarantee completeness, because our choice of subregion to split is probabilistic (uses sampling). That is, there exists a run of the algorithm where it splits within the interval \([0,0.5]\) (say) in every iteration, but never splits on \([0.5,1]\). In practice, for even moderately sized neural networks, it is computationally infeasible to obtain sufficiently small subregions, and so terminating according to a time/iterations budget is more appropriate. Complete proofs for propositions and theorems are provided in the supplementary material.
## VI Experiments
We perform experimental evaluation of the proposed approach on a set of benchmark tasks and demonstrate its effectiveness in approximation precision and efficiency, as well as an application to quantitative analysis of global properties. We aim to answer the following research questions (RQs): **RQ1** How effective is our approach in preimage approximation? **RQ2** How effective are our global refinement and local optimization methods, and how do the parameter configurations affect the performance of our approach? **RQ3** Can our approach be applied to quantitative verification of neural networks?
### _Benchmark_
We focus on preimage analysis for neural networks on a benchmark of control and reinforcement learning (RL) tasks, where global analysis on the entire input space or a critical input subspace is especially needed. Besides the vehicle parking task [37] shown in the example, we use the following benchmarks:
* Aircraft collision avoidance system (VCAS) [44]: The VCAS system is used to provide advisories for collision avoidance between the ownship and intruder aircraft. VCAS uses four input features \((h,h_{A},h_{B},t)\) representing the relative altitude of the aircraft, the vertical climbing rates of the ownship and intruder aircraft, respectively, and the time to loss of horizontal separation. VCAS is implemented by nine feed-forward neural networks, each built with a hidden layer of 21 neurons and an output layer of nine neurons corresponding to different advisories.
* Neural network controllers: We use the trained neural network controllers from VNN-COMP 2022 [45] for three RL tasks: Cartpole, Lunarlander, and Dubinsrejoin [46]. The neural networks for Cartpole and Lunarlander tasks have two hidden layers with 64 neurons, and the neural network for Dubinsrejoin has two hidden layers with 256 neurons. (1) The neural network controller for Cartpole aims to balance a pole atop a cart by controlling the movement of the cart, which has four input variables representing the position and velocity of the cart and the angle and angular velocity of the pole, and has two output actions. (2) The controller for Lunarlander tackles the task of correct landing of a moon lander on a pad, which has eight input features and four output actions. (3) The controller for Dubinsrejoin targets guiding a wingman craft to a certain radius around a lead aircraft, where the input and output spaces are both eight dimensional.
### _Evaluation_
#### VI-B1 Evaluation Metric

To evaluate the quality of the preimage approximation generated from our approach, we define the _coverage ratio_ to be the ratio of the volume covered to the volume of the restricted preimage, i.e. \(\text{cov}(\mathcal{T},f_{\mathcal{C}}^{-1}(O)):=\frac{\text{vol}(\mathcal{T})}{\text{vol}(f_{\mathcal{C}}^{-1}(O))}\). Since \(\mathcal{T}\) is an under-approximation to \(f_{\mathcal{C}}^{-1}(O)\), we have \(\text{cov}(\mathcal{T},f_{\mathcal{C}}^{-1}(O))\in[0,1]\). This gives us a normalized measure of the quality of the approximation, which can be compared across preimages for different output sets (e.g., the preimage of different labels), where \(\text{vol}(f_{\mathcal{C}}^{-1}(O))\) may vary. In practice, before running preimage generation, we estimate \(\text{vol}(f_{\mathcal{C}}^{-1}(O))\) as \(\widehat{\text{vol}}(f_{\mathcal{C}}^{-1}(O))=\frac{1}{N}\sum_{i=1}^{N}\mathds{1}_{f(x_{i})\in O}\), where \(x_{1},...,x_{N}\) are samples from \(\mathcal{C}\), while the estimate of \(\text{vol}(\mathcal{T})\) is updated in each iteration in Algorithm 1. We can set the target volume threshold (stopping criterion) to be \(v=r\times\widehat{\text{vol}}(f_{\mathcal{C}}^{-1}(O))\), where \(r\) is a _target coverage ratio_.
#### VI-B2 RQ1. Effectiveness in Preimage Approximation
To investigate the efficiency and scalability of our approach, we perform comparison experiments with the exact preimage generation method [13]. We also evaluate the scalability of our approach w.r.t. network layer widths and depths.
Aircraft Collision AvoidanceIn our experiment, we use the following input region for the ownship and intruder aircraft as in [13]: \(h\in[-8000,8000]\), \(h_{A}\in[-100,100]\), \(h_{B}=30\), and \(t\in[0,40]\). We consider the output property \(O=\{y\in\mathbb{R}^{9}\mid\wedge_{i\in[2,9]}\;y_{1}\geq y_{i}\}\) and generate the preimage approximation for the nine neural networks VCAS 1 to 9. We set the target coverage ratio as 90% and the termination criterion - the iteration limit of our algorithm - as 500.
The experimental results are summarized in Table I. We compare our method with exact preimage generation, showing the number of polytopes (#Poly) in the under-approximation and exact preimage, respectively, and time in seconds (Time(s)). Column "PolyCov" shows the approximate coverage ratio of our approach when the algorithm terminates. For all VCAS networks, our approach effectively generates the preimage under-approximations with the polytope number varying from 6 to 18. Compared with the exact method, our approach realizes an average reduction of 91.1% (131 vs 12). Further, the computation time of our approach for all neural networks is less than 20s, demonstrating orders-of-magnitude improvement in efficiency (\(564\times\) faster on average).
Neural Network ControllersIn this experiment, we consider neural network controllers for the reinforcement learning tasks. The exact preimage generation method fails to scale to neural networks of these sizes due to exponential complexity. Table II summarizes the experimental results.
\begin{table}
\begin{tabular}{c|c c|c c c}
\hline\hline
\multirow{2}{*}{**Models**} & \multicolumn{2}{c|}{**Exact**} & \multicolumn{3}{c}{**Our**} \\
\cline{2-6}
 & **\#Poly** & **Time(s)** & **\#Poly** & **Time(s)** & **PolyCov(\%)** \\
\hline
VCAS 1 & 49 & 6348.937 & 13 & 13.619 & 90.5 \\
VCAS 2 & 120 & 6325.712 & 11 & 10.431 & 90.4 \\
VCAS 3 & 253 & 6327.981 & 18 & 18.05 & 90.0 \\
VCAS 4 & 165 & 6435.46 & 11 & 10.384 & 90.4 \\
VCAS 5 & 122 & 6366.877 & 11 & 10.855 & 91.1 \\
VCAS 6 & 162 & 6382.198 & 10 & 9.202 & 92.0 \\
VCAS 7 & 62 & 6374.165 & 6 & 4.526 & 92.8 \\
VCAS 8 & 120 & 6341.173 & 14 & 13.677 & 91.3 \\
VCAS 9 & 125 & 6366.941 & 11 & 10.782 & 90.3 \\
\hline
**Average** & 131 & 6363.272 & 12 & 11.281 & 91.0 \\
\hline\hline
\end{tabular}
\end{table} TABLE I: Performance of preimage generation for VCAS.
Fig. 3: Effect of different configurations on preimage approximation.
Firstly, we note that our approach can still effectively generate preimage under-approximations for the neural network controllers. Second, we see that, even with the same input region under analysis, there can be large differences in generation time and polytope number for different output sets to reach the target coverage ratio (e.g., 154 and 32 for Cartpole). This is because different output sets lead to differences in linear relaxation errors, as well as differences in the polytope tree/subregion splitting procedure.
Network SizeWe investigate the scalability of our approach by training neural networks of varying sizes for VCAS and evaluating the preimage approximation quality and runtime cost. Figure 3(a) and 3(b) depict the evaluation results w.r.t. layer width and layer depth. Regarding layer width, as the number of hidden nodes increases, more refinement iterations are needed to reach the target approximation quality, resulting in a greater number of polytopes and higher runtime cost. We observe that our approach is more sensitive to depth than width due to the accumulated convex relaxation errors. The approximation coverage decreases to 62.4% for 6 hidden layers when the algorithm reaches an iteration limit of 5000. Still, our method offers an advantage over the approximate method in [14], for which layer width (e.g., over 32) is computationally hard due to the exponential complexity.
Our approach can effectively generate approximations with orders-of-magnitude improvement in efficiency and scale beyond existing exact and approximate methods to neural network controllers for RL tasks.
#### VI-B3 RQ2. Refinement Methods and Parameters
We examine the effectiveness of our global refinement and local optimization in improving the approximation quality, as well as the impact of parameter configurations.
Subregion SelectionAs a baseline for comparison, we conduct experiments with random region selection. The random strategy differs from our approach in selecting the next region to split randomly, without prioritization. We perform comparison experiments on the benchmark tasks. Table III summarizes the evaluation results of the random strategy (Column "Rand") and our method (Column "Our") for under-approximation refinement. We set the same target coverage ratio and iteration limit for both strategies. Note that, for _Dubinsrejoin_, the random selection method hits the iteration limit and fails to reach the target coverage ratio. The results confirm the effectiveness of our region selection method in that _fewer_ iterations of the approximation refinement are required to reach the target coverage ratio, leading to (i) a smaller number of polytopes (#Polytope), reduced by 79.1% on average, and (ii) a 76.2% average runtime reduction.
Splitting FeatureWe compare our (greedy) splitting method with the heuristic splitting method, which chooses to split the selected subregion along the input index with the largest value range. We present the comparison results with the heuristic method (Column "Heuristic") in Table III. Our method requires splitting on all input features, computing the preimage approximations for all splits, and then choosing the dimension that refines the under-approximation the most. We find that, even with parallelization of the computation over input features, our approach leads to larger runtime overhead per-iteration compared with the heuristic method. Despite this, we find that our strategy actually requires _fewer_ refinement iterations to reach the target coverage, leading to a smaller number of polytopes (43.3% reduction on average) for the same approximation quality, demonstrating the per-iteration improvement in volume of the greedy vs heuristic strategy.
| Task | Property | #Poly (Optim. / No Optim.) | PolyCov(%) (Optim. / No Optim.) | Time(s) (Optim. / No Optim.) |
|---|---|---|---|---|
| Cartpole | \(\{y\in\mathbb{R}^{2}\mid y_{1}\geq y_{2}\}\) | 154 / 5002 | 75.2 / 59.7 | 98.839 / 1139.853 |
| Cartpole | \(\{y\in\mathbb{R}^{2}\mid y_{2}\geq y_{1}\}\) | 32 / 40 | 75.4 / 75.1 | 17.156 / 13.210 |
| Lunarlander | \(\{y\in\mathbb{R}^{4}\mid\wedge_{i\in\{1,3,4\}}\,y_{2}\geq y_{i}\}\) | 130 / 5002 | 75.0 / 63.5 | 154.882 / 5023.098 |
| Dubinsrejoin | \(\{y\in\mathbb{R}^{8}\mid\wedge_{i\in[2,4]}\,y_{1}\geq y_{i}\,\wedge\,\wedge_{i\in[6,8]}\,y_{5}\geq y_{i}\}\) | 78 / 5002 | 75.2 / 51.4 | 102.673 / 3985.861 |
| **Average** | | 99 / 3762 | 75.2 / 62.4 | 93.388 / 2540.506 |

TABLE II: Performance of preimage generation for reinforcement learning tasks.
| Model | #Polytope (Rand / Heuristic / Our) | PolyCov(%) (Rand / Heuristic / Our) | Time(s) (Rand / Heuristic / Our) |
|---|---|---|---|
| Vehicle Parking | 41 / 4 / 4 | 90.5 / 95.2 / 91.3 | 8.699 / 0.967 / 0.661 |
| VCAS | 56 / 72 / 13 | 92.3 / 90.0 / 90.5 | 29.15 / 30.829 / 13.619 |
| Cartpole | 151 / 34 / 32 | 75.3 / 75.3 / 75.4 | 84.639 / 12.313 / 17.156 |
| Lunarlander | 481 / 238 / 130 | 75.1 / 75.1 / 75.0 | 505.747 / 119.744 / 154.882 |
| Dubinsrejoin | 502 / 105 / 78 | 61.8 / 75.3 / 75.2 | 587.908 / 65.869 / 102.673 |
| **Average** | 246 / 91 / 51 | 79.0 / 82.2 / 81.5 | 243.229 / 45.944 / 57.798 |

TABLE III: Effectiveness of the refinement method.
**Local Optimization.** We conduct an ablation analysis to assess the effectiveness of our local optimization method. We perform experiments on the RL tasks, setting the target coverage ratio to 75% and the iteration limit of our algorithm to 5000. Table II (Column "No Optim.") summarizes the evaluation results when local optimization is removed from the algorithm. The results reveal that removing local optimization leads to a substantial increase in polytope number (3670.9% on average) and runtime cost (2554.3%), while failing to reach the target approximation quality before the algorithm terminates.
**Parameter Configurations.** We investigate the impact of two parameters: the sample size \(N\) used in the Monte-Carlo volume estimates, and the target coverage, which controls the tradeoff between approximation quality and efficiency.
Figure 2(c) shows the impact of the sample size on the number of polytopes and the time required to reach a fixed target coverage on the Cartpole task. Both the number of polytopes and the time required decrease as the sample size increases, up to a sample size of 10000. This reflects that a larger sample size leads to more accurate volume estimates, improving the refinement procedure and resulting in fewer iterations to reach the target coverage. On the other hand, increasing the sample size further yields diminishing returns in estimation error and increases the per-iteration time once volume estimation becomes the bottleneck. In our experiments, we use a sample size of 10000 to balance estimation accuracy and runtime efficiency.
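For illustration, such a Monte-Carlo coverage estimate can be organized as in the following sketch; the polytope representation \((A, b)\) with \(\{x : Ax \leq b\}\) and the `in_output_set` oracle are assumptions made for this example rather than the paper's implementation.

```python
import numpy as np

def coverage_estimate(polytopes, in_output_set, lo, hi, n_samples=10_000, seed=0):
    """Monte-Carlo estimate of vol(polytope union) / vol(preimage within box).

    polytopes: list of (A, b) pairs, each encoding {x : A @ x <= b}
    in_output_set: callable, True iff the network output f(x) lies in O
    lo, hi: arrays bounding the input box region under analysis
    """
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=(n_samples, len(lo)))
    in_preimage = np.array([in_output_set(x) for x in xs])
    in_union = np.zeros(n_samples, dtype=bool)
    for A, b in polytopes:
        in_union |= np.all(xs @ A.T <= b, axis=1)
    # For a valid under-approximation, membership in the union implies
    # membership in the preimage, so this ratio is at most 1 up to noise.
    return in_union.sum() / max(in_preimage.sum(), 1)
```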
The target coverage controls the preimage approximation quality of our approach. Figure 2(d) shows the relation between the target coverage and the number of polytopes required (a similar relation holds between target coverage and runtime). Overall, computing a preimage approximation with a higher target coverage (better approximation quality) requires more refinement iterations on the input region, leading to more polytopes in the DUP approximation and a higher runtime cost.
The proposed global refinement and local optimization methods are essential to improving preimage approximation quality and runtime efficiency. The sample size trades off estimation accuracy against runtime overhead, while the target coverage offers the flexibility to control approximation precision based on the available budget.
#### VI-B4 RQ3. Quantitative Verification
Given a neural network, input set \(I\), and output set \(O\), we use our framework to perform quantitative verification of the property \((I,O,p)\); that is, to check whether \(f(x)\in O\) is true for a specified proportion \(p\) of input values \(x\in I\).
**Vehicle Parking.** We first perform quantitative verification on neural networks for the vehicle parking task. Consider the quantitative property with input set \(I=\{x\in\mathbb{R}^{2}\mid x\in[0,1]^{2}\}\), output set \(O=\{y\in\mathbb{R}^{4}\mid\bigwedge_{i=2}^{4}y_{1}-y_{i}\geq 0\}\), and quantitative proportion \(p=0.95\). We use Algorithm 3 to verify this property, with an iteration limit of 500. The computed under-approximation is a union of two polytopes, which takes 0.942s to reach the target coverage. We then compute the exact volume ratio of the under-approximation against the input region. The final quantitative proportion reached by our under-approximation is 95.2%, which verifies the quantitative property.
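For such low-dimensional input regions, the exact volume of each polytope can be obtained by vertex enumeration; the SciPy-based sketch below is one standard way to realise this step, assuming a known interior point (e.g., a Chebyshev center), and is not necessarily the implementation used in the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

def polytope_volume(A, b, interior_point):
    """Exact volume of {x : A x <= b} in low dimension via vertex enumeration."""
    # SciPy expects halfspaces stacked as [A, -b], i.e., A x - b <= 0.
    halfspaces = np.hstack([A, -b.reshape(-1, 1)])
    vertices = HalfspaceIntersection(halfspaces, interior_point).intersections
    return ConvexHull(vertices).volume
```

The quantitative proportion is then the summed volume of the disjoint polytopes divided by the volume of the input region \(I\).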
**Aircraft Collision Avoidance.** In this example, we consider the VCAS system and a scenario where the two aircraft have a negative relative altitude from intruder to ownship (\(h\in[-8000,0]\)), the ownship aircraft has a positive climbing rate \(h_{A}\in[0,100]\), the intruder has a stable negative climbing rate \(h_{B}=-30\), and the time to loss of horizontal separation is \(t\in[0,40]\); these constraints define the input region \(I\). For this scenario, it is clear that "COC" should be the advisory. We then apply Algorithm 3 to verify the quantitative property with \(O=\{y\in\mathbb{R}^{9}\mid\bigwedge_{i=2}^{9}y_{1}-y_{i}\geq 0\}\) and proportion \(p=0.9\), setting an iteration limit of 500. The computed under-approximation is a union of 6 polytopes, which takes 5.620s to reach the target coverage. The exact quantitative proportion reached by the generated under-approximation is 90.8%, which thus verifies the quantitative property.
The proposed method can be effectively applied to provide provable guarantees for quantitative properties by enabling arithmetic computation over the input space.
## VII Threats to Validity
We discuss potential threats to the validity of our approach and possible ways to address them.
The _external_ threat mainly comes from the application domain. Our approach is designed for neural networks in the context of safety-critical control tasks, which have low input dimensionality. High-dimensional input spaces, such as those found in computer vision tasks, are challenging for the proposed method (and for any preimage approximation method in general), as one can easily construct preimages that cannot be well-approximated by any small set of polytopes. However, we believe that our approach can be adapted to investigate semantically meaningful input subspaces, e.g., a patch containing some visual attribute. To mitigate potential _internal_ threats, we perform a thorough analysis of our methodological choices and parameter configurations to evaluate their impact on the results.
## VIII Conclusion
We present an efficient and practical algorithm for preimage approximation of neural networks. Our algorithm is the first _anytime_ approach, with the output after each iteration being a valid under-approximation that is guaranteed to continually improve. This allows the user to freely configure the approximation according to a time budget or a target coverage. As an application, we demonstrate the verification of global quantitative properties of neural networks. Our evaluation on a range of benchmark tasks shows a significant advantage in runtime efficiency compared with exact preimage generation. One promising direction for future work is to explore multi-neuron relaxation techniques to reduce the errors accumulated through deep layers, further improving scalability.
## Acknowledgments
This project received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115) and ELSA: European Lighthouse on Secure and Safe AI project (grant agreement No. 101070617 under UK guarantee).
|
2308.13441 | Mesh-Wise Prediction of Demographic Composition from Satellite Images
Using Multi-Head Convolutional Neural Network | Population aging is one of the most serious problems in certain countries. In
order to implement its countermeasures, understanding its rapid progress is of
urgency with a granular resolution. However, a detailed and rigorous survey
with high frequency is not feasible due to the constraints of financial and
human resources. Nowadays, Deep Learning is prevalent for pattern recognition
with significant accuracy, with its application to remote sensing. This paper
proposes a multi-head Convolutional Neural Network model with transfer learning
from pre-trained ResNet50 for estimating mesh-wise demographics of Japan as one
of the most aged countries in the world, with satellite images from
Landsat-8/OLI and Suomi NPP/VIIRS-DNS as inputs and census demographics as
labels. The trained model was performed on a testing dataset with a test score
of at least 0.8914 in $\text{R}^2$ for all the demographic composition groups,
and the estimated demographic composition was generated and visualised for 2022
as a non-census year. | Yuta Sato | 2023-08-25T15:41:05Z | http://arxiv.org/abs/2308.13441v1 | Mesh-Wise Prediction of Demographic Composition from Satellite Images Using Multi-Head Convolutional Neural Network
###### Abstract
Population aging is one of the most serious problems in certain countries. In order to implement countermeasures, understanding its rapid progress at a granular resolution is urgent. However, a detailed and rigorous survey with high frequency is not feasible due to the constraints of financial and human resources. Nowadays, Deep Learning is prevalent for pattern recognition with significant accuracy, including applications to remote sensing. This paper proposes a multi-head Convolutional Neural Network model with transfer learning from pre-trained ResNet50 for estimating the mesh-wise demographics of Japan, one of the most aged countries in the world, with satellite images from Landsat-8/OLI and Suomi NPP/VIIRS-DNS as inputs and census demographics as labels. The trained model was evaluated on a testing dataset with a test score of at least 0.8914 in R\({}^{2}\) for all the demographic composition groups, and the estimated demographic composition was generated and visualised for 2022 as a non-census year.
demographic composition estimation, population aging, remote sensing, deep learning, convolutional neural network, transfer learning
## I Introduction
While the total population of human beings has increased radically, some countries and regions are faced with unprecedented aging of society. As a forecast, 1 in 6 people in the world will be over 60 years old, with their number doubling from 1 billion in 2020 to 2.1 billion by 2050 [1]. The radical change of demographic composition in countries and regions affects the feasibility of governments' social welfare and urban planning policies, such as pensions, medical care, transportation, and other physical and social infrastructure [2, 3]. In order to adjust to this transition of societies, obtaining the up-to-date demographic composition is of urgent importance. As seen in the case of Italy [4], a low fertility rate stems from multiple factors including identity towards family, employability, and composition of family, but the importance of each factor varies strongly across sub-regions. In terms of making data-driven decisions on population aging with high geographic heterogeneity, it is not sufficient to obtain only the total population; it is instead crucial to gain metrics of the demographic composition. One of the most accurate and detailed data sources for demographic composition is the census of each country. Although the granularity varies across countries and regions, a census provides precise attributes of residents in each tract or mesh at the timing of a "census year", such as population by age, number of households, and average income in the area. However, due to its thorough methodology and financial or administrative constraints, a census is usually conducted only every several years, such as every ten years in the U.S. and the U.K. [5, 6]. Thus, it is not feasible to conduct similar national surveys more frequently, such as on an annual basis, and approximation by estimation from other data sources is required.
Among others, Japan is now suffering from unprecedented population aging in the history of human beings. According to estimates [7], the share of people over 64 years old reached 30.0 percent of the whole population in Japan, the second highest among all countries after Monaco. In parallel, the polarisation of the population is gaining momentum, with the Tokyo Metropolitan Area enjoying a population influx from the other regions since the 1970s [8]. Under these circumstances, there is an increasing number of "marginal settlements" (_genkai shuraku_) in rural areas, where daily life has become infeasible due to the lack of young labour and/or infrastructure and residents have had to abandon their housing and migrate to urban areas [9]. Considering the fast pace of population aging, with the population ratio over 64 years old estimated to reach 38.8% by 2050 [10], it is crucial to understand the up-to-date demographic composition in a granular manner from the perspective of urban and regional planning. Although the Statistics Bureau of Japan provides the mesh-wise demographic composition as a result of the census, it is only released every five years [11]. To fill the gap between census years, precise estimation is of importance.
Nowadays, pattern recognition of non-linear relationships between different types of datasets is prevalent, with potential applications to problems in geography, owing to the increase in data volume and newly proposed machine learning algorithms [12]. In particular, Deep Learning models, as extensions of ANNs, have shown remarkable performance in the area of remote sensing utilising satellite images (e.g., land-use classification [13]). Since well-known satellites such as Landsat-8 [14], Sentinel-2 [15], and Suomi NPP [16] periodically observe the surface of the Earth globally, they constantly provide a data source rich enough to potentially approximate other socio-economic variables. When it comes to the estimation of population, previous studies have already tried to estimate population density from satellite images using Convolutional Neural Networks (CNNs), such as
in India [17] and China [18, 19]. However, the estimation of demographic composition, rather than total population, is understudied, especially in the context of population aging. This is mainly because population estimation has tended to focus on capturing the fast pace of urbanisation around cities in 'developing' countries, where rigorous data sources such as a census are simply not available. Moreover, some models from previous studies relied on a complex mixture of multiple data sources, including POI data (e.g., [18]). Although better prediction scores could be expected as the variety and amount of input data increase, this approach implicitly lowers the generalizability of models, because data availability depends on the area of interest. Thus, it is important to develop prediction models that intentionally limit the input data sources to those commonly available among the countries and regions which are (and will be) suffering from population aging.
Aware of the issues mentioned above, this paper proposes a CNN model for predicting the demographic composition of Japan on an annual basis from the satellite images of Landsat-8/OLI and Suomi NPP/VIIRS-DNS, which are publicly available for any location on the Earth. Since the mesh-wise census demographic composition is available in Japan every five years, this paper employs the demographic composition from the censuses conducted in 2015 and 2020 as labels for training, validation, and testing. At the end of the paper, a new dataset is synthesised from the satellite images of 2022, which is not a census year. Owing to the public data availability in any country and region, this paper is significant in that the approach generalises to granular estimation of demographic composition not only in Japan but also in other countries and regions.
## II Related Works
### _Remote sensing for demographics estimation before Deep Learning_
In the field of remote sensing, satellite images have been utilised for a variety of purposes, such as tracking desertification, aerosol pollution, and urbanisation [20]. Since its first launch in 1972, the Landsat Missions operated by the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) have been employed as a major source of surface reflectance of the whole globe [14]. Although its history is shorter, the Sentinel Mission of the European Space Agency (ESA) has followed Landsat as another source of satellite images since its first launch in 2014, with several applications including cropland monitoring, forest ecology, and urbanisation [15, 21]. When it comes to demographics estimation, the urban population of Ohio in the U.S. was estimated from the impervious surface pixel values of Landsat-7 Enhanced Thematic Mapper (ETM+) images, by Ordinary Least Squares (OLS) regression on a pixel basis and a Spatial Autoregressive model (SAR) on a zonal basis [22]. However, as the authors stated, the accuracy of the zonal estimation was "unacceptable" due to the inability of a linear regression model to capture non-linearity in pixel patterns.
While Landsat and Sentinel observe the earth's surface reflectance in the daytime, other satellites focus on night-time light (NTL). With their global coverage, the Defence Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) by the U.S. Department of Defence and the Suomi National Polar-Orbiting Partnership's Visible Infrared Imaging Radiometer Suite Day-Night Band (Suomi-NPP/VIIRS-DNB) are among the major sources of NTL [23]. Previous studies pointed out that NTL can approximate socio-economic indicators such as land-use boundaries [24] and GDP [25, 26]. For the estimation of population, the correlation between NTL from DMSP-OLS and 1km mesh-wise population density was characterised in Hokkaido Prefecture, Japan, by geographically weighted regression (GWR), with a prediction score of 0.8833 in \(R^{2}\)[27]. In another study [28], 100m grid-wise population density was estimated over the extent of China using multiple sources, including NTL from Suomi-NPP/VIIRS-DNB, the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), a Digital Elevation Model (DEM), and other POI data, by Random Forest regression, whose accuracy varied between 0.3 and 0.8 in \(R^{2}\) depending on the province.
### _Deep Learning and transfer learning with application to remote sensing_
The foundational concept of the Artificial Neural Network (ANN) was initially proposed as a multi-layer perceptron (MLP) mimicking the human brain's biological neurons, in the sense that input information propagates in a non-linear manner for pattern recognition of objects such as images, sounds, and sentences [29]. ANNs introduced an activation function for each neuron, which enables the model to capture the non-linearity of input data. An ANN is composed of multiple layers of neurons with weights, and each of the weights is trained through a chain of partial derivatives on a loss function, called backpropagation. Although the concept was established early, it was computationally expensive at a time when Graphics Processing Units (GPUs) were not yet adapted to the computation of backpropagation [30]. With the advance of GPUs, an ANN model called AlexNet was proposed for image recognition tasks with the introduction of convolution layers, which capture patterns of neighbouring pixels efficiently by sharing parameters as filters [31]. This 8-layer CNN model scored a remarkable top-5 classification error of 15.3% on the ImageNet dataset, consisting of 1.2 million 224 \(\times\) 224-pixel red-green-blue (RGB) images over 1,000 image categories. AlexNet also introduced a new activation function called the Rectified Linear Unit (ReLU), \(\sigma(x)=\max(0,x)\) for input \(x\). Compared with previous activation functions such as sigmoid and tanh, ReLU performed better at capturing non-linearity in the CNN model. Since the success of AlexNet, a variety of CNN models have been proposed with increased accuracy and numbers of convolution layers, such as GoogLeNet with 22 layers [32] and VGGNet with up to 19 layers [33]. However, the increase in layer number did not simply translate into better prediction, as the gradient in backpropagation could become unstable, exploding or vanishing. To solve this problem and make the layers "deeper", ResNet was proposed by adopting residual layers
which introduce skip connections from previous convolution layers [34]. With this improvement, ResNet could be extended to as many as 152 layers, achieving a 3.57% top-5 classification error on the ImageNet dataset and outperforming the human error rate of 5.1% [35].
As the accuracy of CNN models improved, well-known pre-trained CNN models were fine-tuned and re-utilised in domains other than the image classification of the ImageNet dataset, namely through transfer learning [36]. While default CNN models require huge amounts of data (e.g., the 1,000-category classification of 1.2 million images from ImageNet), transferred models can achieve a considerable level of prediction with fewer samples if the task domain is similar. One of the simplest ways of transfer learning is to append or modify the last layer of a model while keeping the other layers unchanged at the beginning of training. For instance, ImageNet-based CNN models including AlexNet, GoogLeNet, and VGGNet have been transferred via fine-tuning to two medical image detection tasks from CT-scanned slices [37].
In the field of remote sensing, CNN models have mainly been used for image segmentation tasks, where the pixel-wise classification output has the same size as the input images, such as cloud detection [38], deforestation monitoring [39], and land-use classification [40, 41]. In the case of demographics estimation, previous studies proposed CNN models for regression tasks on population density. As an example, ImageNet-pre-trained CNN models including VGGNet and ResNet were employed for estimating the total population in 30 arc-second grids (approximately 1 km at the equator) in the U.S., using Sentinel-2 images in RGB and near-infrared (NIR) [19]. The test error of the logged population in the Metro Dallas area scored an \(R^{2}\) of 0.875 and 0.906 for VGGNet and ResNet, respectively. Another study developed an original 6-layer CNN model with MLP-based feature extraction to estimate the 100m mesh-wise population density in Shenzhen, China, using multiple data sources including NTL from VIIRS-DNS, NDVI from the Chinese Academy of Sciences, and social sensing POI data [18]. Although daytime multispectral images like Landsat and Sentinel were not used, the prediction on the testing dataset reached 0.77 in \(R^{2}\). While these previous studies showed the success of CNN models in the task of population estimation, the more detailed breakdown into demographic composition is understudied. Also, some of the data sources employed were local datasets only available in specific countries or regions, limiting the generalisability of the trained models to other geographic locations.
## III Datasets and study areas
To develop a CNN model for estimating demographic composition in Japan, this paper employed Landsat-8/OLI Collection 2 Level-2 [42] and Suomi NPP/VIIRS-DNS [16] as inputs, and mesh-wise population census data provided by the Statistics Bureau of Japan via the e-Stat database [43] as ground-truth labels. Samples were obtained from 2015 and 2020 with labels of demographic composition, and from 2022 without labels, as a non-census year for generating estimated labels. The geographic extent of the samples was defined as all meshes available in Japan in either the 2015 or the 2020 census. Each mesh cell is geographically defined as a "Basic Grid Square" in the WGS 84 geodesic system, spanning 30 arc-seconds in latitude and 45 arc-seconds in longitude (approximately a 1km square in the territory of Japan) [44]. While the population census provides the number of residents in each mesh cell by age as of October 1st of the census year, the aggregated numbers in three demographic composition groups (residents below 14 years old, between 15 and 64 years old, and over 64 years old) were used as labels for the CNN model.

This paper utilised satellite images from both daytime (Landsat-8/OLI) and night-time (Suomi NPP/VIIRS-DNS) so that the input could include more detailed information about land use and human activities. Presumably, the existence of either building structures or NTL alone does not necessarily indicate human settlement (e.g., abandoned housing in rural areas; 24-hour factories in industrial areas). Thus, stacking multiple images as a single tensor could help the CNN model recognise the demographic composition of an area from multiple aspects of the inputs. Landsat-8/OLI provides nine multispectral bands, depending on the wavelength of light. Out of the nine bands, bands 4, 3, and 2 were used as a natural-colour RGB image. In addition, bands 7, 6, and 5 were used as a false-colour RGB image, so that wavelengths beyond human vision could also be recognised by the CNN model. Moreover, three synthesised indices were appended as a third RGB image, computed from the original Landsat-8/OLI multispectral bands: NDVI, the Normalized Difference Built-up Index (NDBI), and the Normalized Difference Water Index (NDWI). While NDVI, as an index for vegetation, has already been used for population estimation (e.g., [28]), this paper additionally employed the indices for buildings and water to explicitly delineate the different characteristics of land. Following [45, 46], these three indices were calculated as follows:
\[\text{NDBI}=\frac{\text{MIR}-\text{NIR}}{\text{MIR}+\text{NIR}}=\frac{\text {Band 6}-\text{Band 5}}{\text{Band 6}+\text{Band 5}} \tag{1}\]
\[\text{NDVI}=\frac{\text{NIR}-\text{Red}}{\text{NIR}+\text{Red}}=\frac{\text {Band 5}-\text{Band 4}}{\text{Band 5}+\text{Band 4}} \tag{2}\]
\[\text{NDWI}=\frac{\text{Green}-\text{NIR}}{\text{Green}+\text{NIR}}=\frac{ \text{Band 3}-\text{Band 5}}{\text{Band 3}+\text{Band 5}} \tag{3}\]
where NIR and MIR stand for near-infra-red and middle-infra-red, respectively.
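As a minimal sketch of how Eqs. (1)-(3) translate into code, the snippet below computes the three indices with NumPy; the `band*` variables are assumed to be already-loaded 2-D surface-reflectance arrays, and the zero-denominator handling is an illustrative choice not specified in the paper.

```python
import numpy as np

def normalized_difference(a, b):
    """(a - b) / (a + b), with zero-denominator pixels mapped to 0."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    denom = a + b
    out = np.zeros_like(denom)
    np.divide(a - b, denom, out=out, where=denom != 0)
    return out

ndbi = normalized_difference(band6, band5)  # Eq. (1): (MIR - NIR) / (MIR + NIR)
ndvi = normalized_difference(band5, band4)  # Eq. (2): (NIR - Red) / (NIR + Red)
ndwi = normalized_difference(band3, band5)  # Eq. (3): (Green - NIR) / (Green + NIR)
index_image = np.stack([ndbi, ndvi, ndwi], axis=-1)  # the third RGB-like image
```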
Figure 1 illustrates the overview of processing images from the three data sources for the CNN model. Since satellite images are periodically collected from space as snapshots of the Earth, encountering cloud coverage in the images is inevitable. To avoid biases from cloud coverage, this paper employed adjusted images in which each pixel is an annual average of cloud-free observations. For Landsat-8/OLI, the Google Earth Engine API was used to obtain the adjusted images of "USGS Landsat 8 Level 2, Collection 2, Tier 1" [47]. In the case of Suomi NPP/VIIRS-DNS, adjusted raster
files were acquired from the Annual VNL V2.1 product maintained by the Earth Observation Group [48, 49, 50]. The spatial resolutions of Landsat-8/OLI and Suomi NPP/VIIRS-DNS are approximately 30m and 750m, respectively. To bridge this gap, the pixels of Suomi NPP/VIIRS-DNS were resampled to match those of Landsat-8/OLI. Because of the coarse spatial resolution of the satellite images relative to the size of the original census meshes, mesh cells were first aggregated from 30 arc-seconds \(\times\) 45 arc-seconds (1km square) to 5 arc-minutes \(\times\) 7 arc-minutes and 30 arc-seconds (10km square). After cloud removal by annual averaging, the two types of satellite images were cropped to the aggregated mesh cells. The cropped images were resized to 334 \(\times\) 334 pixels so that the CNN model could treat them as inputs with 30m spatial resolution over the 10km-square extent. In this pre-processing, the values of Suomi NPP/VIIRS-DNS were re-scaled with a natural logarithm. The output of this flow was the stack of Bands 2 to 7 from Landsat-8/OLI, the three synthesised indices (NDBI, NDVI, NDWI), and the NTL from Suomi NPP/VIIRS-DNS.
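A sketch of this per-cell pre-processing is given below, assuming the cropped 9-channel Landsat stack and the NTL raster are already in memory; the resampling routine and the exact logarithm variant (`log1p` here, so that zero radiance stays defined) are illustrative choices rather than the author's code.

```python
import numpy as np
from skimage.transform import resize

def preprocess_cell(landsat_stack, ntl, size=334):
    """Resize one 10km mesh cell to size x size and stack all 12 channels."""
    x = resize(landsat_stack, (size, size, landsat_stack.shape[-1]),
               anti_aliasing=True)                  # bands 2-7 + NDBI/NDVI/NDWI
    n = resize(ntl, (size, size), anti_aliasing=True)
    n = np.log1p(np.clip(n, 0.0, None))             # natural-log rescaling of NTL
    n = np.repeat(n[..., None], 3, axis=-1)         # NTL duplicated over 3 channels
    return np.concatenate([x, n], axis=-1)          # (334, 334, 12) input tensor
```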
Based on the processing per sample explained in Figure 1, Figure 2 illustrates the flow of sample management as a framework of a supervised learning algorithm. In order to train a CNN model for the task of demographic composition
Fig. 1: Flow chart of data processing per one single sample.
Fig. 2: Flow chart of data splitting for the supervised learning framework.
estimation, all the processed images and labels from the census years 2015 and 2020 were split into training, validation, and testing sets. To ensure an even distribution among the dataset groups, stratified sampling was employed by mesh cell ID and total population. That is, the unique mesh cell IDs were extracted from all the samples, and the total population averaged between 2015 and 2020 was obtained for each. It is worth noting that there was no geographical overlap between the training, validation, and testing datasets: radical urbanisation is rare to observe in Japan, and geographical duplication might cause data leakage problems [51] in the supervised learning framework. The mesh cell IDs were assigned to 10 partitions by the deciles of the averaged total population. Within each of the 10 partitions, samples were split into training, validation, and testing sets with a ratio of 8:1:1. The split samples were imported into PyTorch DataLoader objects [52] for mini-batch learning of the CNN model. Images from 2022 were directly imported into PyTorch DataLoader objects without labels. Figure 3 shows the spatial distribution of the training, validation, and testing samples from the years 2015 and 2020. Mesh cells with no data were omitted from the output, since missing data does not necessarily mean the absence of residents but may also reflect the lack of a census survey (e.g., the Japanese government claims sovereignty over the Kuril Islands but has not conducted a population census there, as they are under the effective control of the Russian government). Table I describes the summary statistics of the samples by dataset type and sample year. As seen on the left in Figure 4, the distribution of the population is heavily skewed. On the right in Figure 4, the base-10 logarithm is applied to the original population counts. Thus, for the labels of the CNN model, this paper employs the base-10 logarithm of the original population. However, some samples contain 0 for certain demographic composition groups, where the logarithm is not defined. To avoid errors in the subsequent computation, 0 was replaced with 1 (\(10^{0}\)) in advance. Figure 5 shows three randomly sampled images and labels. As seen in the figure, the images with RGB from Bands 4, 3, and 2 were close to natural colour, while those from Bands 7, 6, and 5 and from NDBI, NDVI, and NDWI were expressed in false colour. The combination of NDBI, NDVI, and NDWI could explicitly delineate water, forest, and other land covers.
Fig. 3: Spatial distribution of training, validation, and test dataset by two census years. JGD2011 was applied for the coordinate reference system. The map is based on OpenStreetMap.
NTL from Suomi NPP/VIIRS-DNS highlighted the location of human activities at night.
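The decile-based stratified split described above could be realised as in the following sketch; the DataFrame `cells` and its column names are hypothetical, and this is not the author's code.

```python
import numpy as np
import pandas as pd

# cells: one row per 10km mesh cell, with hypothetical columns "mesh_id"
# and "avg_pop" (total population averaged over the 2015 and 2020 censuses).
cells["stratum"] = pd.qcut(cells["avg_pop"], q=10, labels=False, duplicates="drop")

rng = np.random.default_rng(0)
splits = {"train": [], "val": [], "test": []}
for _, group in cells.groupby("stratum"):
    ids = rng.permutation(group["mesh_id"].to_numpy())
    n_train = int(0.8 * len(ids))                      # 8 : 1 : 1 ratio
    n_val = int(0.1 * len(ids))
    splits["train"].extend(ids[:n_train])
    splits["val"].extend(ids[n_train:n_train + n_val])
    splits["test"].extend(ids[n_train + n_val:])
```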
## IV Methodology
Let the CNN model be \(f:\mathcal{X}\rightarrow\mathcal{Y}\), where \(\mathcal{X}:=\{\mathbf{X}_{i}\mid\mathbf{X}_{i}\in\mathbb{R}^{334\times 334\times 12}\}\) is a feature tensor space consisting of four types of RGB-based images and \(\mathcal{Y}:=\{\mathbf{y}_{i}\mid\mathbf{y}_{i}\in\mathbb{R}^{3}\}\) is a label vector space consisting of the numbers in the three demographic composition groups. Then, the predicted demographic composition vector \(\hat{\mathbf{y}}_{it}\in\mathcal{Y}\) is expressed as follows:
\[\hat{\mathbf{y}}_{it}=f(\mathbf{X}_{i};\mathbf{\Theta}_{t}) \tag{4}\]
where \(\mathbf{\Theta}_{t}\) is an element of \(\mathbf{\Theta}\), the collection of layer-weight sets over training, such that \(\mathbf{\Theta}=\{\mathbf{\Theta}_{t}\}\), and \(t\in\mathbb{N}\) denotes the training step. The CNN model \(f\) was trained through gradient descent with backpropagation such that
\[\boldsymbol{\theta}_{t}^{l}=\boldsymbol{\theta}_{t-1}^{l}-\text{ Optim}(\alpha,\nabla\boldsymbol{\theta}_{t-1}^{l}) \tag{5}\]
\[\nabla\boldsymbol{\theta}_{t-1}^{l}=\frac{\partial\mathcal{L}(\hat{\mathbf{y} }_{it-1},\mathbf{y}_{i})}{\partial\boldsymbol{\theta}_{t-1}^{l}} \tag{6}\]
where \(\boldsymbol{\theta}_{t}^{l}\in\mathbf{\Theta}_{t}\) and \(\text{Optim}(\alpha,\nabla\boldsymbol{\theta}_{t}^{l})\) denote the \(l\)-th layer of \(\mathbf{\Theta}_{t}\) and the optimiser with learning rate \(\alpha\in\mathbb{R}^{+}\) and gradient \(\nabla\boldsymbol{\theta}_{t}^{l}\), respectively. For the loss function \(\mathcal{L}(\hat{\mathbf{y}}_{it},\mathbf{y}_{i})\), this paper adopts the following formula:
\[\mathcal{L}(\hat{\mathbf{y}}_{it},\mathbf{y}_{i})=\frac{1}{N}\sum_{i=1}^{N}||\hat{\mathbf{y}}_{it}-\mathbf{y}_{i}|| \tag{7}\]
where \(N\in\mathbb{N}\) denotes the number of samples in a dataset. For the optimiser, Adam [53] was applied with the following:
\[\text{Optim}(\alpha,\nabla\boldsymbol{\theta}_{t}^{l};\beta_{1},\beta_{2})=\alpha\frac{\hat{m}_{t}}{\sqrt{\hat{v}_{t}}+\epsilon} \tag{8}\]
\[m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})\nabla\boldsymbol{\theta}_{t-1}^{l} \tag{9}\]
\[v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})(\nabla\boldsymbol{\theta}_{t-1}^{l})^{2} \tag{10}\]
\[\hat{m}_{t}=\frac{m_{t}}{1-\beta_{1}^{t}} \tag{11}\]
\[\hat{v}_{t}=\frac{v_{t}}{1-\beta_{2}^{t}} \tag{12}\]
where \(\beta_{1}\) and \(\beta_{2}\) denote the hyperparameters of the first and second moment vectors. After each pass over all samples in the training dataset (one epoch), the trained CNN model \(f\) was validated on the samples of the validation dataset by calculating \(\mathcal{L}\). This cycle of training and validation was iterated over 250 epochs, and the model with the least validation error was selected as the best model in terms of generalisability. The chosen best model was tested on the samples of the testing dataset
Fig. 4: Distribution of population in all the samples from the training, validation, and testing dataset by demographic composition groups. Scales on the y-axis are original and logarithm with base 10 for the left and right figure, respectively. The blue lines across ages indicate that the values were from the same sample in a mesh cell either in 2015 or 2020.
and \(R^{2}_{(g)}\) was calculated for all the demographic composition groups \(g\) as follows:
\[R^{2}_{(g)}=\frac{\sum_{i}(\hat{y}^{(g)}_{i}-\bar{y}^{(g)})^{2}}{\sum_{i}(y^{(g)}_{i}-\bar{y}^{(g)})^{2}} \tag{13}\]
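Putting Eqs. (4)-(12) and the model-selection rule together, a minimal PyTorch training loop might look like the sketch below; the data loaders and the `model` object are assumed to exist, and `L1Loss` corresponds to Eq. (7) up to the reduction convention.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
loss_fn = torch.nn.L1Loss()  # mean absolute error, cf. Eq. (7)
best_val, best_state = float("inf"), None
for epoch in range(250):
    model.train()
    for xs, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xs), y)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        val = sum(loss_fn(model(xs), y).item()
                  for xs, y in val_loader) / len(val_loader)
    if val < best_val:  # keep the epoch with the least validation error
        best_val = val
        best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
model.load_state_dict(best_state)  # the best model, used for testing
```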
Figure 6 displays the architecture of the CNN model \(f\). The model employed a multi-head CNN based on ResNet50 [34], so that the multi-layered input could be processed as multiple RGB-coloured tensors. Four images were set as RGB-coloured input tensors, with the NTL image duplicated across the three colour channels. These inputs were passed through the first convolution layer and concatenated along the channel dimension. The concatenated tensor was convolved with a 1 \(\times\) 1 filter and down-sized by a max pooling layer, which keeps only the maximum value in each window. The pooled tensor was further convolved 48 times with skip connections. The output of all the convolutions was pooled by an average pooling layer, which takes the average in each window, leading to a 1 \(\times\) 1 \(\times\) 2048 tensor. This tensor was flattened into a 2048-dimensional vector and passed through two fully-connected layers, with the final output being a 3-dimensional vector \(\hat{\mathbf{y}}_{it}\) estimating the demographic composition of the sample. The output size of each convolution and pooling layer is described in Figure 6 by the following formula:
\[n_{\text{out}}=\lfloor\frac{n_{\text{in}}+2p-k}{s}\rfloor+1 \tag{14}\]
where \(n_{\text{in}}\) and \(n_{\text{out}}\) denote the input and output sizes, respectively, and \(p\), \(k\), and \(s\) stand for the sizes of the zero-padding, filter, and stride, respectively. The \(\lfloor.\rfloor\) operation denotes taking the floor of a real number. For instance, the output size of the first convolution for Landsat-8/OLI Bands 4, 3, and 2 was \(\lfloor\frac{334+2\times 3-7}{2}\rfloor+1=167\). As shown in Figure 6, all the parameters of this CNN model were transferred from ResNet50 pre-trained on the ImageNet dataset, except for the second convolution layer and the last fully-connected layer. Moreover, batch normalisation was implemented after each of the convolution layers, where each batch in the forward propagation is normalised with its mean and variance so that the training process is stabilised [54]. ReLU was used as the activation function for all the convolution layers after batch normalisation. In addition to batch normalisation, dropout [31] was implemented to avoid overfitting. Dropout is an operation that removes a certain fraction of neurons within a layer during training, so that the CNN model becomes robust to heterogeneity in the dataset, behaving as a form of ensemble learning. However, as a previous study reported [55], the co-existence of dropout and batch normalisation in the convolution layers can worsen the performance of CNN models due to variance shifts. Thus, dropout was applied only to the last two fully-connected layers, where batch normalisation was not implemented in the original ResNet50. Since all the labels were biased towards \(10^{0}\) and their values were always non-negative, the last layer was also passed through ReLU. For the hyperparameters of this machine learning framework, the learning rate \(\alpha\) and the first- and second-moment coefficients \(\beta_{1}\) and \(\beta_{2}\) of Adam were set to 0.001, 0.9, and 0.999, respectively. The dropout ratio was set to 0.25 for both fully-connected layers. As the model training environment, AWS SageMaker Studio was used with an instance of type g4dn.xlarge, consisting of 4 vCPUs, 16 GB of memory, and 1 NVIDIA Tesla T4 GPU [56].
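The wiring of Figure 6 can be approximated by the following PyTorch sketch; since the figure specifies the exact architecture, details such as the hidden width of the first fully-connected layer (512 here) are assumptions made for illustration.

```python
import copy
import torch
import torch.nn as nn
from torchvision import models

class MultiHeadResNet50(nn.Module):
    def __init__(self, n_heads=4, dropout=0.25):
        super().__init__()
        base = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        # One ImageNet-pre-trained 7x7 stem per RGB input image.
        self.stems = nn.ModuleList(
            copy.deepcopy(nn.Sequential(base.conv1, base.bn1, base.relu))
            for _ in range(n_heads))
        # Newly initialised 1x1 "second convolution" merging the heads.
        self.merge = nn.Conv2d(64 * n_heads, 64, kernel_size=1)
        self.maxpool = base.maxpool
        self.body = nn.Sequential(base.layer1, base.layer2,
                                  base.layer3, base.layer4)
        self.avgpool = base.avgpool  # average pooling -> 1 x 1 x 2048
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout), nn.Linear(2048, 512), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(512, 3), nn.ReLU())  # log10 counts >= 0

    def forward(self, xs):  # xs: list of n_heads tensors, each (B, 3, 334, 334)
        feats = [stem(x) for stem, x in zip(self.stems, xs)]
        z = self.merge(torch.cat(feats, dim=1))
        return self.head(self.avgpool(self.body(self.maxpool(z))))
```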
## V Result
As a result of the training and validation process, Figure 7 illustrates the training and validation loss function over 250
Fig. 5: Images and labels from three random samples from the training dataset.
epochs. In the early epochs, the validation loss showed volatility of up to 1.2220. While the training loss steadily decreased throughout the epochs, the validation loss reached a lower bound of about 0.35 at around epoch 50, and little improvement was observed in the following epochs. The validation loss recorded its minimum of 0.3186 at epoch 238, and the model at that epoch was employed as the one with the most generalisability for the testing and dataset generation.
With the trained model chosen above, the testing dataset was used to calculate the testing score in \(R^{2}_{(g)}\), as well as the loss function. To do so, all the images in the testing dataset were passed through the CNN model as forward propagation, and the outputs were used to calculate the loss function as the test error. As seen in Table I, the loss function on the testing dataset was 0.3215. This value was close to the 0.3186 of the validation dataset, while the 0.1032 of the training dataset
Fig. 6: Architecture of ResNet50-based multi-head CNN model.
Fig. 7: Training and validation progress of the CNN model. The grey horizontal and vertical dotted lines indicate the lowest value of the loss function in the validation dataset throughout the epochs and its epoch, respectively. The intersection of the dotted lines indicates the loss function of the validation dataset at epoch 238.
was relatively smaller than the other two. The outputs obtained were matched with the labels in the testing dataset, and \(R^{2}_{(g)}\) was calculated for all the demographic composition groups \(g\). As Table II and Figure 8 illustrate, \(R^{2}_{(g)}\) reached at least 0.8914 for all the demographic composition groups. Although mesh cells with a high population were slightly overestimated, the regression lines were aligned along the diagonal reference lines. Based on the trained CNN model, Figure 9 shows the estimated ratio of the population over 64 years old in 2022, which is not a census year. The mesh cells shown are those where the population was counted in either 2015 or 2020.
## VI Discussion
As population aging progresses in an increasing number of countries and regions, a prompt understanding of demographic composition is of urgent importance. This paper demonstrated the mesh-wise estimation of demographic composition in Japan, one of the countries with the most aged societies in the world, from the population censuses of 2015 and 2020 as labels and satellite images of Landsat-8/OLI and Suomi NPP/VIIRS-DNS as inputs. The ResNet50-based CNN model chosen after the training and validation process was tested on the testing dataset with the metric \(R^{2}_{(g)}\) between original and estimated values for each of the demographic composition groups. Finally, the trained model was used to estimate the demographic composition of Japan in 2022, a non-census year, and the aged population rate was visualised. The result showed a remarkable test score of at least 0.8914 in \(R^{2}_{(g)}\) for all the demographic composition groups, indicating the following three contributions. First, the result displays the effective usage of ResNet50 [34] pre-trained on the ImageNet dataset, a 1,000-object classification task with more than 1.2 million images, for the demographic composition estimation task with only 6,724 training samples. This result corresponds to previous studies (e.g., [19]) in the sense that remote sensing tasks share a similar image recognition domain with the ImageNet object classification task. This performance was seen even with the introduction of multi-head inputs in the first convolution layer, where multiple RGB-colour images were accommodated. Second, this paper demonstrates the feasibility of estimating demographic composition, rather than total population, from satellite images, so that the progress of population aging can be observed without conducting formal surveys such as a census. As the values of the loss function for both the validation and testing datasets did not deviate much from around 0.32, it is reasonable to conclude that the model generalises well enough to estimate other geographical locations not included in this dataset. Lastly, the proposed CNN model only employed publicly available satellite images as inputs, ensuring generalisability to any other geographic location. Since Landsat-8/OLI and Suomi NPP/VIIRS-DNS cover the surface of the earth holistically and periodically, the trained CNN model could be further utilised in other countries and in any year. Considering the performance and data availability, the proposed model could help fill the gap between census years not only in Japan but also in other countries and regions for the purposes of urban planning policymaking, especially where rigorous population surveys are limited due to a lack of financial and human resources.
However, there are certain limitations to the proposed CNN model in terms of generalisability for further usage. First and foremost, biases in the dataset could result in overfitting on other datasets beyond this paper. In terms of spatial biases, the proposed CNN model was trained only on satellite images within the realm of the population census held by the Japanese government. Since this paper did not conduct testing on datasets from other countries and regions, the generalisability of the trained CNN model remains uncertain where the morphological landscape in the satellite images differs from that of Japan. As for temporal biases, the dataset was obtained from the years 2015 and 2020, with each sample treated independently, without explicitly considering the temporal relationship at the same geographical location. Also, the satellite images were pre-processed by annually averaging cloud-free pixel values, which discards temporal transitions within a sample year. From the model architecture point of view, one way to enhance the proposed CNN model to overcome these problems is
Fig. 8: Prediction performance of the CNN model in the testing dataset by demographic composition groups. The real lines were drawn from the linear regression of the predicted values on the original ones. The dotted lines were drawn as references which indicate the symmetric normal lines which could have been realised if all the samples would have been predicted precisely.
to implement a spatio-temporal CNN with the introduction of Long Short-Term Memory (LSTM) blocks [57] or the Attention mechanism [58]. Moreover, rather simply and naturally, data augmentation by expanding the geographical locations and sample years could improve the proposed CNN model. Another limitation is the precision of the proposed CNN model. Although the test score reached 0.8914 in \(R_{(g)}^{2}\) for all the demographic composition groups \(g\) in the testing dataset, Figure 8 indicates heteroscedasticity in the correlation between the original and estimated values. In particular, samples with small populations tended to be overestimated, with poorer precision and higher variance than the others. This could be due to the spatial resolution of the satellite images in the input, where 30m for Landsat-8/OLI and 750m for Suomi NPP/VIIRS-DNS are not granular enough to detect the shapes of individual buildings in the extent. Not surprisingly, replacing Landsat-8/OLI and Suomi NPP/VIIRS-DNS with finer satellite images and stacking other data sources, such as spatially interpolated POI data, would mitigate this problem. However, this approach would limit data availability for geographical locations where such data sources cannot be obtained. One potential method that does not rely on new data sources is to synthesise satellite images with finer spatial resolution using a Generative Adversarial Network (GAN) for super-resolution [59]. In future studies, the proposed CNN model could be applied to other countries and regions with super-resolution to enhance generalisability and precision for more trustworthy estimation.
## VII Conclusion
This paper proposed a multi-head CNN model based on pre-trained ResNet50 to estimate the demographic composition of Japan, one of the most aged societies, from a dataset of satellite images from Landsat-8/OLI (daytime) and Suomi NPP/VIIRS-DNS (night-time) in the years 2015 and 2020. The trained model showed at least 0.8914 in \(R_{(g)}^{2}\) on the testing dataset for all the demographic composition groups \(g\), consisting of people below 14 years old, between 15 and 64 years old, and over 64 years old. The trained model estimated the demographic composition in 2022, filling in a dataset for one of the non-census years of Japan. Since the proposed CNN model requires as inputs only publicly available satellite images, obtainable at any location on the earth, it holds generalisability to other countries and regions in terms of data availability. Although further studies are expected to evaluate the performance in different geographical locations and to enhance the coarse spatial resolution of the inputs, this paper contributes to policymaking for solving problems of population aging without conducting a rigorous population survey such as a census.
## Acknowledgment
The author would like to thank Dr. Elisabetta Pietrostefani, his supervisor in his Master's programme at the London School of Economics and Political Science, for her advice
Fig. 9: Estimated rate of aged population in Japan as of 2022. JGD2011 was applied for the coordinate reference system. The map is based on OpenStreetMap.
on this paper as the author's dissertation, including the data collection and visualisation methods.
|
2303.05151 | Provable Data Subset Selection For Efficient Neural Network Training | Radial basis function neural networks (\emph{RBFNN}) are {well-known} for
their capability to approximate any continuous function on a closed bounded set
with arbitrary precision given enough hidden neurons. In this paper, we
introduce the first algorithm to construct coresets for \emph{RBFNNs}, i.e.,
small weighted subsets that approximate the loss of the input data on any
radial basis function network and thus approximate any function defined by an
\emph{RBFNN} on the larger input data. In particular, we construct coresets for
radial basis and Laplacian loss functions. We then use our coresets to obtain a
provable data subset selection algorithm for training deep neural networks.
Since our coresets approximate every function, they also approximate the
gradient of each weight in a neural network, which is a particular function on
the input. We then perform empirical evaluations on function approximation and
dataset subset selection on popular network architectures and data sets,
demonstrating the efficacy and accuracy of our coreset construction. | Murad Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman | 2023-03-09T10:08:34Z | http://arxiv.org/abs/2303.05151v1 | # Provable Data Subset Selection For Efficient Neural Network Training
###### Abstract
Radial basis function neural networks (_RBFNN_) are well-known for their capability to approximate any continuous function on a closed bounded set with arbitrary precision given enough hidden neurons. In this paper, we introduce the first algorithm to construct coresets for _RBFNNs_, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network and thus approximate any function defined by an _RBFNN_ on the larger input data. In particular, we construct coresets for radial basis and Laplacian loss functions. We then use our coresets to obtain a provable data subset selection algorithm for training deep neural networks. Since our coresets approximate every function, they also approximate the gradient of each weight in a neural network, which is a particular function on the input. We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets, demonstrating the efficacy and accuracy of our coreset construction.
## 1 Introduction
Radial basis function neural networks (_RBFNN_) are artificial neural networks that generally have three layers: an input layer, a hidden layer with a radial basis function (RBF) as an activation function, and a linear output layer. In this paper, the input layer receives a \(d\)-dimensional vector \(x\in\mathbb{R}^{d}\) of real numbers. The hidden layer then consists of nodes representing _RBFs_, each computing \(\rho(\|x-c_{i}\|_{2}):=\exp\left(-\left\|x-c_{i}\right\|_{2}^{2}\right)\), where \(c_{i}\in\mathbb{R}^{d}\) is the center vector of neuron \(i\), for, say, \(N\) neurons in the hidden layer. The linear output layer then computes \(\sum_{i=1}^{N}\alpha_{i}\rho(\|x-c_{i}\|_{2})\), where \(\alpha_{i}\) is the weight of neuron \(i\) in the linear output neuron. Therefore, _RBFNNs_ are feed-forward neural networks, because the edges between the nodes do not form a cycle, and they enjoy advantages such as simplicity of analysis, faster training time, and interpretability, compared to alternatives such as convolutional neural networks (_CNNs_) and even multi-layer perceptrons (_MLPs_) (Padmavati, 2011).
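As a concrete rendering of this forward pass (a minimal NumPy sketch of the definitions above, not code from the paper):

```python
import numpy as np

def rbfnn_forward(X, centers, alphas):
    """Evaluate sum_i alpha_i * exp(-||x - c_i||_2^2) for each row x of X."""
    # X: (n, d) inputs; centers: (N, d) RBF centers; alphas: (N,) output weights
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (n, N)
    return np.exp(-sq_dists) @ alphas  # (n,) network outputs
```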
**Function approximation via _RBFNNs_.** _RBFNNs_ are universal approximators in the sense that an _RBFNN_ with a sufficient number of hidden neurons (large \(N\)) can approximate any continuous function on a closed, bounded subset of \(\mathbb{R}^{d}\) with arbitrary precision (Park & Sandberg, 1991), i.e., given a sufficiently large input set \(P\) of \(n\) points in \(\mathbb{R}^{d}\) and its corresponding label function \(y:P\to\mathbb{R}\), an _RBFNN_ can be trained to approximate the function \(y\). Therefore, _RBFNNs_ are commonly used across a wide range of applications, such as function approximation (Park & Sandberg, 1991; 1993; Lu et al., 1997), time series prediction (Whitehead & Choate, 1996; Leung et al., 2001; Harpham & Dawson, 2006), classification (Leonard & Kramer, 1991; Wuxing et al., 2004; Babu & Suresh, 2012), and system control (Yu et al., 2011; Liu, 2013), due to their faster learning speed.
For a given RBFNN size, i.e., the number of neurons in the hidden layer, and an input set, the aim of this paper is to compute a small weighted subset that approximates the loss of the input data on any radial basis function neural network of this size and thus approximates any function defined (approximated) by such an _RBFNN_ on the big input data. This small weighted subset is called a coreset.
**Coresets.** Consider a prototypical machine/deep learning problem in which we are given an input set \(P\subseteq\mathbb{R}^{d}\) of \(n\) points, its corresponding weights function \(w:P\to\mathbb{R}\), a set of queries \(X\) (a set of candidate solutions for the involved optimization problem), and a loss function \(f:P\times X\to[0,\infty)\). The tuple \((P,w,X,f)\) is called the _query space_, and it defines the optimization problem at hand, where usually the goal is to find \(x^{*}\in\operatorname*{arg\,min}_{x\in X}\sum_{p\in P}w(p)f(p,x)\). Given a query space \((P,w,X,f)\), a coreset is a small weighted subset of the input \(P\) that can provably approximate the cost of every query \(x\in X\) on \(P\) (Feldman, 2020; Jubran et al., 2021); see Definition 2.1. In particular, a coreset for an _RBFNN_ can approximate the cost of an _RBFNN_ on the original training data for every set of centers and weights that define the _RBFNN_ (see Section 4). Hence, the
coreset also approximates the centers and weights that form the optimal solution of the _RBFNN_ (the solution that approximates the desired function). Thus a coreset for an _RBFNN_ would facilitate training for function approximation without reading the full training data; more generally, a strong coreset for an RBFNN with enough hidden neurons gives a strong coreset for any function that can be approximated to some precision by the RBFNN.
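For intuition, the standard sensitivity-based importance-sampling template behind such constructions is sketched below; the sensitivities are taken as given, and this illustrates only the generic template, not the paper's Algorithm 1.

```python
import numpy as np

def importance_sample_coreset(P, w, sensitivities, m, seed=0):
    """Sample m points with probability proportional to sensitivity, reweighting
    so the weighted coreset sum is an unbiased estimator of sum_p w(p) f(p, x).
    P, w, sensitivities are numpy arrays over the n input points."""
    rng = np.random.default_rng(seed)
    probs = sensitivities / sensitivities.sum()
    idx = rng.choice(len(P), size=m, p=probs)
    new_w = w[idx] / (m * probs[idx])
    return P[idx], new_w  # coreset points and their new weights
```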
**To this end, in this paper, we aim to provide a coreset for _RBFNNs_, and thus to provably approximate (provide a coreset for) any function that can be approximated by a given _RBFNN_.**
Furthermore, we can use this small weighted subset (coreset) to suggest a provable data subset selection algorithm for training deep neural networks efficiently (on the small subset). Since our coreset approximates every function that can be approximated by an RBFNN of this size, it also approximates the gradient of each weight in a neural network (provided this gradient, viewed as a function of the input, can be approximated by the RBFNN).
**Training neural networks on data subset.** Although deep learning has become widely successful with the increasing availability of data (Krizhevsky et al., 2017; Devlin et al., 2019), modern deep learning systems have correspondingly increased in their computational resources, resulting in significantly larger training times, financial costs (Sharir et al., 2020), energy costs (Strubell et al., 2019), and carbon footprints (Strubell et al., 2019; Schwartz et al., 2020). Data subset selection (coresets) allows for efficient learning at several levels (Wei et al., 2014; Kaushal et al., 2019; Coleman et al., 2019; Har-Peled & Mazumdar, 2004; Clarkson, 2010). By employing a significantly smaller subset of the big dataset, (i) we enable learning on relatively low resource computing settings without requiring a huge number of GPU and CPU servers, (ii) we may greatly optimize the end-to-end turnaround time, which frequently necessitates many training runs for hyper-parameter tweaking, and (iii) because a large number of deep learning trials must be done in practice, we allow for considerable reductions in deep learning energy usage and CO\({}_{2}\) emissions (Strubell et al., 2019). Multiple efforts have recently been made to improve the efficiency of machine learning models using data subset selection (Mirzasoleiman et al., 2020; Killamsetty et al., 2021;a). However, existing techniques either (i) employ proxy functions to choose data points, (ii) are specialized to specific machine learning models, (iii) use approximations of parameters such as gradient error or generalization errors, (iv) lack provable guarantees on the approximation error, or (v) require an inefficient gradient computation of the whole data. Most importantly, all of these methods are model/network dependent, and thus computing the desired subset of the data after several training epochs (for the same network) takes a lot of time and must be repeated each time the network changes.
**To this end, in this paper, we introduce a provable and efficient model-independent subset selection algorithm for training neural networks. This will allow us to compute a subset of the training data, that is guaranteed to be a coreset for training multiple neural network architectures/models.**
### Our Contributions
In this paper, we introduce a coreset that approximates any function that can be represented by an _RBFNN_ architecture. Specifically:
1. We provide a coreset for the _RBF_ and Laplacian cost functions; see Algorithm 1, and Section 3.1.2.
2. We generate a coreset for any _RBFNN_ model, in turn, approximating any function that can be represented by the _RBFNN_; see Figure 1 for illustration, and Section 4 for more details.
3. We then exploit the properties of _RBFNNs_, to approximate the gradients of any deep neural networks (_DNNs_), leading towards provable subset selection for learning/training _DNNs_. We also show the advantages of using our coreset against previous subset selection techniques; see Section 5 and Section 6.
4. Finally, we provide an open-source code implementation of our algorithm for reproducing our results and future research (ope, 2023).
### Related Work
A long line of active work has studied efficient coreset constructions for various problems in computational geometry and machine learning, such as \(k\)-means and \(k\)-median clustering (Har-Peled & Mazumdar, 2004; Chen, 2009; Braverman et al., 2016; Huang & Vishnoi, 2020; Jubran et al., 2020; Cohen-Addad et al., 2022), regression (Dasgupta et al., 2008; Chhaya et al., 2020; Tolochinsky et al., 2022; Meyer et al., 2022; Maalouf et al., 2019, 2022), low-rank approximation (Cohen et al., 2017; Braverman et al., 2020; Maalouf et al., 2020, 2021), volume maximization (Indyk et al., 2020; Mahabadi et al., 2020; Woodruff & Yasuda, 2022), projective clustering (Feldman et al., 2020; Tukan et al., 2022), support vector machines (SVMs) (Clarkson, 2010; Tukan et al., 2021; Maalouf et al., 2022), Bayesian inference (Campbell & Broderick, 2018), and sine wave fitting (Maalouf et al., 2022). (Baykal et al., 2022) suggested coreset-based algorithms for compressing the parameters of a trained fully-connected neural network by using sensitivity sampling on the weights of neurons after training, though without pruning full neurons. (Mussay et al., 2020;
Liebenwein et al., 2019; Tukan et al., 2022a) sidestepped this issue by identifying the neurons that can be compressed regardless of their weights, due to the choice of the activation functions, thereby achieving coreset-based algorithms for neural pruning.
These approaches use coresets to achieve an orthogonal goal to data subset selection in the context of deep learning - they greatly reduce the number of neurons in the network while we greatly reduce the number of samples in the dataset that need to be read by the neural network. Correspondingly, we reduce the effective size of the data that needs to be stored or even measured prior to the training stage. Moreover, we remark that even if the number of inputs to the input layer was greatly reduced by these neural compression approaches, the union of the inputs can still consist of the entire input dataset and so these approaches generally cannot guarantee any form of data distillation.
Toward the goal of data subset selection, (Mirzasoleiman et al., 2020a;b) introduced algorithms for selecting representative subsets of the training data to accurately estimate the full gradient for tasks in both deep learning and classical machine learning models such as logistic regression and these approaches were subsequently refined by (Killamsetty et al., 2021a;b). Data distillation has also received a lot of attention in image classification (Bohdal et al., 2020; Nguyen et al., 2021; Dosovitskiy et al., 2021), natural language processing (Devlin et al., 2019; Brown et al., 2020), and federated learning (Ozkara et al., 2021; Zhu et al., 2021).
**On coresets for any function.** To the best of our knowledge, the only other coresets eligible for handling a wide family of functions without the need to devise a problem-dependent sensitivity are (Claici & Solomon, 2018; Claici et al., 2018). While such coresets are interesting and related, (i) both works provide coreset constructions resulting in an additive approximation, and (ii) their theoretical applications are rather restrictive, as they mainly handle families of functions that are \(k\)-Lipschitz (functions with gradient bounded by \(k\)), have a bounded Dual-Sobolev distance, or satisfy the properties of a reproducing kernel Hilbert space (RKHS). In addition, the running time in the worst-case scenario is not practical, i.e., exponential in the dimension of the points.
On the other hand, under mild assumptions, our coreset supports any function that can be approximated by an RBFNN (Wu et al., 2012), in time that is polynomial in the dimension of the points and linear in the number of nonzero entries of the points (Clarkson & Woodruff, 2017).
## 2 Preliminaries
For an integer \(n>0\), we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). A weighted set of points is a pair \((P,w)\), where \(P\subseteq\mathbb{R}^{d}\) is a set of points and \(w:P\rightarrow[0,\infty)\) is a weight function.
We now formally provide the notion of \(\varepsilon\)-coreset for the _RBF_ loss. This will be later extended to a coreset for _RBFNN_.
**Definition 2.1** (_RBF_\(\varepsilon\)-coreset).: Let \((P,w)\) be a weighted set of \(n\) points in \(\mathbb{R}^{d}\), \(X\subseteq\mathbb{R}^{d}\) be a set of queries, and \(\varepsilon\in(0,1)\). For every \(x\in X\) and \(p\in P\) let \(f(p,x):=\exp\left(-\left\|p-x\right\|_{2}^{2}\right)\) denote the _RBF_ loss function between \(p\) and \(x\). An \(\varepsilon\)-coreset for \((P,w)\) with respect to \(f\), is a pair \((S,v)\) where \(S\subseteq P\), \(v:S\rightarrow(0,\infty)\) is a weight function, such that for every \(x\in X\), \(\left|1-\frac{\sum_{q\in S}v(q)f(q,x)}{\sum_{p\in P}w(p)f(p,x)}\right|\leq\varepsilon\).
We say the _RBF_ coreset is _strong_ if it guarantees correctness over all \(x\in X\). Otherwise, we say the coreset is _weak_ if it only provides guarantees for all \(x\) only in some subset of \(X\).
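To make Definition 2.1 concrete, the following minimal numpy sketch empirically checks the coreset guarantee for a candidate weighted subset over randomly drawn queries; the uniform subsample and the Gaussian query distribution here are our own illustrative choices, not the construction of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))           # input points
w = np.ones(len(P))                      # unit weights

def rbf_cost(points, weights, x):
    # weighted RBF cost: sum_p w(p) * exp(-||p - x||^2)
    return np.sum(weights * np.exp(-np.sum((points - x) ** 2, axis=1)))

# a candidate coreset: here just a uniform subsample, reweighted to
# preserve the total weight (a real coreset would use sensitivities)
idx = rng.choice(len(P), size=100, replace=False)
S, v = P[idx], np.full(100, len(P) / 100.0)

# empirical approximation ratio over random queries
worst = 0.0
for _ in range(200):
    x = rng.normal(size=2)
    ratio = rbf_cost(S, v, x) / rbf_cost(P, w, x)
    worst = max(worst, abs(1.0 - ratio))
print(f"worst empirical relative error: {worst:.3f}")
```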
**Sensitivity sampling.** To compute our _RBF_\(\varepsilon\)-coreset, we utilize the sensitivity sampling framework (Braverman et al., 2016). In short, the sensitivity of a point \(p\in P\) corresponds to the "importance" of this point with respect to the other points and the problem at hand. In our context (with respect to the _RBF_ loss), the sensitivity is defined as \(s(p)=\sup_{x\in X}\frac{w(p)f(p,x)}{\sum_{q\in P}w(q)f(q,x)}\), where the supremum is taken over all \(x\) for which the denominator is nonzero. Once we bound the sensitivities for every \(p\in P\), we can sample points from \(P\) according to their corresponding sensitivity bounds, and re-weight the sampled points to obtain an RBF \(\varepsilon\)-coreset as in Definition 2.1. The size of the sample (coreset) is proportional to the sum of these bounds - the tighter (smaller) these bounds, the smaller the coreset size; we refer the reader to Section A in the appendix.

Figure 1: Our contribution in a nutshell.
**Sensitivity bounding.** We now present our main tool for bounding the sensitivity of each input point with respect to the _RBF_ and _Laplacian_ loss functions.
**Definition 2.2** (Special case of Definition 4 (Tukan et al., 2020)).: Let \(\left(P,w,\mathbb{R}^{d},f\right)\) be a query space (see Definition A.1) where for every \(p\in P\) and \(x\in\mathbb{R}^{d}\), \(f(p,x)=\left|p^{T}x\right|\). Let \(D\in[0,\infty)^{d\times d}\) be a diagonal matrix of full rank and let \(V\in\mathbb{R}^{d\times d}\) be an orthogonal matrix, such that for every \(x\in\mathbb{R}^{d}\), \(\left\|DV^{T}x\right\|_{2}\leq\sum\limits_{p\in P}w(p)\left|p^{T}x\right|\leq \sqrt{d}\left\|DV^{T}x\right\|_{2}.\) Define \(U:P\rightarrow\mathbb{R}^{d}\) such that \(U(p)=p\left(DV^{T}\right)^{-1}\) for every \(p\in P\). The tuple \((U,D,V)\) is the \(\left\|\cdot\right\|_{1}\)-SVD of \(P\).
Using the above tool, the sensitivity with respect to the RBF loss function can be bounded using the following.
**Lemma 2.3** (Special case of Lemma 35, (Tukan et al., 2020)).: _Let \(\left(P,w,\mathbb{R}^{d},f\right)\) be query space as in Definition A.1 where for every \(p\in P\) and \(x\in\mathbb{R}^{d}\), \(f(p,x)=\left|p^{T}x\right|\). Let \((U,D,V)\) be the \(\left\|\cdot\right\|_{1}\)-SVD of \((P,w)\) with respect to \(\left|\cdot\right|\) (see Definition 2.2). Then: (i) for every \(p\in P\), the sensitivity of \(p\) with respect to the query space \((P,w,\mathbb{R}^{d},\left|\cdot\right|)\) is bounded by \(s(p)\leq\left\|U(p)\right\|_{1}\), and (ii) the total sensitivity is bounded by \(\sum\limits_{p\in P}s(p)\leq d^{1.5}\)._
## 3 Method
In this section, we provide coresets for the Gaussian (_RBF_) and Laplacian loss functions; the constructions for both are detailed in Section 3.1.2.
**Overview of Algorithm 1.** Algorithm 1 receives as input, a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\), a weight function \(w:P\rightarrow[0,\infty)\), a bound \(R\) on the radius of the ball containing query space \(X\), and a sample size \(m>0\). If the sample size \(m\) is sufficiently large, then Algorithm 1 outputs a pair \((S,v)\) that is an \(\varepsilon\)-coreset for RBF cost function; see Theorem 3.2.
First, \(d^{\prime}\) is set to be the VC dimension of the quadruple \((P,w,X,\rho\left(\cdot\right))\); see Definition A.2. The heart of our algorithm lies in formalizing the RBF loss function as a variant of the regression problem, specifically, a variant of the \(\ell_{1}\)-regression problem. The conversion requires manipulation of the input data, as presented at Line 2. We then compute the \(f\)-SVD of the new input data with respect to the \(\ell_{1}\)-regression problem, followed by bounding the sensitivity of the points (Lines 3-5). Now we have all the needed ingredients to obtain an \(\varepsilon\)-coreset (see Theorem A.3), i.e., we sample \(m\) points i.i.d. from \(P\) based on their sensitivity bounds (see Line 9), followed by assigning a new weight to every sampled point at Line 10.
```
Input: A set \(P\subseteq\mathbb{R}^{d}\) of \(n\) points, a weight function \(w:P\rightarrow[0,\infty)\), a bound \(R\) on the radius of the query space \(X\), and a sample size \(m\geq 1\)
Output: A pair \((S,v)\) that satisfies Theorem 3.2
1: Set \(d^{\prime}:=\) the VC dimension of the quadruple \((P,w,X,\rho\left(\cdot\right))\)  {see Definition A.2}
2: \(P^{\prime}:=\left\{q_{p}=\left[\left\|p\right\|_{2}^{2},-2p^{T},1\right]^{T}\,\middle|\,p\in P\right\}\)
3: \((U,D,V):=\) the \(f\)-SVD of \((P^{\prime},w,\left|\cdot\right|)\)  {see Definition 2.2}
4: for every \(p\in P\) do
5:   \(s(p):=e^{12R^{2}}\left(1+8R^{2}\right)\left(\frac{w(p)}{\sum_{q\in P}w(q)}+w(p)\left\|U(q_{p})\right\|_{1}\right)\)  {bound on the sensitivity of \(p\), as in Lemma B.1 in the appendix}
6: end for
7: \(t:=\sum_{p\in P}s(p)\)
8: Set \(\tilde{c}\geq 1\) to be a sufficiently large constant  {can be determined from Theorem 3.2}
9: Pick an i.i.d. sample \(S\) of \(m\) points from \(P\), where each \(p\in P\) is sampled with probability \(\frac{s(p)}{t}\)
10: Set \(v:\mathbb{R}^{d}\rightarrow[0,\infty]\) to be a weight function such that for every \(q\in S\), \(v(q)=\frac{t}{s(q)\cdot m}\); return \((S,v)\)
```

**Algorithm 1** Coreset\((P,w,R,m)\)
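For concreteness, the following Python sketch mirrors the steps of Algorithm 1. Since computing the exact \(\left\|\cdot\right\|_{1}\)-SVD of Definition 2.2 is involved, we substitute the ordinary \(\ell_{2}\) SVD as a computable stand-in (an assumption on our part), so the sensitivity values below are illustrative surrogates rather than the exact bounds of Lemma B.1.

```python
import numpy as np

def coreset(P, w, R, m, rng=np.random.default_rng(0)):
    """Sketch of Algorithm 1 (assumes ||p||_2 <= 1 for all p in P)."""
    n, d = P.shape
    # Line 2: lift every p to q_p = [||p||_2^2, -2 p^T, 1]
    Pp = np.hstack([np.sum(P ** 2, axis=1, keepdims=True), -2 * P,
                    np.ones((n, 1))])
    # Lines 3-5: the ordinary (l2) SVD is used here as a computable
    # stand-in for the ||.||_1-SVD of Definition 2.2
    U, _, _ = np.linalg.svd(Pp * w[:, None], full_matrices=False)
    s = np.exp(12 * R ** 2) * (1 + 8 * R ** 2) * (
        w / w.sum() + w * np.abs(U).sum(axis=1))
    # Lines 7-10: sensitivity sampling and reweighting
    t = s.sum()
    idx = rng.choice(n, size=m, p=s / t)
    return P[idx], t / (s[idx] * m)

rng = np.random.default_rng(1)
P = rng.normal(size=(500, 3))
P /= np.maximum(1.0, np.linalg.norm(P, axis=1, keepdims=True))  # unit ball
S, v = coreset(P, np.ones(500), R=1.0, m=50)
```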
### Analysis
#### 3.1.1 Lower bound on the coreset size for the Gaussian loss function
We first show the lower bound on the size of coresets, to emphasize the need for assumptions on the data and the query space.
**Theorem 3.1**.: _There exists a set of \(n\) points \(P\subseteq\mathbb{R}^{d}\) such that \(\sum_{p\in P}s(p)=\Omega(n)\)._
Proof.: Let \(d\geq 3\) and let \(P\subseteq\mathbb{R}^{d}\) be a set of \(n\) points distributed evenly on a circle of radius \(\sqrt{\frac{\ln n}{2\left(1-\cos\left(\frac{2\pi}{n}\right)\right)}}\). In other words, by the law of cosines, every \(p\in P\) satisfies \(\sqrt{\ln n}=\min_{q\in P\setminus\left\{p\right\}}\left\|p-q\right\|_{2}\); see Figure C.
Observe that for every \(p\in P\),
\[\begin{split}& s(p):=\max_{x\in\mathbb{R}^{d}}\frac{e^{-\left\|p-x \right\|_{2}^{2}}}{\sum\limits_{q\in P}e^{-\left\|q-x\right\|_{2}^{2}}}\geq \frac{e^{-\left\|p-p\right\|_{2}^{2}}}{\sum\limits_{q\in P}e^{-\left\|p-q \right\|_{2}^{2}}}\\ &=\frac{1}{1+\sum\limits_{q\in P\setminus\{p\}}e^{-\left\|p-q \right\|_{2}^{2}}}\geq\frac{1}{1+\sum\limits_{q\in P\setminus\{p\}}\frac{1}{n} }\geq\frac{1}{2},\end{split} \tag{1}\]
where the first equality holds by definition of the sensitivity, the first inequality and second equality hold trivially, the second inequality follows from the fact that \(\sqrt{\ln n}\leq\min_{q\in P\setminus\{p\}}\left\|p-q\right\|_{2}\), and finally the last inequality holds since \(\sum_{q\in P\setminus\{p\}}\frac{1}{n}\leq 1\). Summing over all \(p\in P\) yields \(\sum_{p\in P}s(p)\geq n/2=\Omega(n)\).
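As a quick numeric sanity check of this construction (using our reconstruction of the radius above and a circle embedded in \(\mathbb{R}^{3}\)), one can verify that every sensitivity lower bound from equation (1) is at least \(1/2\):

```python
import numpy as np

n = 200
theta = 2 * np.pi * np.arange(n) / n
r = np.sqrt(np.log(n) / (2 * (1 - np.cos(2 * np.pi / n))))
# n points evenly spaced on a circle in R^3 with nearest-neighbour
# distance exactly sqrt(ln n)
P = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros(n)], axis=1)

D2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
# lower bound on s(p) from equation (1); the sum includes the q = p
# term, which equals 1
lower = 1.0 / np.exp(-D2).sum(axis=1)
print(lower.min() >= 0.5)   # True: each sensitivity is at least 1/2
print(lower.sum())          # grows like n/2, i.e. Omega(n)
```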
#### 3.1.2 Reasonable assumptions lead to existence of coresets
Unfortunately, it is not immediately straightforward to bound the sensitivities of either the Gaussian loss function or the Laplacian loss function. Therefore, we first require the following structural properties in order to relate the Gaussian and Laplacian loss functions to more manageable quantities. We shall ultimately relate the function \(e^{-\left|p^{T}x\right|}\) to both the Gaussian and Laplacian loss functions. Thus, we first relate the function \(e^{-\left|p^{T}x\right|}\) to the function \(\left|p^{T}x\right|+1\).
Let \(p\in\mathbb{R}^{d}\) such that \(\left\|p\right\|_{2}\leq 1\), and let \(R>0\) be a positive real number. Then for every \(x\in\left\{x\in\mathbb{R}^{d}\big{|}\|x\|_{2}\leq R\right\}\), \(\frac{1}{e^{R}(1+R)}\left(1+\left|p^{T}x\right|\right)\leq e^{-\left|p^{T}x\right|}\leq\left|p^{T}x\right|+1\).
In what follows, we provide the analysis of the coreset construction for the RBF and Laplacian loss functions, considering an input set of points lying in the unit ball. We refer the reader to the supplementary material for the generalization of our approaches to general input sets of points.
**Theorem 3.2** (Coreset for _RBF_).: _Let \(R\geq 1\) be a positive real number, \(X=\left\{x\in\mathbb{R}^{d}\big{|}\|x\|_{2}\leq R\right\}\), and let \(\varepsilon,\delta\in(0,1)\). Let \((P,w,X,f)\) be a query space as in Definition A.1 such that every \(p\in P\) satisfies \(\left\|p\right\|_{2}\leq 1\). For every \(x\in X\) and \(p\in P\), let \(f(p,x):=\rho\left(\left\|p-x\right\|_{2}\right)\). Let \((S,v)\) be the output of a call to \(\textsc{Coreset}(P,w,R,m)\), where \(S\subseteq P\) and \(v:S\to[0,\infty)\). Then \((S,v)\) is an \(\varepsilon\)-coreset of \((P,w)\) with probability at least \(1-\delta\), if \(m=O\left(\frac{e^{12R^{2}}R^{2}d^{1.5}}{\varepsilon^{2}}\left(R^{2}+\log d+\log\frac{1}{\delta}\right)\right)\)._
**Coreset for the Laplacian loss function.** In what follows, we provide a coreset for the Laplacian loss function. Intuitively speaking, leveraging the properties of the Laplacian loss function, we were able to construct a coreset that holds for every vector \(x\in\mathbb{R}^{d}\), unlike the _RBF_ case where the coreset holds only for a ball of radius \(R\). We emphasize that this is because the Laplacian loss function is less sensitive than the _RBF_.
**Theorem 3.3** (Coreset for the Laplacian loss function).: _Let \(\left(P,w,\mathbb{R}^{d},f\right)\) be a query space as in Definition A.1 such that every \(p\in P\) satisfies \(\left\|p\right\|_{2}\leq 1\). For \(x\in\mathbb{R}^{d}\) and \(p\in P\), let \(f(p,x):=e^{-\left\|p-x\right\|_{2}}\). Let \(\varepsilon,\delta\in(0,1)\). Then there exists an algorithm which, given \(P,w,\varepsilon,\delta\), returns a weighted set \((S,v)\), where \(S\subseteq P\) is of size \(O\left(\frac{\sqrt{\ln d+1.5}}{\varepsilon^{2}}\left(\log n+\log d+\log\frac{1}{\delta}\right)\right)\) and \(v:S\rightarrow[0,\infty)\) is a weight function, such that \((S,v)\) is an \(\varepsilon\)-coreset of \((P,w)\) with probability at least \(1-\delta\)._
## 4 Radial Basis Function Networks
In this section, we consider coresets for _RBFNNs_. Consider an _RBFNN_ with \(L\) neurons in the hidden layer and a single output neuron. First note that the hidden layer uses radial basis functions as activation functions, so that the output is a scalar function of the input layer, \(\phi:\mathbb{R}^{d}\rightarrow\mathbb{R}\), defined by \(\phi(x)=\sum_{i=1}^{L}\alpha_{i}\rho(\|x-c^{(i)}\|_{2})\), where \(c^{(i)}\in\mathbb{R}^{d}\) for each \(i\in[L]\).
For an input dataset \(P\) and a corresponding desired output function \(y:P\rightarrow\mathbb{R}\), RBFNNs aim to minimize \(\sum\limits_{p\in P}\left(y(p)-\sum_{i=1}^{L}\alpha_{i}e^{-\left\|p-c^{(i)}\right\|_{2}^{2}}\right)^{2}.\) Expanding the cost function, we obtain that RBFNNs aim to minimize
\[\begin{split}\sum\limits_{p\in P}y(p)^{2}&-2\sum \limits_{i=1}^{L}\alpha_{i}\overbrace{\left(\sum\limits_{p\in P}y(p)e^{- \left\|p-c^{(i)}\right\|_{2}^{2}}\right)}^{\alpha}\\ &+\underbrace{\sum\limits_{p\in P}\left(\sum\limits_{i=1}^{L} \alpha_{i}e^{-\left\|p-c^{(i)}\right\|_{2}^{2}}\right)^{2}}_{\beta}.\end{split} \tag{2}\]
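Note that for fixed centers \(c^{(i)}\), the objective above is an ordinary linear least-squares problem in the coefficients \(\alpha\). The toy sketch below (with our own synthetic data and randomly placed centers, not the paper's experiments) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(400, 2))          # training inputs
y = np.exp(-np.sum(P ** 2, axis=1))            # target outputs y(p)

# an RBFNN with L fixed centers: phi(x) = sum_i alpha_i exp(-||x - c_i||^2)
L = 20
C = rng.uniform(-1, 1, size=(L, 2))            # centers c^(i)
Phi = np.exp(-np.sum((P[:, None, :] - C[None, :, :]) ** 2, axis=-1))

# minimizing sum_p (y(p) - phi(p))^2 over alpha is linear least squares
alpha, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training MSE:", np.mean((Phi @ alpha - y) ** 2))
```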
**Bounding the \(\alpha\) term in equation 2.** We first define for every \(x\in\mathbb{R}^{d}\):
\[\begin{split}\phi^{+}(x)&=\sum\limits_{p\in P,y(p)> 0}y(p)e^{-\left\|p-x\right\|_{2}^{2}}\\ \phi^{-}(x)&=\sum\limits_{p\in P,y(p)<0}\left|y(p) \right|e^{-\left\|p-x\right\|_{2}^{2}}.\end{split}\]
Observe that \(\sum_{p\in P}y(p)\rho(\|p-c^{(i)}\|_{2})=\phi^{+}\left(c^{(i)}\right)-\phi^{-} \left(c^{(i)}\right)\). Thus the \(\alpha\) term in equation 2 can be approximated using the following.
**Theorem 4.1**.: _There exists an algorithm that samples \(O\left(\frac{e^{8R^{2}}R^{2}d^{1.5}}{\varepsilon^{2}}\left(R^{2}+\log d+\log \frac{2}{\delta}\right)\right)\) points to form weighted sets \((S_{1},w_{1})\) and \((S_{2},w_{2})\) such that with probability at least \(1-2\delta\),_
\[\frac{\left|\sum\limits_{p\in P}y(p)\phi(p)-\left(\sum\limits_{\begin{subarray}{c}i\in[L]\\ \alpha_{i}>0\end{subarray}}\alpha_{i}\gamma_{S_{1}}+\sum\limits_{\begin{subarray}{c}j\in[L]\\ \alpha_{j}<0\end{subarray}}\alpha_{j}\gamma_{S_{2}}\right)\right|}{\sum\limits_{i\in[L]}\left|\alpha_{i}\right|\left(\phi^{+}\left(c^{(i)}\right)+\phi^{-}\left(c^{(i)}\right)\right)}\leq\varepsilon,\]
_where \(\gamma_{S_{1}}:=\sum\limits_{p\in S_{1}}w_{1}(p)e^{-\left\|p-c^{(i)}\right\|_{2}^{2}}\) and \(\gamma_{S_{2}}:=\sum\limits_{q\in S_{2}}w_{2}(q)e^{-\left\|q-c^{(i)}\right\|_{2}^{ 2}}\)._
**Bounding the \(\beta\) term in equation 2.** By the Cauchy-Schwarz inequality, it holds that
\[\sum\limits_{p\in P}\left(\sum\limits_{i=1}^{L}\alpha_{i}e^{- \left\|p-c^{(i)}\right\|_{2}^{2}}\right)^{2} \leq L\sum\limits_{p\in P}\sum\limits_{i=1}^{L}\alpha_{i}^{2}e^{- 2\left\|p-c^{(i)}\right\|_{2}^{2}}\] \[=L\sum\limits_{i=1}^{L}\alpha_{i}^{2}\sum\limits_{p\in P}e^{-2 \left\|p-c^{(i)}\right\|_{2}^{2}},\]
where the equality holds by simple rearrangement.
Using Theorem 3.2, we can approximate this upper bound on \(\beta\) up to a multiplicative factor of \(L(1+\varepsilon)\). However, if \(\alpha_{i}\geq 0\) for every \(i\in[L]\), then we also have the lower bound \(\sum\limits_{i=1}^{L}\alpha_{i}^{2}\sum\limits_{p\in P}e^{-2\left\|p-c^{(i)}\right\|_{2}^{2}}\leq\sum\limits_{p\in P}\left(\sum\nolimits_{i=1}^{L}\alpha_{i}e^{-\left\|p-c^{(i)}\right\|_{2}^{2}}\right)^{2}\).
Since we can generate a multiplicative coreset for the left-hand side of the above inequality, then we obtain also a multiplicative coreset in a sense for \(\beta\) as well.
## 5 Advantages of our Methods
**One coreset for all networks.** Our coreset is model-independent, i.e., we aim at improving the running time of multiple neural networks. Contrary to other methods that need to compute the coreset after each gradient update to support their theoretical proofs, our method gives the advantage of computing the sensitivity (or the coreset) only once, for all of the required networks. This is because our coreset can approximate any function that can be defined (approximated) using a _RBFNN_ model.
**Efficient coreset per epoch.** Practically, competing data selection methods are not applied before each epoch, but every \(R\) epochs. This is because they require a lot of time to compute a new coreset, as they compute the gradients of the network with respect to each input training point. In contrast, our coreset can be computed before each epoch in negligible time (\(\sim 0\) seconds): we compute the sensitivity of each point (image) in the data once at the beginning, and then, whenever we need to create a new coreset, we simply sample from the input data according to the sensitivity distribution.
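The following sketch illustrates this point: once a sensitivity vector is available (here a random placeholder rather than the output of Algorithm 1), drawing a fresh weighted coreset per epoch is a single multinomial sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
s = rng.gamma(2.0, size=n)      # placeholder sensitivities (precomputed once)
prob = s / s.sum()

def fresh_coreset(m):
    # a new coreset is one multinomial draw plus a reweighting -- negligible
    # cost compared to recomputing per-sample gradients
    idx = rng.choice(n, size=m, p=prob)
    return idx, 1.0 / (prob[idx] * m)

for epoch in range(3):
    idx, weights = fresh_coreset(m=1024)
    # ... train one epoch on the weighted subset (idx, weights) ...
```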
## 6 Experimental Results
In this section, we practically demonstrate the efficiency and stability of our _RBFNN_ coreset approach for training deep neural networks via data subset selection. We mainly study the trade-off between accuracy and efficiency.
**Competing methods.** We compare our method against several variants of the algorithms proposed in (Killamsetty et al., 2021a) (denoted by _GRAD-MATCH_), in (Mirzasoleiman et al., 2020) (denoted by _CRAIG_), and in (Killamsetty et al., 2021b) (denoted by _GLISTER_). For each of these methods, we report results for \(4\) variants: (i) the "vanilla" method, denoted by its original name, (ii) applying a warm start, i.e., training on the whole data for \(50\%\) of the training time before training on the coreset for the remaining \(50\%\), denoted by the suffix -WARM, (iii) a more efficient version of each competing method, denoted by the suffix PB (more details are given in (Killamsetty et al., 2021a)), and finally (iv) a combination of both (ii) and (iii). In other words, the competing methods are GRAD-MATCH, GRAD-MATCHPB, GRAD-MATCH-WARM, GRAD-MATCHPB-WARM, CRAIG, CRAIGPB, CRAIG-WARM, CRAIGPB-WARM, and GLISTER-WARM. We also compare against randomly selecting points (denoted by RANDOM).
**Datasets and model architecture.** We performed our experiments for training CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) on ResNet18 (He et al., 2016), MNIST (LeCun et al., 1998) on LeNet, and ImageNet-2012 (Deng et al., 2009) on Resnet18 (He et al., 2016).
**The setting.** We adopted the same setting as (Killamsetty et al., 2021a): we used the SGD optimizer for training with an initial learning rate of \(0.01\), a momentum of \(0.9\), and a weight decay of \(5e-4\). We decay the learning rate using cosine annealing (Loshchilov and Hutter, 2016) at each epoch. For MNIST, we trained the LeNet model for \(200\) epochs. For CIFAR10 and CIFAR100, we trained the ResNet18 for \(300\) epochs - all on batches of size \(20\) for the subset selection training versions. We train the data selection methods and the entire-data training with the same number of epochs; the main difference is the number of samples used for training a single epoch. All experiments were executed on V100 GPUs. The reported test accuracy is averaged across five runs.
**Subset sizes and the \(R\) parameter.** For MNIST, we use sizes of \(\{1\%,3\%,5\%,10\%\}\), while for CIFAR10 and CIFAR100 we use \(\{5\%,10\%,20\%,30\%\}\), and for ImageNet we use \(5\%\). Since the competing methods require a lot of time to compute the gradients, we set \(R=20\). We note that our coreset could be tested with \(R=1\) without adding run-time, since once the sensitivity vector is defined, computing a new coreset requires \(\sim 0\) seconds. However, we test it with \(R=20\) to show its robustness.
**Discussion.** Tables 1-4 report the results for CIFAR10 and CIFAR100. It is clear from Tables 1 and 2 that our method
achieves the best accuracy, with and without warm start, for \(5\%\), \(20\%\), and \(30\%\) subset selection on CIFAR10. For CIFAR100, our method drastically outperforms all of the methods that do not apply a warm start. When applying a
warm start, we still win in half of the cases. Note that we outperform all of the other methods in terms of accuracy vs. time. The same phenomenon is witnessed in the ImageNet experiment (Table 5), as our coreset achieves the highest accuracy. We refer the reader to the MNIST experiment (Table 6 in the appendix). We note that our sensitivity sampling vector is computed once during our experiments for each dataset. This vector can be used to sample coresets of different sizes, for different networks, at different epochs of training, in a time that is close to zero seconds. In all tables, the best results are highlighted in bold.
**Function Approximations.** We now compare our coreset to uniform sampling for function approximation. Specifically, we generate around \(10,000\) points in \(3D\), setting the third entry of each point to be a function of the first \(2\) entries, \(f(x)=e^{-\left\|x\right\|_{2}^{2}}+0.2\cos\left(4\left\|x\right\|_{2}\right)\). We train an RBFNN to reproduce the function using only \(400\) points; the result based on our coreset (Figure 2(c)) is visually closer to the true function (Figure 2(a)) than the one based on uniform sampling (Figure 2(b)). Furthermore, for the \(RBF\) fitting task on the CovType dataset (Dua et al., 2017), our coreset is better than uniform sampling by a multiplicative factor of up to \(1.5\) (Figure 2(d)).
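For reference, the synthetic dataset for this experiment can be regenerated as follows (the sampling range is our assumption, as the paper does not state it):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(10_000, 2))
r = np.linalg.norm(X, axis=1)
z = np.exp(-r ** 2) + 0.2 * np.cos(4 * r)   # f(x) from the experiment
data = np.column_stack([X, z])              # ~10,000 points in 3D

# 400-point uniform baseline; the paper's variant instead draws the
# 400 points via the sensitivities of Algorithm 1
subset = data[rng.choice(len(data), size=400, replace=False)]
```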
## 7 Conclusion and Future Work
In this paper, we have introduced a coreset that provably approximates any function that can be represented by RBFNN architectures. Our coreset construction can be used to approximate the gradients of any deep neural networks (_DNNs_), leading towards provable subset selection for learning/training _DNNs_. We also empirically demonstrate the value of our work by showing significantly better performances over various datasets and model architectures. As the first work on using coresets for data subset selection with respect to _RBFNNs_, our results lead to a number of interesting possible future directions. It is natural to ask whether there exist smaller coreset constructions that also provably give the same worst-case approximation guarantees. Furthermore, RKHS methods (Claici and Solomon, 2018; Claici et al., 2018) may be investigated in this context either by boosting their implementation or by merging ideas with this work. In addition, can our results be extended to more general classes of loss functions? Finally, we remark that although our empirical results significantly beat state-of-the-art, they nevertheless only serve as a proof-of-concept and have not been fully optimized with additional heuristics.
| Method | Top-1 test accuracy of the model (%) | Model training time (hrs) |
|---|---|---|
| Budget (%) | 5% | 5% |
| Full (skyline for test accuracy) | 70.36 | 276.28 |
| Random (skyline for training time) | 21.124 | 14.12 |
| CraigPB | 44.28 | 22.24 |
| GradMatch | 47.24 | 18.24 |
| GradMatchPB | 45.15 | 16.12 |
| RBFNN Coreset (OURS) | **47.26** | **15.24** |

Table 5: Data Selection Results for ImageNet2012 using ResNet-18
Figure 2: **(a) is the function we wish to approximate by training an RBFNN on: (b) a uniformly sampled subset, and (c) our coreset.** |
2305.19190 | Inverse Approximation Theory for Nonlinear Recurrent Neural Networks | We prove an inverse approximation theorem for the approximation of nonlinear
sequence-to-sequence relationships using recurrent neural networks (RNNs). This
is a so-called Bernstein-type result in approximation theory, which deduces
properties of a target function under the assumption that it can be effectively
approximated by a hypothesis space. In particular, we show that nonlinear
sequence relationships that can be stably approximated by nonlinear RNNs must
have an exponential decaying memory structure - a notion that can be made
precise. This extends the previously identified curse of memory in linear RNNs
into the general nonlinear setting, and quantifies the essential limitations of
the RNN architecture for learning sequential relationships with long-term
memory. Based on the analysis, we propose a principled reparameterization
method to overcome the limitations. Our theoretical results are confirmed by
numerical experiments. The code has been released in
https://github.com/radarFudan/Curse-of-memory | Shida Wang, Zhong Li, Qianxiao Li | 2023-05-30T16:34:28Z | http://arxiv.org/abs/2305.19190v4 | # Inverse Approximation Theory for Nonlinear Recurrent Neural Networks
###### Abstract
We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using RNNs. This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approximated by a hypothesis space. In particular, we show that nonlinear sequence relationships, viewed as functional sequences, that can be stably approximated by RNNs with hardtanh/tanh activations must have an exponential decaying memory structure - a notion that can be made precise. This extends the previously identified curse of memory in linear RNNs into the general nonlinear setting, and quantifies the essential limitations of the RNN architecture for learning sequential relationships with long-term memory. Based on the analysis, we propose a principled reparameterization method to overcome the limitations. Our theoretical results are confirmed by numerical experiments.
## 1 Introduction
Recurrent neural networks (RNNs) [1] are one of the most basic machine learning models to learn the relationship between sequential or temporal data. They have wide applications ranging from time series prediction [2], text generation [3] and speech recognition [4] to sentiment classification [5]. However, when there are long-term dependencies in the data, empirical results [6] show that RNNs may encounter difficulties in learning. In this paper, we investigate this problem from the viewpoint of approximation theory.
From the approximation perspective, there are various types of theorems characterizing the connections between target relationships and model architectures
for learning them. Universal approximation [7, p. 32] and Jackson-type theorems [7, p. 187] provide basic guarantees of approximation and error estimates for sufficiently regular target functions by a particular hypothesis space. A number of such results are available for sequence modelling, including the RNN [8, 9]. On the other hand, a relatively under-investigated domain in the machine learning literature is that of Bernstein-type theorems [10, 9], which are also known as inverse approximation theorems. These results aim to characterize the regularity of target relationships, assuming that they can be approximated efficiently with a hypothesis space. These regularity notions intimately depend on, and thus give important insights into, the structure of the hypothesis space under study.
This paper establishes a Bernstein-type result for the approximation of nonlinear functionals via RNNs. Previous works [8, 9] indicate that linear functionals that can be universally approximated by linear RNNs must have exponential decaying memory. This phenomenon was coined the _curse of memory_ for linear RNNs. A natural question is whether the nonlinear recurrent activation used in practical RNNs changes the situation. This is important since a bigger hypothesis space may lift restrictions on the target functions. Moreover, it is known that nonlinear activation is crucial for feed-forward networks to achieve universality [11]. Thus, it is worthwhile to investigate whether the linear Bernstein result generalizes to the case of approximating nonlinear sequence relationships with nonlinear RNNs. In this paper, we prove that nonlinear RNNs still suffer from a curse of memory in approximation. In particular, we show that nonlinear functionals that can be stably approximated by RNNs with hardtanh/tanh activations must have an exponential decaying memory structure. The notions of stable approximation and memory structure can be concretely defined. Our results make precise the empirical observation that the RNN architecture has inherent limitations when modelling long-time dependencies.
In summary, our main contributions are:
1. We extend the concept of the memory function from the linear setting [8, 9] to the nonlinear setting. This memory function can be numerically quantified in sequence modelling applications.
2. We introduce a notion of stable approximation, which ensures that an approximant can plausibly be found by a gradient-based optimization algorithm.
3. We prove, to the best of our knowledge, the first Bernstein-type approximation theorem for nonlinear functional sequences through nonlinear RNNs. Our results characterize the essential limitations of nonlinear RNNs in learning long-term relationships. Our analysis also suggests that appropriate parameterization can alleviate the 'curse of memory' phenomenon in learning targets with long memory. The theoretical result is corroborated with numerical experiments.
Notation. For a sequence of \(d\)-dimensional vectors indexed by \(\mathbb{R}\), \(\mathbf{x}=\{x_{t}\in\mathbb{R}^{d}:t\in\mathbb{R}\}\), we denote the supremum norm by \(\|\mathbf{x}\|_{\infty}:=\sup_{t\in\mathbb{R}}|x_{t}|_{\infty}\). Here
\(|x|_{\infty}:=\max_{i}|x_{i}|,|x|_{2}:=\sqrt{\sum_{i}x_{i}^{2}},|x|_{1}:=\sum_{i}|x_{i}|\) are the usual max (\(L_{\infty}\)) norm, \(L_{2}\) norm and \(L_{1}\) norm. Notice that boldface represents sequences while normal letters denote scalars, vectors or functions. Throughout this paper we use \(\|\cdot\|\) to denote norms over sequences of vectors, or function(al)s, while \(|\cdot|\) (with subscripts) is used to represent the norm of a number, vector or weights tuple. The hat notation in this paper refers to the hypothesis space (functionals) while the plain symbol refers to the target space (functionals).
## 2 Related work
Various results have been established in RNN approximation theory; see Sontag [12], Hanson et al. [13] and references therein. For unbounded input index sets, \(L_{p}\) approximation is established by Gonon and Ortega [14]. In Gonon and Ortega [15], a universal approximation theorem is established for functionals with fading memory in the discrete-time setting. In Li et al. [8], a universal approximation theorem and a Jackson-type approximation theorem characterize the density and rate of approximation of linear functionals by linear RNNs. Most existing results are forward (Jackson-type) approximation theorems, which upper bound the optimal approximation error. Of most relevance is the Bernstein-type result proved in Li et al. [9], where it has been proved that linear functional sequences that can be efficiently approximated by linear RNNs must have an exponential decaying memory. However, the main limitation of the above result is the linear setting for both models and targets. For general sequence modelling with other architectures (see Jiang et al. [16]), forward (Jackson-type) approximation results have been obtained in a number of settings, including dilated convolutional neural networks [17] and encoder-decoder models [18].
The notion of approximation stability is one of the central concepts we exploit in this paper. We note that in classical approximation theory, stable approximation has numerous definitions depending on the setting [19]. For example, in nonlinear approximation [20], a stably approximating sequence \(\{H_{m}\}\) of \(H\) is one that satisfies \(|H_{m}|\leq C|H|\) for some absolute constant \(C>0\) and all \(m\). This approach is taken to show the non-existence of a stable procedure for approximating functions from equally-spaced samples with exponential convergence on analytic functions [21]. This notion of stability concerns the conditioning of the approximation problem. In contrast, our notion of stability introduced in Section 4.2 is more similar to a uniform continuity requirement. Pertaining to sequence modelling, a related but different notion of dynamic stability [22] was used to prove Jackson-type results for universal simulation of dynamical systems. There, the stability is akin to requiring the uniform (in inputs) continuity of the flow-map of the RNN hidden dynamics. This is again a different notion of stability.
## 3 Problem formulation and prior results on linear RNNs
In this section, we introduce the formulation of sequence modelling as a functional sequence approximation problem. We pay particular attention to distinguishing two types of results: forward (Jackson-type) and inverse (Bernstein-type) approximation theorems. In approximation theory for machine learning, most existing results are forward theorems. However, inverse approximation theorems are of significant importance in revealing the fundamental limitations of an approximation approach. The present paper focuses on establishing such results in the general, non-linear setting. We conclude this section with a review of known Bernstein-type estimates, which are currently restricted to the linear case. In so doing, we highlight the definition of memory in the linear case, which motivates our general definition of memory for nonlinear functional sequences in Section 4.1. The relationship between memory and approximation is central to our results.
### The approximation problem for sequence modelling
The goal of sequential modeling is to learn a relationship between an input sequence \(\mathbf{x}=\{x_{t}\}\) and a corresponding output sequence \(\mathbf{y}=\{y_{t}\}\). For ease of analysis, we adopt the continuous-time setting in [8], where \(t\in\mathbb{R}\). This is also a natural setting for irregularly sampled time series [23]. The input sequences belong to the space of bounded continuous inputs \(\mathcal{X}=C_{0}(\mathbb{R}^{d})\). We assume the input and output sequences are related by a sequence of functionals \(\mathbf{H}=\{H_{t}:\mathcal{X}\mapsto\mathbb{R};t\in\mathbb{R}\}\) via \(y_{t}=H_{t}(\mathbf{x}),t\in\mathbb{R}\). The sequential approximation problem can be formulated as the approximation of the target functional sequence \(\mathbf{H}\) by a functional sequence \(\widehat{\mathbf{H}}\) from a model hypothesis space such as RNNs.
Forward and inverse approximation theorems. Given a hypothesis space \(\widehat{\mathcal{H}}^{(m)}\) of complexity \(m\geq 1\) (e.g. width-\(m\) RNNs), forward approximation theorems, also called Jackson-type theorems, bound the optimal approximation error \(\inf_{\widehat{\mathbf{H}}\in\widehat{\mathcal{H}}^{(m)}}\|\mathbf{H}-\widehat{\mathbf{H}}\|\leq C(\mathbf{H},m)\).
Inverse approximation (Bernstein-type) results are "converse" statements to Jackson-type results. From the starting assumption of the existence of an efficient approximation for a given target \(\mathbf{H}\), Bernstein-type results deduce the approximation spaces that \(\mathbf{H}\) ought to belong to, i.e. they identify a complexity or regularity measure \(C(\cdot)\) and show that \(C(\mathbf{H})\) is necessarily finite. Taking trigonometric polynomial approximation as an example, Bernstein [7, p. 187] proved that if \(\inf_{\widehat{H}\in\widehat{\mathcal{H}}^{(m)}}\|H-\widehat{H}\|\leq\frac{c}{m^{\alpha+\delta}}\) for all \(m\geq 1\) and some \(\delta>0\), \(c>0\), then \(C(H)=|H^{(\alpha)}|<\infty\), i.e. \(H\) must be \(\alpha\)-times differentiable with \(\delta\)-Hölder continuous derivatives.
Bernstein-type inverse approximation results are important in characterizing the approximation capabilities of hypothesis spaces. In the trigonometric polynomial example, the theorem says that _only_ smooth functions can be efficiently approximated, thereby placing a concrete limitation on the approximation capabilities of these models. Our goal in this paper is to deduce analogues of this result, but for the approximation of general nonlinear functional sequences by RNNs. Unlike the classical case where the notion of regularity enters in the form of smoothness, here we shall investigate the concept of memory as a quantifier of regularity - a notion that we will make precise subsequently.
### The RNN architecture and prior results
The continuous-time RNN architecture parameterizes functional sequences by the introduction of a hidden dynamical system
\[\begin{split}\frac{dh_{t}}{dt}&=\sigma(Wh_{t}+Ux_{t }+b),\\ \hat{y}_{t}&=c^{\top}h_{t}.\end{split} \tag{1}\]
Here, \(\hat{y}_{t}\in\mathbb{R}\) is the predicted output sequence value, and \(h_{t}\in\mathbb{R}^{m}\) denotes the hidden state. As is common practice, we set the boundary condition \(h_{-\infty}=0\).1 The hyper-parameter \(m\) is also known as the hidden dimension, or width, of the recurrent neural network. For different hidden dimensions \(m\), the RNN is parameterized by trainable weights \((W,U,b,c)\), where \(W\in\mathbb{R}^{m\times m}\) is the recurrent kernel, \(U\in\mathbb{R}^{m\times d}\) is the input kernel, \(b\in\mathbb{R}^{m}\) is the bias and \(c\in\mathbb{R}^{m}\) is the readout. The complexity of the RNN hypothesis space is characterized by the hidden dimension \(m\). The nonlinearity arises from the activation function \(\sigma(\cdot)\), which is a scalar function applied element-wise, such as _tanh_, _hardtanh_, _sigmoid_ or _ReLU_. In this paper, we shall focus on the hardtanh and tanh activations as they are the most commonly used activations in RNNs. The hypothesis space of RNNs is thus the following functional sequence space
Footnote 1: This is consistent with practical implementations such as TensorFlow and PyTorch, where the initial value of hidden state is set to be zero by default.
\[\begin{split}\widehat{\mathcal{H}}_{\text{RNN}}^{(m)}&=\{\mathbf{x}\mapsto\mathbf{\hat{y}}\text{ via Equation }(1):\\ &\quad\quad W\in\mathbb{R}^{m\times m},U\in\mathbb{R}^{m\times d},b\in\mathbb{R}^{m},c\in\mathbb{R}^{m}\}.\end{split} \tag{2}\]
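To make the hypothesis space concrete, a minimal forward-Euler simulation of Equation (1) with tanh activation is sketched below; the step size and the random weights are illustrative choices of ours.

```python
import numpy as np

def rnn_output(x_seq, W, U, b, c, dt=0.01):
    # forward-Euler simulation of equation (1): dh/dt = sigma(Wh + Ux + b)
    h = np.zeros(W.shape[0])          # boundary condition h_{-inf} = 0
    y = np.empty(len(x_seq))
    for k, x in enumerate(x_seq):
        h = h + dt * np.tanh(W @ h + U @ x + b)
        y[k] = c @ h                  # linear readout y_t = c^T h_t
    return y

rng = np.random.default_rng(0)
m, d, T = 8, 2, 500
W, U = rng.normal(scale=0.3, size=(m, m)), rng.normal(size=(m, d))
b, c = np.zeros(m), rng.normal(size=m)
y_hat = rnn_output(rng.normal(size=(T, d)), W, U, b, c)
```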
Before presenting our main results, we review known Jackson and Bernstein-type results established for linear RNNs, corresponding to setting \(\sigma(z)=z\) and \(b=0\) in (1). We shall pay attention to the definition of memory for a target functional sequence, and how it relates to approximation properties under the RNN hypothesis space.
We begin with some definitions on (sequences of) functionals as introduced in [8].
**Definition 3.1**.: Let \(\mathbf{H}=\{H_{t}:\mathcal{X}\mapsto\mathbb{R};t\in\mathbb{R}\}\) be a sequence of functionals.
1. (**Linear**) \(H_{t}\) is linear if for any \(\lambda,\lambda^{\prime}\in\mathbb{R}\) and \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\), \(H_{t}(\lambda\mathbf{x}+\lambda^{\prime}\mathbf{x}^{\prime})=\lambda H_{t}( \mathbf{x})+\lambda^{\prime}H_{t}(\mathbf{x}^{\prime})\).
2. (**Continuous**) \(H_{t}\) is continuous if for any \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\), \(\lim_{\mathbf{x}^{\prime}\rightarrow\mathbf{x}}|H_{t}(\mathbf{x}^{\prime})-H_{t}(\mathbf{x})|=0\).
3. (**Bounded**) \(H_{t}\) is bounded if \(\sup_{\{\mathbf{x}\in\mathcal{X},\|\mathbf{x}\|_{\mathcal{X}}\leq 1\}}|H_{t}( \mathbf{x})|<\infty\).
4. (**Time-homogeneous**) \(\mathbf{H}=\{H_{t}:t\in\mathbb{R}\}\) is time-homogeneous (or shift-equivariant) if the input-output relationship commutes with time shift: let \([S_{\tau}(\mathbf{x})]_{t}=x_{t-\tau}\) be the shift operator; then \(\mathbf{H}(S_{\tau}\mathbf{x})=S_{\tau}\mathbf{H}(\mathbf{x})\).
5. (**Causal**) \(H_{t}\) is causal if it does not depend on future values of the input. That is, if \(\mathbf{x},\mathbf{x}^{\prime}\) satisfy \(x_{t}=x^{\prime}_{t}\) for any \(t\leq t_{0}\), then \(H_{t}(\mathbf{x})=H_{t}(\mathbf{x}^{\prime})\) for any \(t\leq t_{0}\).
6. (**Regular**) \(H_{t}\) is regular if for any sequence \(\{\mathbf{x}^{(n)}:n\in\mathbb{N}\}\) such that \(x_{s}^{(n)}\to 0\) for almost every \(s\in\mathbb{R}\), we have \(\lim_{n\rightarrow\infty}H_{t}(\mathbf{x}^{(n)})=0\).
The works in Li et al. [8, 9] study the approximation of functional sequences satisfying Definition 3.1 by linear RNNs. A key idea is showing that any such functional sequence \(\mathbf{H}\) admits a Riesz representation (see Appendix A.1 and Appendix A.2)
\[H_{t}(\mathbf{x})=\int_{0}^{\infty}\rho(s)^{\top}x_{t-s}ds,\qquad t\in\mathbb{ R}. \tag{3}\]
In this sense, \(\rho\) completely determines \(\mathbf{H}\), and its approximation using linear RNNs can be reduced to the study of the approximation of \(\rho\in L^{1}([0,\infty),\mathbb{R}^{d})\) by exponential sums of the form \((c^{\top}e^{Ws}U)^{\top}\). An important observation here is that \(\rho\) captures the memory pattern of the target linear functional sequence: if \(\rho\) decays rapidly, then the target has short memory, and vice versa. The forward approximation theorem simply says that a target with exponentially decaying memory admits an efficient approximation using RNNs.
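For intuition, the linear-RNN memory kernel \(\hat{\rho}(s)=(c^{\top}e^{Ws}U)^{\top}\) can be evaluated directly; in the sketch below we shift a random \(W\) to be Hurwitz so that the kernel decays exponentially (an illustrative construction of ours, not the paper's).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
m, d = 6, 1
W = rng.normal(size=(m, m))
W -= (np.linalg.eigvals(W).real.max() + 0.5) * np.eye(m)  # make W Hurwitz
U, c = rng.normal(size=(m, d)), rng.normal(size=m)

def rho_hat(s):
    # memory kernel of the linear RNN: (c^T e^{Ws} U)^T
    return c @ expm(W * s) @ U

ts = np.linspace(0.0, 10.0, 50)
kernel = np.array([np.abs(rho_hat(t)) for t in ts])
# |rho_hat(t)| decays exponentially because Re(eig(W)) <= -0.5
```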
A natural question is whether linear RNNs can efficiently approximate targets without an exponential decaying memory. The Bernstein-type result in [9] answers this question. By assuming that a target functional sequence \(\mathbf{H}\) can be approximated uniformly by stable RNNs, then the memory of the target functional sequence must satisfy
\[e^{\beta_{0}t}|\rho(t)|_{2}=o(1)\text{ as }t\rightarrow\infty \tag{4}\]
for some \(\beta_{0}>0\). Just like the classical Bernstein theorem for trigonometric polynomials, this result effectively constrains the approximation space of linear RNNs to linear functional sequences with exponentially decaying memory. Together with the Jackson-type estimate, one concludes that efficient approximation by linear RNN is possible if and only if the target linear functional sequence has exponential decaying memory in the sense defined above. This was coined the "curse of memory" [8, 9] and reveals fundamental limitations of the RNN architecture to capture long-term memory structures.
The focus of this paper is to investigate whether the addition of nonlinear activation changes this result. In other words, would the curse of memory
hold for nonlinear RNNs in the approximation of suitably general nonlinear functionals? This is a meaningful question, since Bernstein-type results essentially constrain approximation spaces, and so a larger hypothesis space may relax such constraints. A significant challenge in the nonlinear setting is the lack of a Riesz representation result, and thus one needs to carefully define a notion of memory that is consistent with \(\rho\) in the linear case, but can still be used in the nonlinear setting to prove inverse approximation theorems. Moreover, we will need to introduce a general notion of approximation stability, which together with the generalized memory definition allows us to derive a Bernstein-type result that holds beyond the linear case.
## 4 Main results
In this section, we establish a Bernstein-type approximation result for nonlinear functional sequences using nonlinear RNNs.
We first give a definition of the memory function for nonlinear functionals. It is compatible with the memory definition for linear functionals, and it can be queried and verified in applications. Next, we propose the framework of stable approximation. It is a mild requirement from the perspective of approximation, but a desirable one from the view of optimization. Moreover, we show that any linear functional with an exponential decaying memory can be stably approximated.
Based on the memory function definition and stable approximation framework, we prove a Bernstein-type theorem. The theorem shows that any nonlinear functionals that can be stably approximated by general hardtanh/tanh RNNs must have an exponentially decaying memory, which confirms that the curse-of-memory phenomenon is not limited to the linear case.
Numerical verifications are included to demonstrate the result.
### Memory function for nonlinear functionals
Recall that the memory for a linear functional sequence is defined by its Riesz representation in Equation (3). While there are no known general analogues of Riesz representation for nonlinear functionals, we may consider other means to extract an effective memory function from \(\mathbf{H}\).
Let \(x\in\mathbb{R}^{d}\) and consider the following Heaviside input sequence \(\mathbf{u}_{t}^{x}=x\cdot\mathbf{1}_{[0,\infty)}(t)=\begin{cases}x&t\geq 0,\\ 0&t<0.\end{cases}\) In the linear case, notice that according to Equation (3)
\[\sup_{|x|_{2}\leq B}\frac{1}{B}\left|\frac{d}{dt}H_{t}(\mathbf{u}^{x})\right| =\sup_{|x|_{2}\leq B}\frac{1}{B}|x^{\top}\rho(t)|=|\rho(t)|_{2}. \tag{5}\]
Hence, conditions on \(|\rho(t)|_{2}\) may be replaced by conditions on the left hand side, which is well-defined also for nonlinear functionals. This motivates the following definition of memory function for nonlinear functional sequences.
**Definition 4.1** (Memory function of nonlinear functional sequences).: For continuous, causal, regular and time-homogeneous functional sequences \(\mathbf{H}=\{H_{t}(\mathbf{x}):t\in\mathbb{R}\}\) on \(\mathcal{X}\), define the following function as the _memory function_ of \(\mathbf{H}\) over bounded Heaviside input \(\mathbf{x}\):
\[\mathcal{M}_{B}(\mathbf{H})(t):=\sup_{|x|_{2}\leq B}\frac{1}{B}\left|\frac{d}{ dt}H_{t}(\mathbf{u}^{x})\right|,\quad B>0. \tag{6}\]
If an oracle for the target functional is available, the memory function can be queried directly; we call the result the queried memory. Without a target functional oracle, we may approximate the target functional with the learned model and still evaluate the memory function. If the queried memory decays for all \(B>0\), we say the corresponding nonlinear functional sequence has a decaying memory. We demonstrate in Appendix B that memory querying reveals the memory pattern of LSTM and bidirectional LSTM sequence-to-sequence models in sentiment analysis on IMDB movie reviews.
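A Monte-Carlo estimate of the memory function in Definition 4.1 needs only oracle access to \(H_{t}(\mathbf{u}^{x})\); the sketch below uses finite differences in \(t\) and random directions \(x\) on the sphere \(|x|_{2}=B\) (the toy functional `H` is our own example, not a model from the paper):

```python
import numpy as np

def memory_function(model, ts, B=1.0, n_dirs=16, eps=1e-2, d=2):
    """Monte-Carlo estimate of M_B(H)(t) from Definition 4.1 via finite
    differences over Heaviside inputs u^x with |x|_2 = B."""
    rng = np.random.default_rng(0)
    xs = rng.normal(size=(n_dirs, d))
    xs *= B / np.linalg.norm(xs, axis=1, keepdims=True)
    mem = np.zeros(len(ts))
    for x in xs:
        # model(x, t) is assumed to return H_t(u^x) for the Heaviside input
        vals = np.array([(model(x, t + eps) - model(x, t)) / eps for t in ts])
        mem = np.maximum(mem, np.abs(vals) / B)
    return mem

# toy functional with exponentially decaying memory (illustrative)
H = lambda x, t: np.tanh(x.sum()) * (1.0 - np.exp(-t))
print(memory_function(H, ts=np.linspace(0.0, 5.0, 6)))
```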
**Definition 4.2** (Decaying memory).: For continuous, causal, regular and time-homogeneous functional sequences \(\mathbf{H}=\{H_{t}(\mathbf{x}):t\in\mathbb{R}\}\) on \(\mathcal{X}\), we say it has a _decaying memory_ if:
\[\lim_{t\to\infty}\mathcal{M}_{B}(\mathbf{H})(t)=0,\quad\forall B>0. \tag{7}\]
In particular, we say a functional sequence \(\mathbf{H}\) has an _exponential decaying memory_ if for some \(\beta>0\),
\[\lim_{t\to\infty}e^{\beta t}\mathcal{M}_{B}(\mathbf{H})(t)=0,\quad\forall B>0. \tag{8}\]
We say a family of functional sequences \(\{\mathbf{H}_{m}\}\) has an _uniformly decaying memory_ if the memory functions for these functional sequences are uniformly converging to \(0\):
\[\lim_{t\to\infty}\sup_{m}\mathcal{M}_{B}(\mathbf{H}_{m})(t)=0,\quad\forall B>0. \tag{9}\]
_Remark 4.3_.: The requirement of decaying memory on time-homogeneous functionals is mild since it is satisfied if \(\frac{dH_{t}}{dt}\) is continuous at Heaviside input, under the topology of point-wise convergence. Another notion of fading memory is discussed in the Appendix A.3.
### Stable approximation
We now introduce the stable approximation framework. Let us write the hypothesis space \(\widehat{\mathcal{H}}^{(m)}\) as a parametric space
\[\widehat{\mathcal{H}}^{(m)}=\{\widehat{\mathbf{H}}(\cdot;\theta):\theta\in \Theta_{m}\} \tag{10}\]
where for each \(m\), \(\Theta_{m}\) is a subset of a Euclidean space with dimension depending on \(m\), representing the parameter space defining the hypothesis and \(\widehat{\mathbf{H}}\) is a
parametric model. For example, in the case of RNNs, the parameter \(\theta\) is \((W,U,b,c)\in\Theta_{m}:=\{\mathbb{R}^{m\times m}\times\mathbb{R}^{m\times d} \times\mathbb{R}^{m}\times\mathbb{R}^{m}\}\) and \(m\) is the hidden dimension of the RNN.
Let us consider a collection of functional sequences \(\{\widehat{\mathbf{H}}_{m}=\widehat{\mathbf{H}}(\cdot;\theta_{m}):m\geq 1\}\) serving to approximate a target functional sequence \(\mathbf{H}\). Stable approximation requires that, if one were to perturb each parameter \(\theta_{m}\) by a small amount, the perturbed approximation error should remain continuous. For gradient-based optimization, this condition is necessary for one to find such an approximant sequence, since gradients can only be computed reliably when small parameter perturbations keep the error continuous. We now define this notion of stability precisely.
**Definition 4.4**.: For target \(\mathbf{H}\) and parameterized model \(\widehat{\mathbf{H}}(\cdot,\theta_{m})\), we define the perturbed error for hidden dimension \(m\) to be:
\[E_{m}(\beta):=\sup_{\tilde{\theta}_{m}\in\{\theta:|\theta-\theta_{m}|\leq\beta \}}\|\mathbf{H}-\widehat{\mathbf{H}}(\cdot;\tilde{\theta}_{m})\| \tag{11}\]
Moreover, \(E(\beta):=\limsup_{m\to\infty}E_{m}(\beta)\) is the perturbed error for a sequence of parameterized models with increasing hidden dimensions.
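The perturbed error \(E_{m}(\beta)\) can be estimated from below by random search over the ball \(|\tilde{\theta}-\theta_{m}|\leq\beta\); a minimal sketch with a hypothetical `error_fn` oracle (both the oracle and the toy usage below are our own illustrations):

```python
import numpy as np

def perturbed_error(error_fn, theta, beta, n_samples=64,
                    rng=np.random.default_rng(0)):
    # Monte-Carlo lower bound on E_m(beta) of Definition 4.4: the sup of
    # ||H - H_hat(.; theta')|| over |theta' - theta| <= beta
    worst = error_fn(theta)                      # the beta = 0 term
    for _ in range(n_samples):
        delta = rng.normal(size=theta.shape)
        delta *= beta * rng.uniform() / (np.linalg.norm(delta) + 1e-12)
        worst = max(worst, error_fn(theta + delta))
    return worst

# toy oracle: a scalar model y = theta * x fitted to the target y = x
err = lambda th: abs(1.0 - th[0])
print([perturbed_error(err, np.array([1.0]), b) for b in (0.0, 0.1, 0.5)])
```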
**Definition 4.5** (Stable approximation via parameterized models).: Let \(\beta_{0}>0\). We say a target functional sequence \(\mathbf{H}\) admits a \(\beta_{0}\)-stable approximation under \(\{\widehat{\mathcal{H}}^{(m)}\}\), if there exists a sequence of parameterized approximants \(\widehat{\mathbf{H}}_{m}=\widehat{\mathbf{H}}(\cdot,\theta_{m})\), \(\theta_{m}\in\Theta_{m}\) which are continuous (in input \(\mathbf{x}\)) in point-wise topology such that
\[\lim_{m\to\infty}\|\mathbf{H}-\widehat{\mathbf{H}}_{m}\|\to 0, \tag{12}\]
and the perturbed error \(E(\beta)\) is continuous in \(\beta\) for \(0\leq\beta\leq\beta_{0}\).
_Remark 4.6_.: It can be seen that approximation only requires \(E(0)=0\). Therefore the stable approximation condition generalizes approximation by requiring the continuity of \(E\) around \(\beta=0\). If an approximation is unstable (\(E(0)=0\), \(\lim_{\beta\to 0}E(\beta)>0\)), it is difficult to find via gradient-based optimization.
Next, we demonstrate that the stable approximation condition is not too stringent, in the sense that any linear functional sequence with exponentially decaying memory (Equation (8)) admits a stable approximation. We show the numerical verification of this result in Figure 1. The approximation of a linear functional with exponentially decaying memory can be seen in the left panel at \(\beta=0\): increasing the hidden dimension \(m\) makes the estimated error decrease to \(0\) over \(\beta\in[0,\beta_{0}]\). Stability can be verified by observing that, for positive perturbation \(\beta\), increasing the hidden dimension does not increase the perturbed error \(E(\beta)\). In contrast, for a linear functional with polynomially decaying memory, the perturbed error \(E(\beta)\) is not continuous at \(\beta=0\).
### Bernstein-type approximation result for nonlinear RNNs
We now present the main result of this paper, which is a Bernstein-type approximation result for nonlinear functional sequences using nonlinear RNNs. The key question is whether the addition of nonlinearity alleviates the curse of memory limitation and allows an efficient approximation of functionals with slow decay. In the following, we show that the answer is negative, and a similar Bernstein-type approximation result holds for nonlinear functionals and RNNs with hardtanh/tanh activations.
**Definition 4.7**.: Next we consider the Sobolev norm defined over \(W^{1,\infty}(\mathbb{R})\):
\[\left\|\mathbf{H}-\widehat{\mathbf{H}}\right\|_{W^{1}}=\sup_{t}\left(\|H_{t}- \widehat{H}_{t}\|_{\infty}+\left\|\frac{dH_{t}}{dt}-\frac{d\widehat{H}_{t}}{ dt}\right\|_{\infty}\right). \tag{13}\]
**Theorem 4.8**.: _Assume \(\mathbf{H}\) is a sequence of continuous, causal, regular and time-homogeneous functionals on \(\mathcal{X}\) with decaying memory. Suppose there exists a sequence of **hardtanh** RNNs \(\widehat{\mathbf{H}}=\{\widehat{\mathbf{H}}(\cdot,\theta_{m})\}_{m=1}^{\infty}\) that \(\beta_{0}\)-stably approximates \(\mathbf{H}\) in the Sobolev norm defined in Equation (13)._
_Then, the memory function \(\mathcal{M}_{B}(\mathbf{H})(t)\) of the nonlinear functional decays exponentially for any \(\beta<\beta_{0}\):_
\[\lim_{t\rightarrow\infty}e^{\beta t}\mathcal{M}_{B}(\mathbf{H})(t)=0,\quad \forall B>0. \tag{14}\]
Figure 1: Perturbation errors for linear functionals with different decaying memory. The anticipated limiting curve \(E(\beta)\) is marked with a black dashed line. (a) For linear functional sequences with exponential decaying memory, there exists a perturbation radius \(\beta_{0}\) such that the perturbed error \(E(\beta)\) for \(0\leq\beta<\beta_{0}\) is continuous. (b) Approximation of linear functional sequences with polynomial decaying memory. As the hidden dimension \(m\) increases, the perturbation radius within which the error remains small decreases, suggesting that there may not exist a \(\beta_{0}\) achieving the stable approximation condition. The intersections of the lines shift left as the hidden dimension \(m\) increases. The anticipated limiting curve \(E(\beta)\) is not continuous for the polynomial decaying memory target.
The proofs are included in Appendix A.4. A similar result can be proved for \(\tanh\) activation under additional assumptions in Theorem A.6. We briefly summarize the idea of the proof. Since the approximations are stable, considering \(\tilde{v}_{t}=\frac{d\tilde{h}_{t}}{dt}\), we know \(\lim_{t\to\infty}\tilde{v}_{t}=0\) over constant inputs. Furthermore, by the Hartman-Grobman theorem we obtain a bound on the eigenvalues of the matrices \(W_{m}\) (under a sequence of perturbations \(\Delta W\)). Finally, since models with uniformly exponentially decaying memory can only approximate targets with exponential decay, the memory function of the nonlinear target functionals must also decay exponentially.
Interpretation of Theorem 4.8 and Theorem A.6. Our main result (Theorem 4.8 and Theorem A.6) extends the previous result of Li et al. [9]. Instead of smoothness (measured by the Sobolev norm), the RNN Bernstein-type result identifies exponentially decaying memory (\(e^{\beta t}\mathcal{M}_{B}(\mathbf{H})(t)\to 0\)) as the right regularity measure: if some target functional can be stably approximated by hardtanh/tanh RNNs, then that target must have exponentially decaying memory. Previously this was known only for the linear case; for the nonlinear case, even though the addition of nonlinearity substantially increases model complexity, it does not fix the essential memory limitation of RNNs.
From the numerical perspective, the theorem implies the following two statements, and we provide numerical verification for each of them. First, if the memory function of a target functional sequence decays slower than exponentially (e.g. \(\mathcal{M}_{B}(\mathbf{H})(t)=\frac{C}{(t+1)^{1.5}}\)), the optimization is difficult: the approximation in Figure 2 is achieved only after 1000 epochs, while targets with exponentially decaying memory are typically approximated within 10 epochs. When the approximation is achieved, it can be seen in Figure 2 that there is no perturbation stability for larger perturbation scales \(\beta\). Second, if a target functional sequence can be well approximated and the approximation's stability radius \(\beta_{0}\) can be shown to be positive, then the target functional sequence should have exponentially decaying memory. See Figure 3 for the approximations filtered by the perturbation stability requirement. (See Figure 5 in Appendix B for the validation of memory over a general sentiment classification task.)
### Suitable parametrization enables stable approximation
The key insight of Theorem 4.8 and Theorem A.6 can be summarized as follows: in order to approximate targets with non-exponentially decaying memory, the recurrent weights of RNNs must have eigenvalues whose real parts approach 0. If the eigenvalues are bounded away from zero, the target's memory must decay exponentially. However, if the largest real part of the eigenvalues approaches zero, stability under perturbation degrades and the perturbed error no longer has a finite limit. This is why approximation and stability cannot be achieved at the same time when the target's memory does not decay exponentially.
If we can reparameterize the recurrent weights so that their eigenvalues can both approach zero and remain stable (i.e., have non-positive real parts) under perturbations, then the architecture maintains stability while retaining the possibility of approximation. In general, we can achieve this by replacing the recurrent weight with a continuous matrix function
\[g:\mathbb{R}^{m\times m}\rightarrow\mathbb{R}^{m\times m,-},\quad g(M)=W. \tag{15}\]
This reparameterized RNN is always stable, as the real parts of the eigenvalues are always negative.
There are several methods to achieve this reparameterization: the exponential function \(g(M)=-e^{M}\) and the softplus function \(g(M)=-\log(1+e^{M})\) both map the eigenvalues of \(M\) into the stable, negative range (see Figure 4 for the stable
Figure 3: Approximation + stable \(\rightarrow\) Exponential decaying memory. We construct several randomly-initialized RNN models as teacher models with large hidden dimension (\(m=256\)). When approximating the teacher model with a series of student RNN models, we can numerically verify the approximation's stability (left panel). We can then apply a filtering: we select only those teacher models which can be approximated and whose approximations are stable (with perturbed error \(E_{m}(\beta)\) having a positive stability radius). We found that the only teachers that remain are those with exponentially decaying memory functions. An example of a corresponding memory function is shown in the right panel.
Figure 2: Polynomial decaying memory + approximation (achieved at 1000 epochs) \(\rightarrow\) no stability. Similar to the linear functional case, when approximating nonlinear functionals with polynomially decaying memory by tanh RNNs, the intersections of the curves shift left as the hidden dimension \(m\) increases.
approximation of a linear functional with polynomially decaying memory). The projection map \(W=g(M)=\arg\min_{W\leq 0}\|W-M\|_{2}\) is another option to keep the recurrent matrix stable. A slightly different exponential parameterization idea is empirically investigated in Lezcano-Casado and Martinez-Rubio [24].
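As a minimal sketch of this idea, the exponential reparameterization can be written in a few lines of PyTorch. To keep the spectral argument elementary we symmetrize the unconstrained parameter (so that \(e^{M}\) is symmetric positive definite); this restriction is an assumption of the sketch, not a requirement of the method, and the module and variable names are ours.

```python
import torch
import torch.nn as nn

class StableRecurrentWeight(nn.Module):
    """Recurrent weight reparameterized as W = g(M) = -exp(M), cf. Eq. (15).

    Symmetrizing the raw parameter makes exp(M) symmetric positive definite,
    so every eigenvalue of W = -exp(M) is strictly negative and the linear
    recurrence dh/dt = W h stays stable for any value of the raw parameter.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.raw = nn.Parameter(0.1 * torch.randn(hidden_dim, hidden_dim))

    def forward(self) -> torch.Tensor:
        M = 0.5 * (self.raw + self.raw.T)  # symmetric parameterization
        return -torch.matrix_exp(M)        # eigenvalues are -e^{lambda} < 0

W = StableRecurrentWeight(8)()
print(torch.linalg.eigvalsh(W).max())      # strictly negative by construction
```

The softplus and projection variants mentioned above could be substituted for the matrix exponential in `forward` in the same way.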
## 5 Conclusion
In summary, we derive the first known Bernstein-type result in the setting of sequence modelling using nonlinear RNNs. We show that, assuming a given target sequence relationship (mathematically understood as a nonlinear functional sequence) can be stably approximated by RNNs with hardtanh/tanh activations, the target functional sequence's memory structure must be exponentially decreasing. This places a priori limitations on the ability of RNNs to learn long-term memory in long sequence modelling problems, and makes precise the empirical observation that RNNs do not perform well for such problems. From the approximation viewpoint, our results show that this failure is not only due to learning algorithms (e.g. explosion of gradients), but also due to fundamental limitations of the RNN hypothesis space. At the same time, our analysis points to reparameterization as a principled methodology to remedy the limitations of RNNs when it comes to long-term memory, and we demonstrate its effectiveness by learning linear functionals with polynomially decaying memory.
## 6 Acknowledgements
This research is supported by the National Research Foundation, Singapore, under the NRF fellowship (project No. NRF-NRFF13-2021-0005).
Figure 4: Stable approximation of linear functionals with polynomially decaying memory via exponential reparameterization and softplus reparameterization. It can be seen that the limiting curve \(E(\beta)\) is continuous.
2302.13368 | Phase-Field DeepONet: Physics-informed deep operator neural network for
fast simulations of pattern formation governed by gradient flows of
free-energy functionals | Recent advances in scientific machine learning have shed light on the
modeling of pattern-forming systems. However, simulations of real patterns
still incur significant computational costs, which could be alleviated by
leveraging large image datasets. Physics-informed machine learning and operator
learning are two new emerging and promising concepts for this application.
Here, we propose "Phase-Field DeepONet", a physics-informed operator neural
network framework that predicts the dynamic responses of systems governed by
gradient flows of free-energy functionals. Examples used to validate the
feasibility and accuracy of the method include the Allen-Cahn and Cahn-Hilliard
equations, as special cases of reactive phase-field models for nonequilibrium
thermodynamics of chemical mixtures. This is achieved by incorporating the
minimizing movement scheme into the framework, which optimizes and controls how
the total free energy of a system evolves, instead of solving the governing
equations directly. The trained operator neural networks can work as explicit
time-steppers that take the current state as the input and output the next
state. This could potentially facilitate fast real-time predictions of
pattern-forming dynamical systems, such as phase-separating Li-ion batteries,
emulsions, colloidal displays, or biological patterns. | Wei Li, Martin Z. Bazant, Juner Zhu | 2023-02-26T17:52:28Z | http://arxiv.org/abs/2302.13368v1 | # Phase-Field DeepONet: Physics-informed Deep Operator Neural Network for Fast Simulations of Pattern Formation Governed by Gradient Flows of Free-Energy Functionals
###### Abstract
Recent advances in scientific machine learning have shed light on the modeling of pattern-forming systems. However, simulations of real patterns still incur significant computational costs, which could be alleviated by leveraging large image datasets. Physics-informed machine learning and operator learning are two new emerging and promising concepts for this application. Here, we propose "Phase-Field DeepONet", a physics-informed operator neural network framework that predicts the dynamic responses of systems governed by gradient flows of free-energy functionals. Examples used to validate the feasibility and accuracy of the method include the Allen-Cahn and Cahn-Hilliard equations, as special cases of reactive phase-field models for nonequilibrium thermodynamics of chemical mixtures. This is achieved by incorporating the minimizing movement scheme into the framework, which optimizes and controls how the total free energy of a system evolves, instead of solving the governing equations directly. The trained operator neural networks can work as explicit time-steppers that take the current state as the input and output the next state. This could potentially facilitate fast real-time predictions of pattern-forming dynamical systems, such as phase-separating Li-ion batteries, emulsions, colloidal displays, or biological patterns.
Physics-informed machine learning · deep operator neural network · phase-field method · minimizing movement scheme · Allen-Cahn and Cahn-Hilliard equations
## 1 Introduction
Machine learning (ML) has achieved enormous success in many disciplines, especially computer vision [1] and natural language processing [2]. However, when it comes to scientific problems, ML algorithms are often thought of as black boxes that can hardly be interpreted and lack rigorous justifications by physical laws. Recently, a concept, scientific machine learning (SciML), emerged and has quickly attracted wide attention [3]. Its aim is to make the traditional ML domain-aware, interpretable, and robust. In some sense, SciML is now revolutionizing the area of computational science and has been applied in various scientific disciplines, such as ML-enhanced multiphysics and multiscale modeling [4, 5, 6], ML-assisted fast online prediction and guided data acquisition [7], and optimal decisions for complex systems [8, 7]. Among various SciML architectures, physics-informed machine learning (PIML) and operator learning are the two representative examples. PIML introduces known physics into an ML algorithm and as a result, requires a smaller dataset or sometimes even works without experimental data [9]. PIML is a general concept and can be implemented in
various strategies [10]. Considering the three ingredients of a general ML algorithm, physical laws can be accordingly incorporated into 1) the training data, e.g., data generated from first-principle-based simulations, 2) the ML models, e.g., neural networks designed to reflect certain physical principles (symmetry, positive definiteness, hierarchical relations, etc.), and 3) the training strategies, e.g., loss functions formulated to include physical laws. Among these different approaches, one of the most extensively studied is the physics-informed neural network (PINN) [11, 12, 13], where the known physics, namely, the governing equations, initial conditions (ICs), and boundary conditions (BCs), are incorporated into the loss function in the form of residuals. It has also been demonstrated that PINNs are able to deal with both forward (solving equations) and inverse (identifying parameters) problems for fluid dynamics governed by Navier-Stokes equations [11]. Since proposed, PINNs have been widely adopted in various applications. Interested readers can refer to [9] for a comprehensive review. In addition to these successful applications, PINNs have also been extended to accommodate irregular [14] and multiple domains [15, 16], enforce hard constraints with modified NNs [17], incorporate adaptive activation [18], introduce gradient-enhanced [19] or energy-based terms into loss function [20, 21], etc.
Despite the great success, a prominent challenge of PINNs is to efficiently determine or optimize the hyperparameters in the loss function. Most existing studies applied a trial-and-error scheme which makes the training time-consuming. To tackle this challenge, Psaros et al. [22] recently proposed a meta-learning framework to optimize the loss function offline. Meanwhile, it is also possible to avoid this issue by reducing the total number of loss terms. One common way is to enforce the hard constraints [17], which entails modifying the neural networks such that the outputs always satisfy certain BCs and ICs. Another approach is to make use of the energy or variational principles that intrinsically include the PDEs and BCs. For example, solving the Laplacian equation \(\nabla^{2}u=0\) with a Neumann-type BC \(\partial_{\mathbf{n}}u=0\) using PINN needs at least two loss terms. Alternatively, this problem is equivalent to finding the minimum of the functional \(\mathcal{J}=\int_{\Omega}0.5(\nabla u)^{2}\,\mathrm{d}V\) (i.e. \(\delta\mathcal{J}=0\)), which can be treated as the only loss term. In this way, the number of loss terms can be reduced. Mathematically, this example is just a special case of the more general Euler-Lagrangian equation (see A.1), and note that the order of derivatives in the functional is lower than that in the PDEs, which further improves the training efficiency.
A few existing studies have explored the strategy of using energy as the loss function. E et al. [23] proposed the Deep Ritz neural network to solve Poisson's equation (\(-\nabla^{2}u(x)=f(x)\)) with homogeneous essential BC (\(u(x)=0,x\in\partial\Omega\)) by minimizing the functional \(\mathcal{J}=\int_{\Omega}\left[0.5\cdot(\nabla u)^{2}-f(x)\cdot u(x)\right] \mathrm{d}x\), which is also a special case of the Euler-Lagrangian equation. Later, Wang et al. extended this framework to consider inhomogeneous essential BCs at complex boundary geometries and multiple domains [15]. Another case explored is the principle of minimal potential energy in solid mechanics, which states that the deformation of a solid domain will follow the path that minimizes the total potential energy under external loads [21, 20]. For quasi-static elastic responses, the total potential energy \(\mathcal{T}\) consists of the elastic strain energy \(U=\int_{\Omega}0.5\sigma:\varepsilon\,\mathrm{d}V\) and the work potential \(W=-\int_{\partial\Omega}f\cdot u\,\mathrm{d}S\). Minimizing \(\mathcal{T}=U+W\) (\(\delta\mathcal{T}=0\)) is equivalent to solving the corresponding Euler-Lagrangian equation (force equilibrium) \(\nabla\cdot\sigma=0\) with BC \(\sigma\cdot\mathbf{n}=0\). In the authors' previous work, we implemented this principle by introducing the potential energy into the loss function and predicted the deformation of elastic plates [20]. We also compared this energy-based framework with the vanilla residual-based PINN and found that the energy-based one is more efficient in terms of training time due to the lower order derivatives and fewer hyperparameters in the loss function, though the accuracy of both is comparable. It should be mentioned that the residual-based PINN is a more universal framework, while the energy-based one is only limited to systems governed by energy or variational principles. To the authors' best knowledge, the above existing studies only explored simple linear systems under quasi-static conditions. In this study, we aim to look into the dynamics of highly nonlinear and coupled energy storage systems and make use of the variational principles to construct a PIML framework.
Operator learning is another concept that has emerged as a promising SciML technique; it learns the mapping from one function to another function, such as the sequence-to-sequence and image-to-image mappings. As a comparison, many widely-used network architectures, such as fully-connected neural networks (FNNs) and convolutional neural networks (CNNs), are finite-dimensional operators that map one discretized signal or image to another. Recently, some novel architectures have been proposed to learn the infinite-dimensional mappings, such as Deep Operator Networks (DeepONets) [24] and Fourier Neural Operators (FNOs) [25]. A comprehensive review of operator learning with neural networks can be found in [25]. In this study, we will focus on the DeepONet approach developed by Lu et al. [24]. So far, DeepONets have been proven to have better approximation and generalization abilities than FNNs and, therefore, have been used in many applications. One of the advantages of DeepONets is their ability to take the BCs or ICs as inputs, making it theoretically possible to train one network for all scenarios. This means that once the network is trained, it can be used to solve new problems with different boundary and/or initial conditions without additional training. This can be particularly useful in applications where the boundary and/or initial conditions may vary or come with significant uncertainty.
These new advances in SciML have shed light on the modeling of energy-storage systems (ESSs), especially Li-ion batteries. Due to the high-dimensional (e.g., multiple materials, scales, and physical fields) nature of such systems, it is cumbersome to develop a complete physics-based model or fully interpret a big dataset. Developing a unified physics-informed machine learning computational framework that can combine the partially-known physics and a small-size dataset is very appealing and necessary. The fundamental challenge here is the trade-off between the abundance of data and the adequacy of physical laws. At the electrode or cell level, experimental data is relatively easy to obtain but physics is mostly hidden behind the data. Therefore, purely data-driven machine learning algorithms can be applied to predict the performance [7] and lifetime [26] of batteries. When data is expensive or limited, for example, battery degradation data from thousands of cycles, which takes years to collect, some studies proposed frameworks based on PINNs to estimate the states of battery cells [27, 28], identify battery parameters [29], predict the lifetime [30], and recognize degradation patterns [31, 32, 33] at the electrode and cell levels. On the other hand, experimental data is expensive and difficult to collect at the active particle level although many fundamental electro-chemo-mechanical physical theories have been developed at the micro-scales [34, 35]. Physics-based models are often used to describe the single-particle pattern formation and extrapolate to porous electrodes [36, 37, 38], but it is often very time-consuming to solve the models. As explained previously, PIML has a potential advantage to produce efficient surrogates or reduced-order models due to the fast inference speed of machine learning algorithms after training. For example, several studies used PINNs to solve the two equations for the phase-field method, namely Allen-Cahn and Cahn-Hilliard [39, 40, 41], which will be elaborated on in Section 2.
Another reason for applying PIML in ESSs is that the determination of constitutive relations and material constants is challenging. Many advanced algorithms have been developed for the interpretation of large datasets of full-field image data, in the context of phase-field models for electrochemical nonequilibrium thermodynamics [36]. For example, Zhao et al. used PDE-constrained optimization to learn the physics of driven phase separation in lithium iron phosphate nanoparticles from operando images of scanning tunneling x-ray microscopy [42]. Deng et al. used similar methods to learn the constitutive law of the eigen strain change with respect to lithium intercalation from micro X-ray tomography and diffraction images of active particles [43]. This optimization process is often time-consuming and PIML has the potential to achieve faster identification.
Nowadays, the greater scientific community has recognized the value of integrating physics and data into one unified framework as a high-level vision. The energy storage community is one of the pioneering areas. For example, the U.S. Department of Energy (DOE) is the first federal agency to propose the concept of SciML [3]. The current remaining challenge is to find realistic ways to implement them. It is always crucial to first understand the physics in order to construct a proper machine learning architecture for the studied system. The above-mentioned Allen-Cahn and Cahn-Hilliard equations are essential in chemical system modeling. They are able to describe the dynamics of non-conserved and conserved order parameters, respectively, in terms of variationally defined chemical potentials. Both equations can be derived through variational methods that have been well established to obtain the governing equations of complex coupled nonlinear systems [44]. More specifically, Allen-Cahn and Cahn-Hilliard equations are two special cases of gradient flows that entail finding and constructing an appropriate free energy and an inner product to incorporate the kinetics into a variational framework [44, 45, 46]. Gradient flows can be applied to a large variety of physics including diffusion, phase separation, microstructure evolution, etc. Therefore, constructing a machine learning framework for gradient flows can be beneficial to a wide range of applications. As we mentioned earlier, ML can be adopted naturally to solve variational problems, where we can approximate the solutions with ML models by minimizing the free energy functional as a loss function.
In this study, we propose the idea of "Phase-Field DeepONet" as a general neural network framework for dynamical systems governed by gradient flows of free energy functionals, taking advantage of the energy-based loss function, deep operator network, and physics-informed learning. The paper is organized as follows: Section 2 presents the theory of phase-field method and gradient flows; Section 3 describes the framework of Phase-Field DeepONet that incorporates the minimizing movement scheme into a physics-informed deep operator neural network; In Section 4, we investigate three different dynamical systems including the linear relaxation kinetics, Allen-Cahn, and Cahn-Hilliard dynamics to validate the proposed framework.
## 2 phase-field method and gradient flows
### Phase-field method
Phase-field methods are widely used in materials science because of their capability to track microstructure evolution, grain growth and coarsening, crack propagation, etc. Unlike sharp-interface models, phase-field models treat interfaces in a diffuse way with phase-field variables, which can then describe the domain and all interfaces continuously as a whole. There are two types of phase-field variables, namely conserved and non-conserved fields. The evolution (or
dynamics) of both are governed by the total free energy \(\mathcal{F}\) of a system and its variational derivatives with respect to the field variables, which can be viewed as diffusional chemical potentials.
For a single field variable \(\phi\), the standard free energy functional for an inhomogeneous system, proposed by Van der Waals [47] and Cahn and Hilliard [48], is defined as,
\[\mathcal{F}=\int_{\Omega}\left[f(\phi)+\frac{1}{2}\kappa_{\phi}(\nabla\phi)^{2 }\right]\mathrm{d}x, \tag{1}\]
where \(f(\phi)\) is the homogeneous free-energy density and the second term on the right-hand side represents the gradient energy at phase interfaces with the gradient coefficient \(\kappa_{\phi}\). Generally, the field variables evolve in the direction where the free energy continuously decreases. For a conserved field variable, the dynamics can be expressed as a conservation law for gradient-driven fluxes. The Cahn-Hilliard equation can then be obtained,
\[\frac{\partial\phi}{\partial t}=\nabla\cdot M\nabla\frac{\delta\mathcal{F}}{ \delta\phi}, \tag{2}\]
where \(M\) is a transport coefficient (the product of the mobility and the concentration field variable [36]) and the functional derivative (diffusional chemical potential) is given by,
\[\frac{\delta\mathcal{F}}{\delta\phi}=\frac{\partial f}{\partial\phi}-\kappa_{ \phi}\nabla\cdot\nabla\phi. \tag{3}\]
For a non-conserved field variable, we have the Allen-Cahn equation,
\[\frac{\partial\phi}{\partial t}=-M\frac{\delta\mathcal{F}}{\delta\phi}, \tag{4}\]
where the functional derivative is also given by Eq. 3. The Allen-Cahn equation can be viewed as a linearized model of a reaction producing the field variable, proportional to the affinity, or difference in diffusional chemical potential with respect to an external reservoir [49, 50].
### Mathematics of gradient flows
One common way to establish the theories or governing equations of a system is to start with constitutive relations based on experimental data. These theories then need to be checked for consistency with thermodynamic laws. The variational methods, on the other hand, start with thermodynamics so that the derived theories are always consistent with the thermodynamic laws. In addition, the variational methods can handle extreme anisotropy, non-differentiability, and nonlinearity more easily. In this section, we will review a general mathematical framework to derive phase-field models as gradient flows of free energy functionals.
The second law of thermodynamics states that the total free energy of a system always decreases. Therefore, the equations governing the evolution of field variables in a system should be constructed in an appropriate way to guarantee a monotonic decrease in total free energy. One such approach is the gradient flow, which can be described by,
\[\frac{\mathrm{d}}{\mathrm{d}t}\,u(x,t)=-\nabla\mathcal{F}(u), \tag{5}\]
where \(u\) is a field variable that evolves with time (depending on space and time); \(\mathcal{F}\) is a smooth and convex energy functional of \(u\). \(\nabla\mathcal{F}(u)\) indicates the functional gradient of \(\mathcal{F}\) with respect to \(u\). Physically, this equation states that the field variable \(u\) evolves in the direction where the free energy decreases fastest. The functional gradient is the driving force of the dynamic process. Mathematically, analogous to the directional derivative of a multi-variable function, the functional gradient \(\nabla\mathcal{F}\) and functional derivative \(\delta\mathcal{F}/\delta u\) are related by the inner product \(<.,.>\),
\[<\nabla\mathcal{F},v>=\frac{\mathrm{d}\mathcal{F}(u+v\cdot t)}{\mathrm{d}t} \left|{}_{t=0}\right.=\int_{\Omega}\frac{\delta\mathcal{F}}{\delta u}v\, \mathrm{d}x, \tag{6}\]
where \(v\) is an arbitrary function and can be viewed as a flow field; \(t\) is a short time and \(v\cdot t\) is also called the variation of \(u\).
Therefore, the functional gradient depends not only on the free energy functional (through its functional derivative) but also on the construction of the inner product (i.e., the measure of distance).
For an energy functional defined as \(\mathcal{F}=\int_{\Omega}F(x,u(x),\nabla u(x))\,\mathrm{d}x\), the functional derivative can be determined by (see A.2)
\[\frac{\delta\mathcal{F}}{\delta u}=\frac{\partial F}{\partial u}-\nabla\cdot \frac{\partial F}{\partial\nabla u}\,. \tag{7}\]
In order to get the functional gradient, we still need to construct an inner product. In this study, we introduce two different inner products. The first one is the weighted \(L^{2}\) inner product,
\[<f,g>_{L^{2},M}=\int_{\Omega}\frac{f(x)\cdot g(x)}{M}\,\mathrm{d}x, \tag{8}\]
where \(M\) is the weight; \(f(x)\) and \(g(x)\) are two arbitrary functions. The corresponding \(L^{2}\) norm is
\[|f|_{L^{2},M}=\sqrt{\int_{\Omega}\frac{f^{2}(x)}{M}\,\mathrm{d}x}. \tag{9}\]
The second one is the \(H^{-1}\) inner product with a weight, defined as
\[<f,g>_{H^{-1},M}=\int_{\Omega}\nabla\phi_{f}\cdot M\nabla\phi_{g}\,\mathrm{d}x, \tag{10}\]
where \(\phi_{f}\) is the solution of the Poisson's equation with Neumann boundary condition,
\[\begin{cases}\nabla^{2}\phi_{f}=f(x),&\text{in }\Omega,\\ \partial_{n}\phi_{f}=0,&\text{on }\partial\Omega.\end{cases} \tag{11}\]
Note that \(\phi_{f}\) has a unique solution when \(\int_{\Omega}f\,\mathrm{d}x=\int_{\Omega}\phi_{f}\,\mathrm{d}x=0\). The corresponding \(H^{-1}\) norm can be obtained with
\[|f|_{H^{-1},M}=\sqrt{\int_{\Omega}\nabla\phi_{f}\cdot M\nabla\phi_{f}\, \mathrm{d}x}. \tag{12}\]
We can then obtain the functional gradient. For the \(L^{2}\) inner product, with Eq. 6 and Eq. 8 we have
\[<\nabla\mathcal{F},v>_{L^{2},M}=\int_{\Omega}\frac{\nabla\mathcal{F}v}{M}\, \mathrm{d}x=\int_{\Omega}\frac{\delta\mathcal{F}}{\delta u}v\,\mathrm{d}x. \tag{13}\]
Since \(v\) is an arbitrary function, to ensure the equality is always satisfied, we have,
\[\nabla\mathcal{F}=M\frac{\delta\mathcal{F}}{\delta u}. \tag{14}\]
Similarly, with the \(H^{-1}\) inner product (Eq. 6 and Eq. 10) we have,
\[\nabla\mathcal{F}=-\nabla\cdot M\nabla\frac{\delta\mathcal{F}}{\delta u}. \tag{15}\]
Substituting the above two functional gradients (Eq. 14 and Eq. 15) into Eq. 5, we get the Allen-Cahn and Cahn-Hilliard equations, respectively.
### The minimizing movement scheme
We have demonstrated a mathematical way to derive the phase-field equations (i.e., the Allen-Cahn and Cahn-Hilliard equations) from the theory of gradient flows. In order to predict the dynamic response of a system governed by gradient flows, we can directly solve these equations with the finite element (FE) or finite difference (FD) method. Alternatively, we can make use of an important feature of gradient flows. Given a fixed small time step \(\tau>0\) and a smooth and convex functional \(\mathcal{F}(u)\), we can find a time sequence of \(u\), \([u_{1}^{\tau},u_{2}^{\tau},\dots,u_{n}^{\tau}]\) from the initial condition \(u_{0}^{\tau}\), through the following iterated scheme, which is called the minimizing movement scheme,
\[u_{k+1}^{\tau}\in\text{argmin}_{u}\left[\mathcal{F}(u)+\frac{d^{2}(u,u_{k}^{ \tau})}{2\tau}\right], \tag{16}\]
where \(u_{k}^{\tau}\) and \(u_{k+1}^{\tau}\) represent the field distributions at time steps \(k\) and \(k+1\); \(d(.,.)\) indicates the distance between two functions. The time sequence obtained from this iterative minimization scheme is actually the solution of the corresponding PDEs. For the above minimization problem, we know \(u_{k+1}^{\tau}\) is the solution of
\[\frac{\partial}{\partial u}\left[\mathcal{F}(u)+\frac{d^{2}(u,u_{k}^{\tau})}{2\tau}\right]=\nabla\mathcal{F}(u)+\frac{u-u_{k}^{\tau}}{\tau}=0, \tag{17}\]

which gives

\[\frac{u-u_{k}^{\tau}}{\tau}=-\nabla\mathcal{F}(u). \tag{18}\]
This equation is the discrete-time implicit Euler scheme for Eq. 5. Therefore, we can get the solution by the minimizing movement scheme instead of solving the PDEs directly.
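As a concrete illustration, Eq. 16 can be run directly on a discretized field with automatic differentiation. The sketch below (grid size, optimizer, and iteration counts are illustrative choices of ours) uses the quadratic free energy of the relaxation example in Section 4.1, for which each step should track the implicit Euler update \(u\leftarrow u/(1+k\tau)\) with rate constant \(k\).

```python
import torch

def minimizing_movement_step(u_prev, free_energy, dx, tau, n_inner=300, lr=0.05):
    """One step of Eq. (16): argmin_u F(u) + d_{L2}^2(u, u_prev)/(2 tau),
    solved by gradient descent on the discretized field (a sketch)."""
    u = u_prev.clone().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(n_inner):
        opt.zero_grad()
        loss = free_energy(u) + ((u - u_prev) ** 2 * dx).sum() / (2.0 * tau)
        loss.backward()
        opt.step()
    return u.detach()

# Quadratic energy F(u) = int 0.5*k*u^2 dx on [-1, 1] (the example of Sec. 4.1).
k_const, n_grid = 10.0, 100
dx = 2.0 / (n_grid - 1)
x = torch.linspace(-1.0, 1.0, n_grid)
u = torch.sin(torch.pi * x)
free_energy = lambda v: (0.5 * k_const * v ** 2 * dx).sum()
for _ in range(5):  # each step should track u <- u / (1 + k_const * tau)
    u = minimizing_movement_step(u, free_energy, dx, tau=1e-2)
```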
## 3 Phase-Field DeepONet: physics-informed deep operator neural network for gradient flows
In this section, we propose Phase-Field DeepONet, a physics-informed deep operator neural network framework incorporating the aforementioned minimizing movement scheme to solve the gradient flows of free energy functionals. There are two important ingredients in this framework: 1) operator learning, which aims to learn the mapping from one function to another function. In this study, we make use of this concept to learn the mapping of the field variable distribution from the current time step to the next time step; 2) physics-informed machine learning, which incorporates known physics into a machine learning framework. We directly utilize Eq. 16 as the loss function to implement the physics of gradient flows. These two aspects will be further explained in the following.
### Deep Operator Neural Network (DeepONet)
DeepONet was first proposed by Lu et al. [24]. It is a high-level network structure with two sub-networks, namely the "branch" network and the "trunk" network, as shown in Figure 1(a). The trunk network takes the coordinates \(x\) as the input, while the branch network takes the function \(u\) as the input. The final output is given by multiplying the outputs of both networks,
\[\mathcal{G}(x,u)=\sum_{k=1}^{p}b_{k}(u)t_{k}(x)+b_{0}, \tag{19}\]
where \(b_{k}\) and \(t_{k}(k=1,2,...,p)\) are the outputs of the branch network and trunk network, respectively; \(p\) is the number of outputs of both sub-networks.
There are several important features we should note about DeepONet. First, since the trunk network takes coordinates as input, the output is continuous, which means we can get predictions at any location. More importantly, the gradients of outputs with respect to inputs can be easily estimated by automatic differentiation. This is crucial to constructing the physics-informed loss function. Second, it is highly flexible. The essence of this structure is to separate the vector input and function input into two sub-networks. It can be tailored to fit different applications. On the one hand, feature
expansion can be performed on the inputs. For the branch network, instead of a vector of discretized points of a function, we can extract any other features from the function to be the inputs, e.g., magnitude and phase in the frequency domain. On the other hand, the sub-networks can be any type of neural network such as the FNN, CNN, recurrent neural network (RNN), etc.
DeepONet enables us to map one function to another. In real applications, the input function can be the initial or boundary condition and the output can be the solution at a random time. Here, we aim to predict the distribution of a field variable at any given time step. To achieve this, we propose to take the current distribution of a field variable as the input and the distribution at the next time step as the output (i.e. a mapping \(u_{k}^{\tau}\to u_{k+1}^{\tau}\)), which can be expressed by,
\[u_{k+1}^{\tau}=\sum_{i=1}^{p}b_{i}(u_{k}^{\tau})t_{i}(x)+b_{0}, \tag{20}\]
where \(u_{k}^{\tau}=u(x,t=k\tau)\) represents the distribution of \(u\) at the \(k^{\text{th}}\) time step with a time interval of \(\tau\). The proposed network structure works like an explicit time-stepper. Given the input at the \(k^{\text{th}}\) time step \(u_{k}^{\tau}\), we can get the output at the \((k+1)^{\text{th}}\) time step \(u_{k+1}^{\tau}\), which can then be treated as the input to get \(u_{k+2}^{\tau}\). Following this iterative process, a sequence of distributions \([u_{k}^{\tau},u_{k+1}^{\tau},u_{k+2}^{\tau},...]\) at all following time steps can be obtained.
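A minimal PyTorch sketch of this time-stepper is given below. The layer sizes loosely follow the relaxation-kinetics entry of Table 1, but the module and variable names are ours, and we assume the sensor grid coincides with the query grid so the prediction can be fed back as the next input.

```python
import torch
import torch.nn as nn

class DeepONetStepper(nn.Module):
    """Eq. (20) as a module: u_{k+1}(x) = sum_i b_i(u_k) * t_i(x) + b_0."""

    def __init__(self, n_sensors: int = 100, p: int = 100):
        super().__init__()
        self.branch = nn.Sequential(             # sees u_k at fixed sensors
            nn.Linear(n_sensors, 100), nn.Tanh(), nn.Linear(100, p))
        self.trunk = nn.Sequential(              # sees query coordinates x
            nn.Linear(1, 100), nn.Tanh(),
            nn.Linear(100, 100), nn.Tanh(), nn.Linear(100, p))
        self.b0 = nn.Parameter(torch.zeros(1))

    def forward(self, u_k, x):
        b = self.branch(u_k)                     # (batch, p)
        t = self.trunk(x)                        # (n_query, p)
        return b @ t.T + self.b0                 # (batch, n_query)

# Explicit time-stepping: query at the sensor grid and feed the output back.
model = DeepONetStepper()
x = torch.linspace(-1.0, 1.0, 100).unsqueeze(1)  # (100, 1) query/sensor grid
u = torch.sin(torch.pi * x).T                    # (1, 100) example current state
with torch.no_grad():
    for _ in range(10):
        u = model(u, x)
```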
### Phase-Field DeepONet
Machine learning algorithms often require a large dataset for training and the loss function is commonly constructed based on the error (e.g., mean square error, mean absolute error) between the data and predictions. Physics-informed machine learning, however, can work with only a small dataset or even without data. This is achieved by introducing physical laws (e.g. PDEs) into the loss function. Physics-informed machine learning can deal with two typical types of problems, namely the inverse problem and the forward problem. For the former one, the physics is partially known with a small dataset available, and the aim is to learn the unknown physics; for the latter one, we know all the physics without any data and the goal is to solve the governing equations. We will focus on the forward problem in this study. That is to solve the systems governed by gradient flows.
Herein, we propose a general framework with physics-informed DeepONet for gradient flows of free-energy functionals, as illustrated in Figure 1(b), which will be referred to as "Phase-Field DeepONet". We include the DeepONet structure
Figure 1: Schematic illustration of (a) DeepONet and (b) Physics-informed DeepONet with energy-based loss function.
as mentioned above and construct a physics-informed loss function according to the minimizing movement scheme (Eq. 16). The loss function can be written as,
\[\mathcal{L}=\mathcal{F}(u_{k+1}^{\tau})+\frac{d^{2}(u_{k+1}^{\tau},u_{k}^{\tau})} {2\tau}. \tag{21}\]
The training process minimizes this loss function, which is equivalent to the minimizing movement scheme and therefore approximates the ground truth. To train this network, space coordinates \(x\) and distributions \(u_{k}^{\tau}\) can be randomly sampled as training inputs; the output \(u_{k+1}^{\tau}\) is not needed for the training. The trained DeepONets can work as efficient surrogates (explicit time-steppers) that are able to predict the time sequence of field distributions at all following time steps, given the current field distribution. The detailed training process will be explained in the following numerical examples.
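In code, Eq. 21 amounts to one forward pass plus automatic differentiation for the spatial gradient. The sketch below assumes a single input function per evaluation (batch size 1) and plain Riemann-sum quadrature; `energy_density` is our stand-in for the integrand of the free-energy functional, e.g. `lambda u, du: (u**2 - 1)**2 / (4 * eps**2) + 0.5 * du**2` for the 1D version of the density in Eq. 25.

```python
import torch

def phase_field_loss(model, u_k, x, dx, tau, energy_density):
    """Discretized Eq. (21): F(u_{k+1}) + d_{L2}^2(u_{k+1}, u_k)/(2 tau).

    model maps (u_k, x) -> u_{k+1}; du/dx is obtained by autograd, which is
    exact here because each query point enters the output independently."""
    x = x.clone().requires_grad_(True)
    u_next = model(u_k, x)                                    # (1, n_query)
    du_dx, = torch.autograd.grad(u_next.sum(), x, create_graph=True)
    free_energy = (energy_density(u_next.squeeze(0), du_dx.squeeze(1)) * dx).sum()
    dist2 = ((u_next - u_k) ** 2 * dx).sum()
    return free_energy + dist2 / (2.0 * tau)
```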
It is worth noting that a PINN can be regarded as a special example of a physics-informed DeepONet, wherein the branch net is omitted or the input for the branch network is held constant as the initial condition. For example, if we only keep the trunk net in Figure 1(a) and replace \(u_{k}^{\tau}\) in Eq. 21 with \(u_{0}^{\tau}\), the resulting framework is a PINN that solves for the distribution of \(u\) in the next step.
## 4 Numerical examples
### Relaxation kinetics
We start with an example of the relaxation kinetics in 1D space governed by gradient flows. The governing free energy is
\[\mathcal{F}(u)=\int_{\Omega}\frac{1}{2}k\cdot u^{2}(x,t)\,\mathrm{d}x. \tag{22}\]
We consider a 1D domain within \([-1,1]\). With the \(L^{2}\) inner product, the corresponding PDE and boundary conditions can be derived with Eq. 5, Eq. 7, and Eq. 14 (\(M=1\)),
\[\left\{\begin{array}{l}\frac{\partial u}{\partial t}=-ku,\\ u(x=\pm 1)=0.\end{array}\right. \tag{23}\]
Given the initial condition \(u_{0}=u(x,t=0)\), an analytical solution can be found as \(u=e^{-kt}u_{0}\). In this example, we will compare the two different frameworks, namely, the traditional PINN and the proposed Phase-Field DeepONet, to illustrate the feasibility of incorporating the minimizing movement scheme into a machine learning framework and also to discuss the advantages of the proposed framework.
For the PINN framework, the initial condition needs to be given, which is set as \(u_{0}=\text{sin}(\pi x)\). The constant \(k\) is set to 10. Following the minimizing movement scheme, a straightforward approach is to discretize the time domain into many time steps with a fixed time interval \(\tau\). As shown in Figure 1(a), for each time step \(k\), we construct one corresponding neural network \(\mathcal{N}_{k}(x;\theta)\), where \(x\) denotes the input spatial coordinate and \(\theta\) represents the weights and biases to be optimized. The output of the sub-network is the current distribution of the field variable \(u_{k}^{\tau}=u(x,t=k\tau)\). The sum of free energy and distance as in Eq. 16 is directly treated as the loss function,
\[\mathcal{L} =\mathcal{F}(u_{k}^{\tau})+\frac{d_{L^{2}}^{2}\left(u_{k}^{\tau},u_{k-1}^{\tau}\right)}{2\tau} \tag{24}\] \[=\int_{-1}^{1}\left[\frac{1}{2}k(u_{k}^{\tau})^{2}+\frac{(u_{k}^{\tau}-u_{k-1}^{\tau})^{2}}{2\tau}\right]\mathrm{d}x\] \[\approx\frac{1}{N_{F}}\sum_{i=1}^{N_{F}}\frac{1}{2}k(u_{k}^{\tau,i})^{2}+\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\frac{(u_{k}^{\tau,i}-u_{k-1}^{\tau,i})^{2}}{2\tau},\]
where \(u_{k}^{\tau,i}=u(x=x_{i},t=k\tau)\) represents the field value at the sampled locations \(x_{i}\); \(N_{F},N_{d}\) are the total number of samples to evaluate the numerical integration of the free energy term and distance term, respectively. \(N_{F}\) and \(N_{d}\) can
be different; for simplicity, they are set to be equal in this study. The training process follows the minimizing movement scheme: a) we first train the 1st sub-network \(\mathcal{N}_{1}\) given the initial condition \(u_{0}\); b) after training, we can get the output \(u_{1}^{\tau}\). We then train the 2nd sub-network \(\mathcal{N}_{2}\) with \(u_{1}^{\tau}\) as the input; c) we repeat this process to train all the sub-networks sequentially.
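Steps a)-c) translate into a short training loop. In the sketch below, `make_subnet`, the grid, and the hyperparameters are illustrative placeholders (the actual settings are listed in the next paragraph), and the constant quadrature factors of Eq. 24 are absorbed into the means.

```python
import torch
import torch.nn as nn

def make_subnet():
    """Illustrative sub-network N_k: a small MLP mapping x -> u_k(x)."""
    return nn.Sequential(nn.Linear(1, 20), nn.ReLU(),
                         nn.Linear(20, 20), nn.ReLU(), nn.Linear(20, 1))

k_const, tau, n_steps = 10.0, 1e-3, 20
x = torch.linspace(-1.0, 1.0, 1000).unsqueeze(1)
u_prev = torch.sin(torch.pi * x)                 # initial condition u_0

for step in range(n_steps):                      # train N_1, N_2, ... in sequence
    net = make_subnet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for epoch in range(500):
        opt.zero_grad()
        u = net(x)
        loss = (0.5 * k_const * u ** 2).mean() \
             + ((u - u_prev) ** 2).mean() / (2.0 * tau)
        loss.backward()
        opt.step()
    u_prev = net(x).detach()                     # output u_k feeds step k+1
```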
All the sub-networks were constructed with two hidden layers with 20 nodes each. The ReLU activation function was adopted. We randomly sampled 1000 uniformly distributed data points within the space domain (\(N_{F}=N_{d}=1000\)) as the training data. A learning rate of 0.001 and the Adam optimizer were used. Each sub-network was trained for 500 epochs in sequence, and the training was repeated for 3 rounds. The training took around 25 minutes on a single GPU (NVIDIA T400 4GB) system. The predictions of the sub-networks at different time steps are compared with the analytical results and a good match can be observed (Figure 2b). We can also see the decreasing free energy of the system (Figure 2c). These successfully validate the feasibility of implementing the minimizing movement scheme in physics-informed machine learning. However, this PINN-based framework is not very efficient. First, all the sub-networks have to be trained in sequence, and the number of sub-networks grows quickly if predictions over a long time horizon are required. Second, the PINNs need to be retrained once the initial condition is changed, which limits their applications.
To address these issues, as we proposed in the previous section, a DeepONet is constructed to take the distribution of the current time step as the input and predicts the distribution of the next time step, which works like an explicit time-stepper. As shown in Figure 3a, both the branch net and the trunk net are fully connected neural networks. Note that a continuous function cannot be directly fed into the branch net; discretization is therefore performed here. We discretized \(u\) by sampling its values at equally spaced locations in the space domain; the resulting vector of values was then treated as the input (Figure 3b). The loss function is the same as in Eq. 24, except that \(k\) and \(k-1\) should be replaced by \(k+1\) and \(k\), respectively.
The input of the whole network is \([x,u_{k}^{\tau}]\). The physical constant \(k\) is also set to 10; a given initial condition is unnecessary for DeepONet. To simplify the training process, we evenly sampled 100 points (\(N_{F}=N_{d}=100\)) in the space domain [-1, 1]. We generated 10,000 different distributions of \(u\) by a Gaussian random process, half of which
Figure 2: (a) The multi-network structure for minimizing movement scheme; (b) Comparison between the ground truth and neural network predictions at different time steps; (c) Predicted free energy change with time.
was for training and the other half for testing. The size and structure of the DeepONet can be found in Table 1. The network was trained for 3000 epochs with a learning rate of 0.001. The training time was around 10 minutes on the same single GPU system. The training and testing error is shown in Figure 3c. The coefficient of determination (r2-value) reached 0.99 in the testing set. This implies that the trained DeepONet can accurately predict the relaxation kinetics. Figure 4a demonstrates the predicted distributions of \(u\) at different time steps, which agree well with the ground truth. Note that the sinusoidal input of \(u\) is outside the training dataset, which indicates good generalization of the trained DeepONet. More importantly, given a random input of \(u\) at any time step, the trained network can accurately predict the distribution of \(u\) at the next step (Figure 4b).
\begin{table}
\begin{tabular}{l|c c|c c|c} \hline
**Case** & \multicolumn{2}{c|}{**Trunk Net**} & \multicolumn{2}{c|}{**Branch Net**} & **\# of Sensors** \\
 & Depth & Width & Depth & Width & \\ \hline
Relaxation Kinetics (1D) & 3 & 100 & 2 & 100 & 100 \\ \hline
\end{tabular}
\end{table}
Table 1: Size and structure of the DeepONet for the relaxation kinetics example; the network architectures for the other examples are described in the text.
### Allen-Cahn equation
In this example, the more complex Allen-Cahn equation in 2D domain is explored. The total free energy governing this equation is,
\[\mathcal{F}=\int_{\Omega}\left[\frac{1}{\varepsilon^{2}}f(u)+0.5(\nabla u)^{2} \right]\mathrm{d}A, \tag{25}\]
where \(f(u)\) is the bulk energy density, and a common choice is \(f(u)=\frac{(u^{2}-1)^{2}}{4}\); the second term represents the interfacial energy. With the \(L^{2}\) norm (or inner product), the corresponding PDE and boundary conditions can then be derived with Eq. 5, Eq. 7, and Eq. 14,
\[\left\{\begin{array}{l}\dfrac{\partial u}{\partial t}=\nabla^{2}u-\frac{1}{ \varepsilon^{2}}\dfrac{\mathrm{d}f}{\mathrm{d}u}\,,\\ \nabla u\cdot\mathbf{n}=0.\end{array}\right. \tag{26}\]
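For reference solutions of Eq. 26 (the comparison in Section 4.2 uses a finite-difference solver, though its exact discretization is not stated), a minimal explicit scheme with zero-flux boundaries can look as follows; grid size and time step are illustrative and must respect the usual explicit-stability limits.

```python
import numpy as np

def allen_cahn_step(u, dt, dx, eps=0.25):
    """One explicit finite-difference step of Eq. (26) on a 2D grid.

    Zero-flux (Neumann) boundaries via edge replication; df/du = u^3 - u
    for the double-well density f(u) = (u^2 - 1)^2 / 4."""
    up = np.pad(u, 1, mode="edge")
    lap = (up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:] + up[1:-1, :-2]
           - 4.0 * u) / dx ** 2
    return u + dt * (lap - (u ** 3 - u) / eps ** 2)

# Illustrative rollout on [-1, 1]^2; dt is kept below the diffusive limit dx^2/4.
n = 64
dx = 2.0 / (n - 1)
u = np.random.uniform(-0.1, 0.1, size=(n, n))
for _ in range(1000):
    u = allen_cahn_step(u, dt=1e-4, dx=dx)
```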
A 2D domain \([-1,1]\times[-1,1]\) is considered. The only physical constant, the length scale \(\varepsilon\), is set to 0.25. We constructed a DeepONet structure, as shown in Figure 5a. The branch net takes the 2D distribution at the current time step as the input; a convolution neural network (CNN) is therefore adopted, which is then connected to a FNN. The trunk net is a FNN taking the spatial coordinates as the input. The final output is the distribution at the next time step. The loss function for this case is
\[\begin{split}\mathcal{L}&=\mathcal{F}(u_{k+1}^{\tau})+\frac{d_{L^{2}}^{2}\left(u_{k+1}^{\tau},u_{k}^{\tau}\right)}{2\tau}\\ &=\int_{\Omega}\left[\frac{1}{\varepsilon^{2}}f(u_{k+1}^{\tau})+0.5(\nabla u_{k+1}^{\tau})^{2}+\frac{\left(u_{k+1}^{\tau}-u_{k}^{\tau}\right)^{2}}{2\tau}\right]\mathrm{d}x\,\mathrm{d}y\\ &\approx\frac{1}{N_{F}}\sum_{i=1}^{N_{F}}\left[\frac{1}{\varepsilon^{2}}f(u_{k+1}^{\tau,i})+0.5(\nabla u_{k+1}^{\tau,i})^{2}\right]+\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\frac{(u_{k+1}^{\tau,i}-u_{k}^{\tau,i})^{2}}{2\tau},\end{split} \tag{27}\]
where \(u_{k}^{\tau,i}=u(x=x_{i},y=y_{i},t=k\tau)\) denotes the field value at the sampled locations; \(N_{F}\) and \(N_{d}\) are the total number of samples for evaluating the integral of the free energy and the distance term, respectively. To train this network, random 2D distributions of \(u\) are generated by Gaussian random process on a uniform \(28\times 28\) grid (\(N_{F}=256\)) as the
Figure 4: Predictions of trained DeepONet: (a) predictions at multiple time steps; (b) prediction for a random input.
input of the branch net (Figure 5b). A total of 10,000 distributions (images) were generated, half of which were used for training and the remaining half for testing. To simplify the training process, the corresponding spatial coordinates of the same uniform \(28\times 28\) grid (\(N_{d}=N_{F}=256\)) were taken as the input of the trunk net. The time step \(\tau=0.005\). The size and structure of the network can be found in Table 1. The CNN in the branch net consists of two layers. The first layer has 32 filters with a kernel size of 3 and a stride of 1; the second layer has 6 filters with a kernel size of 3 and a stride of 3. The output of the CNN is then flattened and fully connected to an output layer of 120 nodes. The trunk net consists of three hidden layers and each has 120 nodes. The training process took around 150 minutes on the single GPU system due to the relatively large network and the higher-order gradient terms in the loss function.
Figure 5c shows the training and testing error. An r2-value as high as 0.96 can be reached in the testing dataset. The predictions of the trained DeepONet are compared with the corresponding solution by the finite difference method, which is treated as the ground truth (Figure 6). One representative sample in the testing set is compared in Figure 6a, b, and c, where we can see the prediction of the neural network well matches the ground truth. We also compared a quite distinct case (Figure 6d,e, and f), the distribution of which is not seen in the training set. A good match can still be observed. This further validates the reliability of the proposed framework.
Figure 5: (a) Illustration of the physics-informed DeepONet incorporating the minimizing movement scheme for the 2D Allen-Cahn equation; (b) Image input of the branch net (\(28\times 28\) grid of randomly distributed \(u\) generated by a Gaussian random process); (c) Mean square error and r2-value (accuracy) in training and testing sets.
### Cahn-Hilliard equation
An even more challenging case is the Cahn-Hilliard equation. Though the free energy is the same as in the previous case (Eq. 25), a different inner product, the \(H^{-1}\) inner product, is used to describe the dynamics, which gives a higher-order PDE with boundary conditions,
\[\left\{\begin{array}{l}\frac{\partial u}{\partial t}=\nabla^{2} \left(\frac{1}{\varepsilon^{2}}\,\frac{\mathrm{d}f}{\mathrm{d}u}-\nabla^{2}u \right),\\ \nabla\left(\frac{1}{\varepsilon^{2}}\,\frac{\mathrm{d}f}{\mathrm{d}u}- \nabla^{2}u\right)\cdot\mathbf{n}=0,\\ \nabla u\cdot\mathbf{n}=0.\end{array}\right. \tag{28}\]
We consider a 1D domain [0, 1]. The loss function is,
\[\begin{split}\mathcal{L}&=\mathcal{F}(u_{k+1}^{\tau})+\frac{d_{H^{-1}}^{2}\left(u_{k+1}^{\tau},u_{k}^{\tau}\right)}{2\tau}\\ &=\int_{0}^{1}\left[\frac{1}{\varepsilon^{2}}f(u_{k+1}^{\tau})+0.5(\nabla u_{k+1}^{\tau})^{2}+\frac{(u_{k+1}^{\tau}-u_{k}^{\tau})(\phi_{k+1}^{\tau}-\phi_{k}^{\tau})}{2\tau}\right]\mathrm{d}x\\ &\approx\frac{1}{N_{F}}\sum_{i=1}^{N_{F}}\left[\frac{1}{\varepsilon^{2}}f(u_{k+1}^{\tau,i})+0.5(\nabla u_{k+1}^{\tau,i})^{2}\right]+\frac{1}{N_{d}}\sum_{i=1}^{N_{d}}\frac{(u_{k+1}^{\tau,i}-u_{k}^{\tau,i})(\phi_{k+1}^{\tau,i}-\phi_{k}^{\tau,i})}{2\tau},\end{split} \tag{29}\]
Figure 6: Comparison between the predictions of the trained DeepONet and the ground truth for two representative cases: (a), (b), (c) for a case in the testing set; (d), (e), (f) for another distinct case outside the dataset. Data points in (c) and (f) were extracted from the diagonals as indicated in (b) and (e).
where \(\phi_{k}^{\tau}\) is the solution of Poisson's equation with the source term \(u_{k}^{\tau}\). A linear mapping from discretized \(u_{k}^{\tau}\) to \(\phi_{k}^{\tau}\) can be obtained with the finite difference scheme,
\[\phi_{k}^{\tau}=\mathbf{M}^{-1}\mathbf{u}_{k}^{\tau}, \tag{30}\]
where \(\phi_{k}^{\tau}=[\phi_{k}^{\tau,1},\dots,\phi_{k}^{\tau,i},\dots,\phi_{k}^{ \tau,N_{d}}]\), \(\mathbf{u}_{k}^{\tau}=[u_{k}^{\tau,1},\dots,u_{k}^{\tau,i},\dots,u_{k}^{\tau,N_ {d}}]\), and
\[\mathbf{M}=\frac{1}{\Delta x}\begin{bmatrix}-1&1&&&\\ 1&-2&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&-2&1\\ &&&1&-1\end{bmatrix},\qquad\Delta x=\frac{1}{N_{d}-1}. \tag{31}\]
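A NumPy sketch of this mapping is given below. Note that the pure-Neumann matrix \(\mathbf{M}\) is singular (constant vectors lie in its null space, which is why the zero-mean condition following Eq. 11 is needed), so the sketch uses a least-squares solve rather than a plain inverse; this choice is ours, not prescribed by the paper.

```python
import numpy as np

def build_M(n_d):
    """Assemble the finite-difference matrix of Eq. (31)."""
    dx = 1.0 / (n_d - 1)
    M = (np.diag(-2.0 * np.ones(n_d))
         + np.diag(np.ones(n_d - 1), 1) + np.diag(np.ones(n_d - 1), -1))
    M[0, 0] = M[-1, -1] = -1.0
    return M / dx

def h_inv_pairing(u_next, u_prev, M):
    """Distance term of Eq. (29): (u_{k+1}-u_k) paired with (phi_{k+1}-phi_k).

    lstsq returns the minimum-norm phi, sidestepping the singular Neumann
    matrix; a constant shift of phi cancels in the pairing when u is
    mass-conserving, as in Cahn-Hilliard dynamics."""
    phi_next = np.linalg.lstsq(M, u_next, rcond=None)[0]
    phi_prev = np.linalg.lstsq(M, u_prev, rcond=None)[0]
    return np.mean((u_next - u_prev) * (phi_next - phi_prev))
```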
Considering the complexity of this problem, we started with a relatively simple PINN-based framework to further validate the feasibility of incorporating the minimizing movement scheme into physics-informed machine learning. An initial condition \(u(x,t=0)=\cos(4\pi x)\) and a boundary condition \(u(x=0,t)=u(x=1,t)=0\) were given. We used the same structure as described in the first example. Two different cases with (\(\varepsilon=0.025\)) and without (\(\varepsilon=0.25\)) apparent phase separation were explored by varying the length scale \(\varepsilon\). The corresponding time steps \(\tau\) are \(5\times 10^{-4}\) and \(5\times 10^{-5}\) for the two cases. The training process is the same as in the first example and the training results are shown in Figure 7a, b. We can see a good agreement between the predictions of PINNs and the ground truth for both cases. This indicates that incorporating the minimizing movement scheme into machine learning also works for this fourth-order equation.
We then trained a DeepONet, the structure of which can be found in Table 1. Similarly, 10,000 randomly distributed \(u\) were sampled for training (half) and testing (half). Figure 7c shows the results for the case without apparent phase separation. The prediction of the trained DeepONet captures the ground truth well at multiple time steps. These further validate the feasibility of the proposed framework. It is important to note that accurately predicting the dynamic evolution of the phase separation remains a significant challenge for the current framework. This is mainly because the overall phase-separation dynamics span a relatively large time scale, while a small time step is still required to capture the fast separation process; the resulting slight change of the field within a single time step is difficult for the current framework to capture. We will further discuss this limitation in the next sections.
## 5 Discussion
So far, we successfully developed a general physics-informed deep operator neural network with an energy-based loss function and have validated it with three different numerical examples. Here, we would like to elaborate on a few aspects to deepen the theoretical base of this approach and broaden its applicability.
Figure 7: Predictions of PINN for cases without phase separation (a) and with phase separation (b). (c) Prediction of DeepONet for the case without phase separation.
### Estimation of the time step \(\tau\)
The time step \(\tau\) needs to be small enough to achieve an accurate solution. However, to the authors' best knowledge, there is no direct method to estimate the upper limit of \(\tau\). In this study, we gradually decreased the time step to ensure the predictions converged to the ground truth. Figure 8 shows the training results for three different values of \(\tau\) in the kinetic relaxation example. As the value of \(\tau\) increases from 0.01 (Figure 8a) to 0.02 (Figure 8b) and 0.04 (Figure 8c), a growing discrepancy between DeepONet predictions and ground truth can be observed.
In practice, we have two suggestions for estimating the time step. First, select a time step that enables the detection of a noticeable change in the field variable if experimental data are available. Second, optimize this time step with a trial-and-error process based on a PINN. As we mentioned earlier, a PINN is a simplified case of a DeepONet with only the trunk net. Training a PINN is much faster than training a DeepONet, and it is therefore suggested to determine \(\tau\) with a PINN. Since the estimation of the time step should be independent of the initial conditions, a time step ensuring a converged solution for the PINN should also work for the DeepONet.
The requirement of a small time step is also a limitation of the proposed framework. When making predictions on a large time scale, the trained DeepONets need to be evaluated more times. This will decrease the efficiency and the error is expected to accumulate. To avoid this, we can train a DeepONet without including the minimizing movement scheme. The outputs are therefore required, either from experiments or simulations (supervised learning). This way, a large time step can be selected to further increase the computation efficiency.
The potential for implementing an adaptive time step within this framework is another compelling topic to investigate. As in the phase separation case of the Cahn-Hilliard equation (Figure 7b), prior to the onset of phase separation, the field variable undergoes minimal changes and this transitional period occurs over a relatively extended temporal duration. In contrast, during phase separation, the process unfolds rapidly within a short time frame. Therefore, the possible implementation of an adaptive time step, wherein the time step is larger during the pre-phase separation period and smaller during the phase separation period, has the potential to be advantageous in terms of computation efficiency.
### Influence of training samples
The sampling method is another possible factor influencing the accuracy. The input function \(u(x)\) of the field variable is generated by a mean-zero Gaussian random process,
\[\begin{split} u(x)&\sim\mathbf{G}(0,k_{l}(x_{1},x_{2})),\\ k_{l}(x_{1},x_{2})&=\exp(-||x_{1}-x_{2}||^{2}/2l^{2}),\end{split} \tag{32}\]
where \(k_{l}(x_{1},x_{2})\) is the covariance kernel with a length-scale parameter \(l\). It controls the smoothness of the generated distributions of \(u\). A larger \(l\) results in a smoother \(u\) (Figure 9a).
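The sampling of Eq. 32 can be implemented with a Cholesky factorization of the kernel matrix; the small jitter on the diagonal below is a standard numerical-stability device and an assumption of this sketch, not part of the paper's recipe.

```python
import numpy as np

def sample_grf(x, length_scale=0.2, n_samples=1, jitter=1e-6):
    """Draw u(x) from the mean-zero Gaussian random process of Eq. (32)."""
    diff = x[:, None] - x[None, :]
    K = np.exp(-diff ** 2 / (2.0 * length_scale ** 2))
    L = np.linalg.cholesky(K + jitter * np.eye(len(x)))  # jitter for stability
    return (L @ np.random.randn(len(x), n_samples)).T    # (n_samples, len(x))

# Smoother inputs with l = 0.5, sharper ones with l = 0.1 (cf. Figure 9a).
x = np.linspace(-1.0, 1.0, 100)
u_smooth = sample_grf(x, length_scale=0.5, n_samples=5)
u_sharp = sample_grf(x, length_scale=0.1, n_samples=5)
```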
In this study, the training and testing datasets are set to have the same "smoothness" for simplicity. For example, \(l=0.2\) is used for both training and testing datasets in the first numerical example. However, this may impact the accuracy when the input distribution of \(u\) has a different degree of smoothness. Figure 9b and c show the predictions of the same DeepONet for a smoother (\(l=0.5\)) and a sharper (\(l=0.1\)) randomly generated input of \(u\), respectively. A strong agreement can still be seen for the smoother input; however, discrepancies can be observed at the sharp corners for the sharper input. One potential solution for further enhancing the performance is to include training data with
Figure 8: Predictions of DeepONets for cases with different time steps \(\tau\): (a) \(\tau=0.01\), (b) \(\tau=0.02\), (c) \(\tau=0.04\).
varying degrees of smoothness. Note that the overall trend is captured reasonably well even with the sharper input, which demonstrates the generalization ability of this framework when applied to random inputs.
### Other potential improvements
Apart from optimizing the time step and sampling dataset, there are several other aspects that could be improved in the future for this framework.
a) Accounting for a wider range of physical constants. This will be beneficial to practical applications. Currently, a fixed set of physical constants is used for simplicity, but in reality, these values may vary across different systems. To address this issue, physical constants could be included as inputs by adding an extra branch net, allowing a single DeepONet to be trained for all possible physical constants.
b) Handling irregular-shaped domains. This would further enhance its applicability. DeepONet is capable of handling any fixed irregular domain by randomly sampling locations in a 2D or higher dimensional space and feeding them into the network as a vector. However, the challenge is how a DeepONet trained for a specific domain can be applied to domains with other geometric shapes.
c) Extending the framework to higher-dimensional problems. It would be interesting to investigate whether DeepONets trained at lower dimensions can be effectively applied to higher dimensions. Phase-field simulations at higher dimensions are notoriously computationally expensive, and scaling up the simulation using DeepONets could potentially overcome this issue.
d) Inverse learning from experiment data. The current framework focuses on the forward problem, where all physics and constants are known and the DeepONet approximates the solution with unsupervised learning (no experiment data needed). It will be even more meaningful if we could extend the framework to the inverse problem. In the inverse problem, only part of the physics and some experimental data are known, and the goal is to identify the unknown physical constants or laws. Modifying the loss function to include the experimental data is one way to realize this. By doing so, both physics and experimental data can be included in one framework, and the training process is able to approximate the ground truth as well as learn the unknown physical constants and laws. However, the training process can be time-consuming and is not applicable for fast or real-time identification. Developing a framework for fast identification based on DeepONets would require more effort.
## 6 Conclusion
We propose a physics-informed Phase-Field DeepONet framework for dynamical systems governed by gradient flows of free energy functionals. The minimizing movement scheme is incorporated into the framework to solve the system dynamics instead of directly solving the governing PDEs. Three different numerical examples validate the proposed framework, including the two major equations of the phase-field method, namely the Allen-Cahn and Cahn-Hilliard equations. Some major conclusions can be drawn from this work:
1. Variational principles, such as gradient flows, hold great potential to be seamlessly integrated into a physics-informed machine learning framework, providing a novel approach for the fusion of data and physics with wider practical implications.
Figure 9: (a) Random distributions generated with different length scales \(l=0.1\), \(0.2\), and \(0.5\); predictions of DeepONets for randomly generated inputs of \(u\) with length scales \(l=0.5\) (b) and \(l=0.1\) (c).
2. The proposed Phase-Field DeepONet framework successfully solves both the Allen-Cahn and Cahn-Hilliard equations in the phase-field method, demonstrating its effectiveness in simulating pattern formation in chemical systems.
3. The Phase-Field DeepONets trained in this study can serve as efficient explicit time-steppers, potentially enabling fast real-time predictions of dynamic systems.
This work raises the possibility of deep operator learning of more general phase-field models, including those of chemical nonequilibrium thermodynamics [51; 50], which involve nonlinear dependencies of fluxes or reaction rates on diffusional chemical potentials. The minimizing movement scheme would need to be extended to go beyond the gradient flows approximation to account for nonlinear dynamics. In this way, the Phase-Field DeepONet framework could enable data-driven learning and fast simulations of pattern formation from rich image datasets, going beyond PDE-constrained optimization [42].
## Acknowledgment
W.L. and J.Z. gratefully acknowledge the support of the present work through the NASA 19-TTT-0103 project (Award No. 80NSSC21M0114). They are also supported by the Northeastern University and College of Engineering startup funds. M.Z.B and W.L. are grateful for the support of Toyota Research Institute through the D3BATT Center on Data-Driven-Design of Rechargeable Batteries.
## Appendix A Basics of variational calculus
### Euler-Lagrange equation
Given a smooth manifold \(X\) and a smooth real-valued function \(f=f(x,u(x),u^{\prime}(x))\), the functional \(\mathcal{F}\) defined as
\[\mathcal{F}=\int_{x_{a}}^{x_{b}}f(x,u(x),u^{\prime}(x))\,\mathrm{d}x \tag{33}\]
has a stationary value (maximum, minimum, or saddle point) if the Euler-Lagrange equation is satisfied,

\[\frac{\partial f}{\partial u_{i}}-\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{\partial f}{\partial u_{i}^{\prime}}=0,\quad i=1,2,...,n. \tag{34}\]
### Derivation of functional derivative
Consider the functional derivative of a specific type of energy functional that only depends on the field variable and its first-order derivatives, i.e., \(\mathcal{F}=\int_{\Omega}F(x,u(x),\nabla u(x))\,\mathrm{d}x\). We have
\[\begin{split}\int_{\Omega}\frac{\delta\mathcal{F}}{\delta u}v\, \mathrm{d}x&=\left[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}F( x,u+vt,\nabla u+\nabla vt)\,\mathrm{d}x\right]_{t=0}\\ &=\int_{\Omega}\left(\frac{\partial F}{\partial u}\,v+\frac{ \partial F}{\partial\nabla u}\cdot\nabla v\right)\mathrm{d}x\\ &=\int_{\Omega}\left(\frac{\partial F}{\partial u}\,v-(\nabla \cdot\frac{\partial F}{\partial\nabla u})v\right)\mathrm{d}x+\int_{\partial \Omega}\left(\frac{\partial F}{\partial\nabla u}\cdot\mathbf{n}\right)v\, \mathrm{d}s\\ &=\int_{\Omega}\left(\frac{\partial F}{\partial u}-\nabla\cdot \frac{\partial F}{\partial\nabla u}\right)v\,\mathrm{d}x.\end{split} \tag{35}\]
The fourth line is valid when \(v=0\) or \(\frac{\partial F}{\partial\nabla u}\cdot\mathbf{n}=0\) on the boundary. Since \(v\) is an arbitrary function, we get the functional derivative
\[\frac{\delta\mathcal{F}}{\delta u}=\frac{\partial F}{\partial u}-\nabla\cdot \frac{\partial F}{\partial\nabla u}\,. \tag{36}\] |
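As a sanity check of Eq. (36), the sketch below verifies it numerically for a Ginzburg-Landau-type energy density \(F=\frac{1}{4}(u^{2}-1)^{2}+\frac{\kappa}{2}|u^{\prime}|^{2}\) (the double-well energy underlying the Allen-Cahn example), for which Eq. (36) gives \(\delta\mathcal{F}/\delta u=u^{3}-u-\kappa u^{\prime\prime}\); the grid, \(\kappa\), and test function are illustrative choices.

```python
import numpy as np

# Free energy F[u] = integral of ( (u^2-1)^2/4 + (kappa/2) u'^2 ) dx, periodic grid.
n, kappa = 256, 1e-2
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

def energy(u):
    du = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central difference u'
    return np.sum(0.25 * (u**2 - 1) ** 2 + 0.5 * kappa * du**2) * dx

# Eq. (36): delta F / delta u = dF/du - div(dF/d(grad u)) = u^3 - u - kappa u''
lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
analytic = u**3 - u - kappa * lap

# Finite-difference check: perturb one grid value and difference the energy;
# dividing by dx converts the partial derivative into a derivative density.
numeric, eps = np.empty(n), 1e-6
for i in range(n):
    up, um = u.copy(), u.copy()
    up[i] += eps
    um[i] -= eps
    numeric[i] = (energy(up) - energy(um)) / (2 * eps * dx)

print(np.max(np.abs(numeric - analytic)))  # ~1e-6: agreement up to discretization error
```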
2306.16081 | Graph neural networks for sound source localization on distributed
microphone networks | Distributed Microphone Arrays (DMAs) present many challenges with respect to
centralized microphone arrays. An important requirement of applications on
these arrays is handling a variable number of input channels. We consider the
use of Graph Neural Networks (GNNs) as a solution to this challenge. We present
a localization method using the Relation Network GNN, which we show shares many
similarities to classical signal processing algorithms for Sound Source
Localization (SSL). We apply our method for the task of SSL and validate it
experimentally using an unseen number of microphones. We test different feature
extractors and show that our approach significantly outperforms classical
baselines. | Eric Grinstein, Mike Brookes, Patrick A. Naylor | 2023-06-28T10:27:53Z | http://arxiv.org/abs/2306.16081v1 | # Graph Neural Networks for Sound Source Localization on Distributed Microphone Networks
###### Abstract
Distributed Microphone Arrays (DMAs) present many challenges with respect to centralized microphone arrays. An important requirement of applications on these arrays is handling a variable number of input channels. We consider the use of Graph Neural Networks (GNNs) as a solution to this challenge. We present a localization method using the Relation Network GNN, which we show shares many similarities to classical signal processing algorithms for Sound Source Localization (SSL). We apply our method for the task of SSL and validate it experimentally using an unseen number of microphones. We test different feature extractors and show that our approach significantly outperforms classical baselines.
Eric Grinstein 1, Mike Brookes, Patrick A. Naylor Department of Electrical and Electronic Engineering, Imperial College London, U.K.
Footnote 1: Contact: [email protected]
Footnote 2: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956369
## 1 Introduction
Distributed Microphone Array (DMA) signal processing [1] is an active field in the acoustic signal processing community, with important applications in speech enhancement, noise reduction and Sound Source Localization (SSL) [1, 2, 3]. In contrast to centralized microphone arrays [4], DMAs may be created through the wireless connection of multiple distributed devices such as cell phones, laptops and virtual assistants. In this context, they are also frequently referred to as Ad-hoc microphone arrays, or Wireless Acoustic Sensor Networks (WASNs).
Although DMAs bring advantages in terms of acoustic coverage with respect to centralized arrays, they also bring challenges. One such challenge forms the focus of this paper, namely, having a dynamic number of input microphone channels, as a DMA may be created using the devices present in a dynamic scene. This number may change in runtime due to many reasons, including software or hardware failures of individual devices, battery depletion, or the device being removed from the scene. This restricts the application of many of the deep learning methods that have been successfully applied to centralized microphone networks such as [5, 6], which require a static input size. Conversely, classical SSL approaches such as [7] are able to function on an arbitrary number of microphones.
In this work, we propose the use of Graph Neural Networks (GNNs) [8, 9, 10] as a suitable way of processing DMA signals for the task of SSL. We adopt a GNN variant called the Relation network (RelNet) [10]. We validate our approach for the task of localizing a single static source in multiple scenarios, showing it to outperform the baselines. The main contribution of this work is the first application of GNNs for the task of SSL, allowing our method to handle a variable number of microphone channels. Furthermore, our approach can work on unseen microphone coordinates and room dimensions through a metadata fusion procedure.
This paper continues by providing a problem statement in Sec. 2. Sec. 3 includes a review of related work on DMAs using deep learning, as well as a review of classical SSL methods and the RelNet GNN, which serve as building blocks for our model. In Sec. 4, we describe our proposed approach. Sec. 5 describes our experimental validation, Sec. 6 presents the results, and Sec. 7 concludes the paper.
Figure 1: (a): Example of a graph of distributed microphones. (b): Representation of the GNN-SLF model for three microphones. The computation of the heatmaps is described in Sec. 4.
## 2 Problem Statement
Our goal is to estimate the 2D coordinates \(\hat{\mathbf{p}}_{s}\) of a sound source located at \(\mathbf{p}_{s}=[p_{s}^{x}\,p_{s}^{y}]^{T}\) within a reverberant room of known dimensions \(\mathbf{d}=[d^{x}\,d^{y}\,d^{z}]^{T}\). The source emits a signal \(s(t)\) at instant \(t\). Besides the source, \(M\) microphones are present in the room, where microphone \(m\) has a known position \(\mathbf{p}_{m}=[p_{m}^{x}\,p_{m}^{y}\,p_{m}^{z}]^{T}\), and receives a signal \(x_{m}(t)\) modeled as
\[x_{m}(t)=a_{m}s(t-\tau_{m})+\epsilon_{m}(t), \tag{1}\]
where \(a_{m}\) is a scaling factor representing the attenuation suffered by the wave propagating from \(\mathbf{p}_{s}\) to \(\mathbf{p}_{m}\). \(\tau_{m}\) represents the time delay taken for a sound wave to propagate from the source to the microphone, and \(\epsilon_{m}\) models the noise and reverberation. We assume \(\tau_{m}\) to be equal to \(c^{-1}\|\mathbf{p}_{m}-\mathbf{p}_{s}\|_{2}\), the distance between the source and the microphone divided by the speed of sound \(c\).
In our method and baselines, the microphone signals are sampled and processed in frames of size \(L\), defined as \(\mathbf{x}_{m}(t)=[x_{m}(t-(L-1)T_{s})...x_{m}(t)]^{T}\), where \(T_{s}\) is the sample period. Finally, we also define a metadata vector \(\mathbf{\phi}\) as
\[\mathbf{\phi}=[p_{1}^{x}\,p_{1}^{y}...d^{y}\,d^{z}]^{T}, \tag{2}\]
which serves as a secondary input to our method, allowing it to function on any room dimensions and microphone coordinates.
## 3 Related Work
### Classical SSL methods
Our proposed method can be seen as a generalization of classical grid-based SSL methods such as the Time-Difference-of-Arrival (TDOA) [11, 12], Spatial Likelihood Function (SLF) [7, 13] and energy-based [14] approaches. These approaches share many similarities, which are summarized by their shared behaviour described in Alg. 1.
```
function estimate_source_location(X, phi):
    u <- 0
    for each i in [1..M]:
        for each j in [(i+1)..M]:
            u <- u + F(x_i, x_j; phi(i, j))
    return G(u)
```
**Algorithm 1** Classical SSL methods
Alg. 1 starts with the creation of an empty grid \(\mathbf{u}\), which we assume to be a flattened 2D grid for our applications. The next step consists of computing a _relation_ \(\mathcal{F}\) between each pair of microphones \((i,j)\) available, using their signals \((\mathbf{x}_{i},\mathbf{x}_{j})\) as well as the _metadata_ available \(\mathbf{\phi}\), consisting of the microphone coordinates, the room dimensions and the speed of sound.
The relations between all pairs are aggregated through summation (or multiplication, see [13]) to generate a heatmap gathering all pairwise information. Depending on whether the problem is formulated using a Least-Squares (LS) or Maximum Likelihood (ML) approach, the minimum or maximum value of the grid will respectively correspond to the location of the source [11]. \(\mathcal{G}\) is therefore a peak-picking function, whose goal is to select the grid cell where the source is located.
The TDOA, SLF and energy-based methods differ mainly by the function \(\mathcal{F}\) computed. Each cell within the grid represents a candidate source location which has a theoretical TDOA between the two microphones. In the TDOA method, each grid cell is assigned the distance between its theoretical TDOA and the measured TDOA, computed by picking the peak of the generalized cross-correlation function between the microphones' signals, typically computed using the Generalized Cross-Correlation with Phase Transform (GCC-PHAT) [15].
In the SLF method, each cell receives the cross-correlation value at the lag corresponding to its TDOA. SLF is shown to be equivalent to Steered Response Power (SRP) [16]. Finally, the energy-based method uses a metric based on the ratio of the two microphone signals' energies. In Fig. 1(a), the edges of the graph represent maps computed using the SLF method.
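For concreteness, the following sketch implements the two classical per-pair relations discussed above: GCC-PHAT cross-correlation and its projection onto a spatial grid (SLF). The function names and the regularization constant are our own illustrative choices.

```python
import numpy as np

def gcc_phat(x1, x2):
    """Generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    return np.fft.fftshift(cc), np.arange(-(n // 2), n - n // 2)  # values, lags

def slf_map(cc, lags, p1, p2, grid, fs, c=343.0):
    """Spatial likelihood: give each grid cell the cross-correlation value
    at the lag matching its theoretical TDOA for the microphone pair."""
    tdoa = (np.linalg.norm(grid - p1, axis=-1)
            - np.linalg.norm(grid - p2, axis=-1)) / c        # seconds
    idx = np.clip(np.round(tdoa * fs).astype(int) - lags[0], 0, len(cc) - 1)
    return cc[idx]
```

Summing `slf_map` over all microphone pairs yields the heatmap of Alg. 1, whose peak is the classical source estimate.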
### Neural network methods for DMA signal processing
Classical SSL methods normally do not account for room reverberation, which may divert the heatmap's peak from the true source location, or reduce its sharpness. Neural networks can become robust to reverberation if trained on suitable scenarios. Here we review works on neural networks for DMAs.
In [17], an attention-based neural network capable of handling connection failures is proposed for the task of speech enhancement. Unlike our method, this network is limited to a maximum number of input microphone channels. In [18] and [19], variable-input processing is achieved through a global average pooling scheme.
Two works have explored GNNs for acoustic signal processing. In [20], a GNN is used to profile noise within a railway setting. However, their work requires the source signal to be known beforehand, limiting its application in many scenarios. This restriction is not present in our proposed approach. In [2], a Graph Convolutional Network (GCN) [21] is used in conjunction with an encoder-decoder network for the task of speech enhancement. Conversely, we do not use an encoder-decoder and explore the Relation Network GNN, which we show to be well suited for the task of SSL.
### Relation Networks
We choose the Relation network (RelNet) [10] as our graph network architecture due to its conceptual similarities to classical SSL methods. RelNets were introduced in the context of visual question answering. The input of the network consists of a set of _nodes_, represented by feature vectors \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{M}\}\). The network \(\mathcal{RN}\) may be summarized as
\[\hat{\mathbf{y}}=\mathcal{RN}(\mathbf{X})=\mathcal{G}\bigg{(}\sum_{i\neq j}\mathcal{F }(\mathbf{x}_{i},\mathbf{x}_{j})\bigg{)}, \tag{3}\]
where, in (3), \(\mathcal{F}\) generates a _relation_ between nodes \((i,j)\). These relations are summed together, and this sum is the input to \(\mathcal{G}\), which produces the answer \(\hat{\mathbf{y}}\) to the target question. The nodes \(\mathbf{x}_{i}\) and the relations \(\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j})\) can be seen as a complete undirected graph \(\mathbf{G}=(\{\mathbf{x}_{i}\},\{\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j})\})\). As in [10], we implement both \(\mathcal{F}\) and \(\mathcal{G}\) as Multi-layer Perceptrons (MLPs), trained jointly using backpropagation.
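A minimal PyTorch sketch of Eq. (3) is given below; since the graph is undirected, relations are computed over unordered pairs, as in Alg. 1, and the layer sizes follow the 3-layer, 625-unit MLPs described later (batching is omitted for brevity).

```python
import itertools
import torch
import torch.nn as nn

class RelNet(nn.Module):
    """Relation Network of Eq. (3): sum pairwise relations F(x_i, x_j),
    then map the aggregated vector through a readout MLP G."""
    def __init__(self, node_dim, hidden=625, out_dim=625):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))
        self.g = nn.Sequential(nn.Linear(out_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, nodes):  # nodes: (M, node_dim); M may vary at test time
        pairs = [torch.cat([nodes[i], nodes[j]])
                 for i, j in itertools.combinations(range(len(nodes)), 2)]
        u = self.f(torch.stack(pairs)).sum(dim=0)  # aggregate pairwise relations
        return self.g(u)
```

Because the sum over pairs is permutation-invariant and defined for any \(M\), the same trained weights apply to an unseen number of microphones.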
## 4 Method
A diagram of our proposed network is shown in Fig. 1. Using a RelNet allows our approach to first process pairs of microphone signals into features, and later combine them through summation. This allows it to function on a variable number of input microphones. Furthermore, our method can operate on unknown room dimensions and microphone coordinates by combining this metadata \(\mathbf{\phi}\) before estimating the source location.
The input to our method consists of the set of \(M\) microphone signal frames \(\{\mathbf{x}_{m}\}\), where \(\mathbf{x}_{m}\) is a vector of size \(L\) representing a frame of recordings, and a metadata vector \(\mathbf{\phi}\) containing relevant information such as the microphone coordinates and room dimensions. We define the relation function \(\mathcal{F}\) as
\[\mathcal{F}(\mathbf{x}_{i},\mathbf{x}_{j};\mathbf{\phi})=\text{MLP}(\mathcal{H}(\mathbf{x}_{i },\mathbf{x}_{j};\mathbf{\phi})), \tag{4}\]
where MLP is a multi-layer perceptron and \(\mathcal{H}\) is a preprocessing or feature extraction function. The inclusion of a preprocessing function allows us to use classical features such as GCC-PHAT or SLF. Conversely, post-processing these functions using an MLP allows us to improve these features by introducing learned rules, as we will show for the application of SSL.
In turn, the relation fusion function is chosen as \(\mathcal{G}(\mathbf{u})=\text{MLP}(\mathbf{u})\), where \(\mathbf{u}\) represents the sum of all pairs of relations as in Alg. 1. This function is a substitution of the peak-picking algorithm in Alg. 1, expanding its functionality for other possible applications.
As in [10], we train the weights \(\mathbf{w}_{\mathcal{F}}\) and \(\mathbf{w}_{\mathcal{G}}\) of the MLPs in \(\mathcal{F}\) and \(\mathcal{G}\) jointly through a gradient-based procedure, by minimizing an application-specific loss function \(\mathcal{L}(y,\hat{y})\) between the network output \(\hat{y}\) and target \(y\):
\[\begin{split}\mathbf{w}_{\mathcal{F}}&=\mathbf{w}_{ \mathcal{F}}-\lambda_{\mathcal{F}}\frac{\partial\mathcal{L}(\mathbf{y},\hat{\mathbf{y} })}{\partial\mathbf{w}_{\mathcal{F}}}\\ \mathbf{w}_{\mathcal{G}}&=\mathbf{w}_{\mathcal{G}}-\lambda_{ \mathcal{G}}\frac{\partial\mathcal{L}(\mathbf{y},\hat{\mathbf{y}})}{\partial\mathbf{w}_{ \mathcal{G}}},\end{split} \tag{5}\]
where \((\lambda_{\mathcal{F}},\lambda_{\mathcal{G}})\) are the learning rates, usually defined by the optimizer used, such as Adam [22].
We experiment with two preprocessing functions \(\mathcal{H}\) for our relation function \(\mathcal{F}\). The first is the cross-correlation between the two microphones, computed using the GCC-PHAT method. In this case, the network needs to learn to map time lags into space. As an alternative, we project the cross-correlation into space using the SLF method. The output of this method is a flattened \(N\times N\) grid, i.e., an \(N^{2}\) vector. In this case, the network needs to learn to denoise the maps, which may have been corrupted by reverberation.
The final part of the feature extraction step is concatenating the coordinates of the microphone pair as well as the room dimensions into the features. This is especially important for the GCC-PHAT feature extractor, as the network must learn how to project the temporal information into space.
The target of the MLP of function \(\mathcal{G}\) is to further enhance the summed maps produced by \(\mathcal{F}\). Its output has the same size as \(\mathcal{F}\), representing a flattened \(N\times N\) grid of cells centered at coordinates \(\{\mathbf{p}_{u,v}\}\) within the room. The target value \(y(u,v)\) of each grid cell \((u,v)\) is computed as
\[y(u,v)=e^{-\|\mathbf{p}_{u,v}-\mathbf{p}_{s}\|_{2}}, \tag{6}\]
where \(\mathbf{p}_{s}\) is the target source location. Note the maximum value of 1 occurs when \(\mathbf{p}_{u,v}=\mathbf{p}_{s}\) and approaches 0 exponentially as the distance between \(\mathbf{p}_{u,v}\) and \(\mathbf{p}_{s}\) increases. We use the mean absolute error between the network output and target as our loss function. This formulation allows for detection of multiple sources, which can be extracted through peak-picking. However, in this work, we focus on the detection of a single source.
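A sketch of the training target of Eq. (6) on the flattened grid is shown below; the cell-center layout is an illustrative assumption.

```python
import numpy as np

def target_heatmap(source, room_dim, n=25):
    """Training target of Eq. (6): exp(-distance) from each of the n x n
    grid cell centers to the source location, flattened to an n^2 vector."""
    gx = (np.arange(n) + 0.5) * room_dim[0] / n   # cell-center x coordinates
    gy = (np.arange(n) + 0.5) * room_dim[1] / n   # cell-center y coordinates
    px, py = np.meshgrid(gx, gy, indexing="ij")
    dist = np.hypot(px - source[0], py - source[1])
    return np.exp(-dist).reshape(-1)
```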
## 5 Experimentation
This section describes our experiments with our proposed network for SSL described in the previous section. We refer to our proposed methods as GNN-GCC for the network using the GCC-PHAT feature
extractor and GNN-SLF for the one using the SLF extractor. We compare our approach with two baselines, the classical Time-Difference-of-Arrival (TDOA)-based and Spatial Likelihood Function (SLF)-based approaches, as described in Sec. 3. We provide a public repository containing all methods on Github 1
Footnote 1: [https://github.com/egrinstein/gnn_ssl](https://github.com/egrinstein/gnn_ssl)
### Dataset
We test our approach using synthetically generated data using the image source method [23], generated using the Pyroomacoustics library [24]. To demonstrate that our approach is able to operate with a different number of microphones than it was trained on, the training set for our GNN uses training examples containing \(\{5,7\}\) microphones, while the test set examples contain \(\{4,5,6,7\}\) microphones.
For each dataset sample, we randomly select two numbers from a uniform distribution in the interval [3, 6] m representing the room's width and length. The room's height is uniformly selected from the interval [2, 4] m. The room's reverberation time is sampled uniformly from the interval [0.3, 0.6] s, with the corresponding wall absorption computed using Eyring's formula [25]. We place the microphones and source randomly within the room, with the restriction that each device is at least 0.5 m away from every other device and from the room's walls. Each source is set to play a speech sample from the VCTK corpus [26]. The Signal-to-Noise Ratio (SNR) in each microphone is set at 30 dB, simulated by adding White Gaussian Noise (WGN) independently to each channel of the auralizations generated using the image source method. The training, validation and test datasets contain respectively 15,000, 5,000 and 10,000 examples.
### Method hyperparameters
We train the networks for a maximum of 100 epochs, with early stopping if the validation loss stops decreasing for 3 epochs. We employ a learning rate of 0.0005 using the Adam optimizer [22]. We use a batch size of 32. These parameters were chosen empirically. All grids used are of dimensions \(25\times 25\). Our input frame size is L=500 ms. For the GCC-PHAT method, we use a Discrete Fourier Transform (DFT) of \(1,024\) samples. Since the maximum TDOA value is bounded by the room's diagonal, we only select the central 200 correlation bins, similar to [27]. In our proposed method, our relation function's MLP contains 3 layers, each of output size 625. The function \(\mathcal{G}\)'s MLP consists of 3 layers, all with an output size of 625 neurons. We use a ReLU activation function for all layers except for the output, which uses no activation.
The grids computed in the SLF and TDOA baselines as well as the feature extractor in the GNN-SLF method have a size of \(25\times 25\). The source estimation procedure in the baselines and proposed methods consists of picking the location of the highest value for the SLF-based methods, and of the lowest one for the TDOA-based method.
## 6 Results
The metric used to evaluate the methods consists of the mean Euclidean distance between the estimated and true source location on the test set. The results are shown in Fig. 2. Note that although we test all methods on unseen simulations containing \(\{4,5,6,7\}\) microphones, our method was only trained using examples containing \(\{5,7\}\) microphones. To ensure a fair comparison, the networks were trained multiple times. The black bars show their standard deviation.
We can see that the GNN-SLF method outperforms all others, demonstrating the effectiveness of the approach. The biggest relative improvement of 29% with respect to classical SLF is observed for four microphones. An explanation is that when fewer measurements are available, improving or discarding them becomes crucial, which may be the operation being performed by the network. We also see that GNN-GCC performed poorly, only surpassing the TDOA baseline. This indicates that requiring the network to learn to map time delays to spatial positions is a more demanding task than dealing with the already spatialized information.
## 7 Conclusion and Future Work
We applied the RelNet, a type of GNN, to the task of SSL on distributed microphone arrays. Our results show the RelNet is able to significantly improve the localization performance over classical localization algorithms, achieving a 29% improvement in the case of 4 microphones. We also show the method generalizing to an unseen number of microphones. Future directions include testing the approach for localizing multiple sources and learning graph topologies other than the complete graph.
Figure 2: Localization error for our proposed methods and baselines. |
2307.12333 | An axiomatized PDE model of deep neural networks | Inspired by the relation between deep neural network (DNN) and partial
differential equations (PDEs), we study the general form of the PDE models of
deep neural networks. To achieve this goal, we formulate DNN as an evolution
operator from a simple base model. Based on several reasonable assumptions, we
prove that the evolution operator is actually determined by
convection-diffusion equation. This convection-diffusion equation model gives
mathematical explanation for several effective networks. Moreover, we show that
the convection-diffusion model improves the robustness and reduces the
Rademacher complexity. Based on the convection-diffusion equation, we design a
new training method for ResNets. Experiments validate the performance of the
proposed method. | Tangjun Wang, Wenqi Tao, Chenglong Bao, Zuoqiang Shi | 2023-07-23T14:00:33Z | http://arxiv.org/abs/2307.12333v2 | # An Axiomatized PDE Model of Deep Neural Networks +
###### Abstract
Inspired by the relation between deep neural network (DNN) and partial differential equations (PDEs), we study the general form of the PDE models of deep neural networks. To achieve this goal, we formulate DNN as an evolution operator from a simple base model. Based on several reasonable assumptions, we prove that the evolution operator is actually determined by convection-diffusion equation. This convection-diffusion equation model gives mathematical explanation for several effective networks. Moreover, we show that the convection-diffusion model improves the robustness and reduces the Rademacher complexity. Based on the convection-diffusion equation, we design a new training method for ResNets. Experiments validate the performance of the proposed method.
**Keywords:** residual network, axiomatization, convection-diffusion equation

**MSC codes:** 35K57, 93B35
## 1 Introduction
Deep neural networks (DNN) have achieved success in tasks such as image classification [33], speech recognition [7], video analysis [3], and action recognition [39]. Among these networks, residual networks (ResNets) are important architectures, making it practical to train ultra-deep DNNs and helping to avoid vanishing gradients [12, 13]. Also, the idea of ResNets has motivated the development of many other DNNs including WideResNet [43], ResNeXt [41], and DenseNet [15].
In recent years, understanding the ResNets from the dynamical perspective has become a promising approach [8, 11]. More specifically, assume \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) as the input of ResNet [12] and define \(\mathcal{F}\) to be a mapping, then the \(l\)-th residual block can be realized by
\[\mathbf{x}_{l+1}=\mathbf{x}_{l}+\mathcal{F}(\mathbf{x}_{l},\mathbf{w}_{l}) \tag{1}\]
where \(\mathbf{x}_{l}\) and \(\mathbf{x}_{l+1}\) are the input and output tensors of the residual mapping, and \(\mathbf{w}_{l}\) are parameters of \(l\)-th layer that are learned by minimizing the training error. Define \(\mathbf{x}_{L}\) as the output of a ResNet with \(L\) layers, then the classification score is determined by \(\mathbf{y}=\text{softmax}(\mathbf{w}_{\text{fc}}\mathbf{x}_{L})\), where \(\mathbf{w}_{\text{fc}}\) are also learnable parameters of the final linear layer.
For any \(T>0\), introducing a temporal partition \(\Delta t=T/L\), the ResNet represented by (1) is the explicit Euler discretization with time step \(\Delta t\) of the following differential equation:
\[\frac{\mathrm{d}\mathbf{x}(t)}{\mathrm{d}t}=v(\mathbf{x}(t),t),\quad\mathbf{x}(0)=\mathbf{x}_ {0}\quad t\in[0,T], \tag{2}\]
where \(v(\mathbf{x}(t),t)\) is a velocity field such that \(\Delta tv(\mathbf{x}(t),t)=F(\mathbf{x}(t),\mathbf{w}(t))\). The above ordinary differential equation (ODE) interpretation of ResNet provides a new perspective and has inspired many networks. As shown in [25], applying different stable numerical schemes to (2) leads to PolyNet [45] and FractalNet [21]. Besides,
the other direction is to consider the continuous form of (1.2), representing the velocity \(v(\mathbf{x},t)\) by a deep neural network. One typical method is the Neural ODE [4] in which \(v\) is updated by the adjoint state method. Similar extensions along this direction include neural stochastic differential equation (SDE) [17] and neural jump SDE [18]. The theoretical property of these continuous models has been analyzed in [37, 2, 44].
The connection between ODE and partial differential equation (PDE) through the well-known characteristics method has motivated the analysis of ResNet from PDE perspective, including theoretical analysis [34], novel training algorithms [36] and improvement of adversarial robustness [38] for DNNs. To be specific, from the PDE theory, the ODE (1.2) is the characteristic curve of the transport equation:
\[\frac{\partial u}{\partial t}(\mathbf{x},t)=-v(\mathbf{x},t)\cdot\nabla u(\mathbf{x},t),\ (\mathbf{x},t)\in\mathbb{R}^{d}\times[0,T]. \tag{1.3}\]
The method of characteristics tells us that, along the curve \((\mathbf{x},t)\) defined by (1.2), the function value \(u(\mathbf{x},t)\) remains unchanged. Assume at \(t=T\), \(u(\mathbf{x},T)=f(\mathbf{x})\coloneqq\mathrm{softmax}(\mathbf{w}_{\mathrm{fc}}\mathbf{x})\) is the linear classifier, then
\[u(\mathbf{x}(0),0)=u(\mathbf{x}(T),T)=f(\mathbf{x}(T))=f\circ k(\mathbf{x}(0))\]
where \(k\) represents the mapping from \(\mathbf{x}(0)\) to \(\mathbf{x}(T)\), which is the continuous form of feature extraction in ResNet. Thus at \(t=0\), \(u(\cdot,0)\) is the composition of a feature extractor and a classifier, which is analogous to ResNet. Nonetheless, since the transport equation (1.3) is reversible in time, and initial value problems are more common than terminal value problems in PDE theory, we assume \(u(\mathbf{x},0)=f(\mathbf{x})\) in our paper. Consequently, the direction of solving the ODE (1.2) needs to be reversed, but its connection to ResNet remains consistent. In a word, the transport equation (1.3) can describe the evolution from a linear classifier to a ResNet.
Figure 1: \(\mathcal{T}_{T}\) is the map between the two colored boxes. The left represents the base linear classifier. The right represents feature extractor + linear classifier, which together form a typical neural network.

Suppose we fix the initial condition \(u(\cdot,0)\) as the linear classifier \(f\). A DNN can then be seen as a map between two functions, \(u(\cdot,0)\) and \(u(\cdot,T)\). In ResNets, this map is formulated as a convection equation. A natural question is: _Is the convection equation the only PDE that formulates this map? If not, can we derive a general form of PDE to formulate the map?_ In this paper, we try to answer the above questions from a mathematical point of view. First, we construct a continuous flow \(\mathcal{T}_{t}\) which maps a simple linear classifier to a more complicated function, as illustrated in Figure 1,
\[\mathcal{T}_{t}:f\mapsto u(\cdot,t),\quad t\in[0,T], \tag{4}\]
The idea in this paper is also classical in mathematics. First, we extract some _basic properties_ that \(\mathcal{T}_{t}\) should satisfy. Then, based on these basic properties, a general form of \(\mathcal{T}_{t}\) can be derived rigorously. More specifically, inspired by scale space theory, we prove that under several reasonable assumptions on \(\mathcal{T}_{t}\), \(u(\mathbf{x},t)=\mathcal{T}_{t}f(\mathbf{x})\) is the solution of a second order convection-diffusion equation. This theoretical result provides a unified framework which covers the transport equation and some existing works including Gaussian noise injection [38, 24], dropout techniques [36, 35] and randomized smoothing [5, 23, 31]. It also illuminates new thinking for designing networks. In summary, we list the main contributions as follows.
* We establish several basic assumptions on operator \(\mathcal{T}_{t}\), and prove the sufficiency of these assumptions for generalizing ResNet and beyond. To the best of our knowledge, this is the first theoretical attempt for establishing a sufficient condition for designing the variants of ResNet from the PDE perspective, which may provide some insights when considering the search space in neural architecture search (NAS).
* Inspired by our theoretical analysis, we propose an isotropic model by adding isotropic diffusion to (3). Compared to the linear classifier \(f\), we prove that the proposed model has lower Rademacher complexity and larger region with certified robustness. Moreover, we design a training method by applying the operator splitting scheme for solving the proposed convection-diffusion equation.
_Notations._ We denote scalars, vectors, and matrices by lowercase and uppercase letters, where vectors and matrices are bolded. We denote the \(\ell_{2}\) and \(\ell_{\infty}\) norms of the vector \(\mathbf{x}=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\) by \(\|\mathbf{x}\|_{2}=(\sum_{i=1}^{d}|x_{i}|^{2})^{1/2}\) and \(\|\mathbf{x}\|_{\infty}=\max_{i=1}^{d}|x_{i}|\), respectively. We denote the gradient and Laplace operators by \(\nabla=(\frac{\partial}{\partial x_{1}},\cdots,\frac{\partial}{\partial x_{d}})\) and \(\Delta=\sum_{i=1}^{d}\frac{\partial^{2}}{\partial x_{i}^{2}}\), respectively. For a function \(f:\mathbb{R}^{d}\to\mathbb{R}\), \(D^{\alpha}f\) denotes its \(\alpha\)-order derivative, \(\|f(\mathbf{x})\|_{L^{\infty}}=\sup_{\mathbf{x}\in\mathbb{R}^{d}}|f(\mathbf{x})|\) its \(L^{\infty}\) norm. \(\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})\) denotes Gaussian noise with mean \(\mathbf{0}\) and variance \(\sigma^{2}\). \(C_{b}^{\infty}\) is the space of bounded functions which have bounded derivatives at any order.
## 2 General PDE model
In this section, we show that, under several reasonable assumptions, the sequence of operator images \(u(\mathbf{x},t)=\mathcal{T}_{t}f(\mathbf{x})\) is the solution of a convection-diffusion equation. Then we show that our convection-diffusion equation model naturally covers various existing effective models.
### The characterization of \(\mathcal{T}_{t}\)
Throughout this section, we assume \(\mathcal{T}_{t}\) is well defined on \(C_{b}^{\infty}\), and \(\mathcal{T}_{t}f\) is a bounded continuous function. The assumption is reasonable, since typical classifiers \(f\) like a linear classifier is indeed bounded (between 0 and 1) and has bounded derivatives. The operator image \(\mathcal{T}_{t}f\), which we hope to be a neural network, is obviously bounded and continuous. To get the expression of the evolution operator \(\mathcal{T}_{t}\), we assume it has some fundamental properties, which fall into two categories: deep neural network type and partial differential equation type.
#### 2.1.1 DNN-type assumptions
Suppose we are given two classifiers \(f\) and \(g\) such that \(f(\mathbf{x})\geq g(\mathbf{x})\) for every data point \(\mathbf{x}\in\mathbb{R}^{d}\). Then \(f\circ k(\mathbf{x})\geq g\circ k(\mathbf{x})\) if we replace the data points with extracted features. Recall that for ResNet, \(f\circ k=\mathcal{T}_{T}(f)\), which implies \(\mathcal{T}_{T}\left(f\right)\geq\mathcal{T}_{T}\left(g\right)\). Since the order-preserving property holds both at the initial time \(t=0\) and the final time \(t=T\), it is reasonable to make the following assumption. **[Comparison Principle]** For all \(t\geq 0\) and \(f,g\in C_{b}^{\infty}\), if \(f\leq g\), then \(\mathcal{T}_{t}(f)\leq\mathcal{T}_{t}(g)\).
The prediction of a deep neural network is computed using forward propagation, i.e. the network uses the output of the former layer as input to the current layer. Thus, for a DNN model, it is natural that the output of a DNN can be deduced from the output of the intermediate \(l\)-th layer without any information depending upon the original data point \(\mathbf{x}\) and the output of the \(m\)-th layer (\(m<l\)). Regarding the evolution of the operator \(\mathcal{T}_{t}\) as stacking layers in the neural network, we should require that \(\mathcal{T}_{t+s}\) can be computed from \(\mathcal{T}_{t}\) for any \(s\geq 0\), and \(\mathcal{T}_{0}\) is of course the identity, which implies **[Markov Property]** \(\mathcal{T}_{t+s}=\mathcal{T}_{t}\circ\mathcal{T}_{t+s,t}\), for all \(s,t\geq 0\) and \(t+s\leq T\). \(\mathcal{T}_{t+s,t}\) denotes the flow from time \(t\) to time \(t+s\).
Linearity is also an intrinsic property of deep neural networks. Notice that we are not referring to output-vs-input linearity of a single DNN, which obviously fails because of the activation function. Rather, we are stating that two different DNNs with the same feature extractor can be merged into a new DNN whose new classifier is composed with the shared extractor, i.e.
\[(\beta_{1}f+\beta_{2}g)\circ k=\beta_{1}f\circ k+\beta_{2}g\circ k\]
This is linearity at \(t=T\), and for \(t=0\) it is trivial. Thus we assume,
**[Linearity]** For any \(f,g\in C_{b}^{\infty}\), and real constants \(\beta_{1},\beta_{2}\), we have
\[\mathcal{T}_{t}(\beta_{1}f+\beta_{2}g)=\beta_{1}\mathcal{T}_{t}(f)+\beta_{2} \mathcal{T}_{t}(g)\]
Moreover, if \(C\) is a constant function, then \(\mathcal{T}_{t}(C)=C\).
#### 2.1.2 PDE-type assumptions
First of all, we need an assumption to ensure the existence of a differential equation. If two classifiers \(f\) and \(g\) have the same derivatives of any order at some point, then we should assume the same evolution at this point when \(t\) is small. If we informally define \(\partial\mathcal{T}_{t}(f)/\partial t=\left(\mathcal{T}_{t}(f)-f\right)/t\) as \(t\to 0^{+}\) (made rigorous via the infinitesimal generator in our proof), then \(\partial\mathcal{T}_{t}(f)/\partial t\) should equal \(\partial\mathcal{T}_{t}(g)/\partial t\). Thus, we give the following assumption concerning the local character of the operator \(\mathcal{T}_{t}\) for small \(t\).
**[Locality]** For all fixed \(\mathbf{x}\), if \(f,g\in C_{b}^{\infty}\) satisfy \(D^{\alpha}f(\mathbf{x})=D^{\alpha}g(\mathbf{x})\) for all \(|\alpha|\geq 0\), then
\[\lim_{t\to 0^{+}}\frac{(\mathcal{T}_{t}(f)-\mathcal{T}_{t}(g))(\mathbf{x})}{t}=0\]
Regularity is an essential component in PDE theory. Thus, when considering PDE-type assumptions on \(\mathcal{T}_{t}\), it is necessary to study its regularity. We separate the regularity requirements into spatial and temporal. First, spatial regularity means that if we add a perturbation \(\mathbf{h}\) to data point \(\mathbf{x}\), the output \(\mathcal{T}_{t}(f)(\mathbf{x}+\mathbf{h})\) will not be much different from adding the same perturbation to the output \(\mathcal{T}_{t}(f)(\mathbf{x})\). One can relate it to the well-known translation invariance in image processing, but our assumption is weaker, as we allow small difference rather than require strict equivalence,
**[Spatial Regularity]** There exists a positive constant \(C\) depending on \(f\) such that
\[\|\mathcal{T}_{t}(\tau_{\mathbf{h}}f)-\tau_{\mathbf{h}}(\mathcal{T}_{t}f)\|_{L^{ \infty}}\leq Cht\]
for all \(f\in C_{b}^{\infty},\mathbf{h}\in\mathbb{R}^{d},t\geq 0\), where \((\tau_{\mathbf{h}}f)(\mathbf{x})=f(\mathbf{x}+\mathbf{h})\) and \(\|\mathbf{h}\|_{2}=h\).
_Remark 2.1_.: Spatial regularity is also beneficial for adversarial robustness. DNN have been shown to be vulnerable to some well-designed input samples (adversarial examples) [10, 20]. These adversarial examples are produced by adding carefully hand-crafted perturbations to the inputs of the targeted model. Although these perturbations are imperceptible to human eyes, they can fool DNN to make wrong prediction. In some sense, the existence of these adversarial examples is due to spatial unstability of DNN. So in our method, we hope the new model \(\mathcal{T}_{t}(f)\) to be spatially stable.
Secondly, temporal stability requires that in any small time interval, the evolution process is not rapid. We want an operator \(\mathcal{T}_{t}\) that is smooth in time. Our assumption is as follows.
**[Temporal Regularity]** For all \(t,s,t+s\in[0,T]\) and all \(f\in C_{b}^{\infty}\), there exists a constant \(C\geq 0\) depending on \(f\) such that
\[\|\mathcal{T}_{t+s,s}(f)-f\|_{L^{\infty}} \leq Ct\] \[\|\mathcal{T}_{t+s,s}(f)-\mathcal{T}_{t}(f)\|_{L^{\infty}} \leq Cst\]
Finally, combining all the assumptions on \(\mathcal{T}_{t}\), we can derive the following theorem, which states that the output \(u(\mathbf{x},t)=\mathcal{T}_{t}(f)(\mathbf{x})\) of the evolving neural network satisfies a convection-diffusion equation.
**Theorem 2.2**.: _Under the above assumptions, there exists Lipschitz continuous function \(v:\mathbb{R}^{d}\times[0,T]\to\mathbb{R}^{d}\) and Lipschitz continuous positive function \(\sigma:\mathbb{R}^{d}\times[0,T]\to\mathbb{R}^{d\times d}\) such that for any bounded and uniformly continuous base classifier \(f(\mathbf{x})\), \(u(\mathbf{x},t)=\mathcal{T}_{t}(f)(\mathbf{x})\) is the unique solution of the following convection-diffusion equation:_
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=v(\mathbf{x},t)\cdot\nabla u (\mathbf{x},t)+\sum_{i,j}\sigma_{i,j}\frac{\partial^{2}u}{\partial x_{i}\partial x _{j}}(\mathbf{x},t),\\ u(\mathbf{x},0)=f(\mathbf{x}),\end{cases} \tag{1}\]
_where \(\mathbf{x}\in\mathbb{R}^{d},t\in[0,T]\). Here \(\sigma_{i,j}\) is the \(i,j\)-th element of matrix function \(\sigma(\mathbf{x},t)\)._
_Remark 2.3_.: The right hands of the differential equation in (1) consist of two terms, the first order term \(v(\mathbf{x},t)\cdot\nabla u(\mathbf{x},t)\) called convection term and the second order term \(\sum_{i,j}\sigma_{i,j}\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}(\mathbf{ x},t)\) called diffusion term.
_Remark 2.4_.: When \(\sigma(\mathbf{x},t)=\sigma^{2}\mathbf{I}\), we call these type equations isotropic equations and the corresponding models isotropic models. When \(\sigma(\mathbf{x},t)\) is a diagonal matrix and \(\sigma(\mathbf{x},t)\neq\sigma^{2}\mathbf{I}\), we call these type equations anisotropic equations that lead to anisotropic models.
We will provide the proof of Theorem 2.2 in Appendix A. In this subsection, we have introduced a convection-diffusion equation framework for ResNets. The framework is quite general, as many existing models with residual connections can be interpreted as special cases in our framework.
### Examples of the Convection-Diffusion Model
Under the convection-diffusion framework, we can interpret several regularization mechanisms, including Gaussian noise injection [38, 24], ResNet with stochastic dropout of the hidden state of residual blocks [35, 36], and randomized smoothing [5, 23, 31]. They can all be seen as convection-diffusion models with different diffusion terms; the corresponding diffusion terms of these models are listed in Table 1.
**Gaussian noise injection:** Gaussian noise injection is an effective regularization mechanism for a DNN model. For a vanilla ResNet with \(L\) residual mappings, the \(n\)-th residual mapping with Gaussian noise injected can be written as
\[\mathbf{x}_{n+1}=\mathbf{x}_{n}+\mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})+a\mathbf{\varepsilon}_{n },\ \mathbf{\varepsilon}_{n}\sim\mathcal{N}(0,\mathbf{I})\]
where the parameter \(a\) is a noise coefficient. We introduce a temporal partition \(t_{n}=nT/L\), for \(n=0,1,..,L\), with time interval \(\Delta t=T/L\), and let \(\mathbf{x}(t_{n})=\mathbf{x}_{n}\) and \(\mathbf{w}(t_{n})=\mathbf{w}_{n}\). Setting \(a=\sigma\sqrt{\Delta t}\) and \(\mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})/\Delta t=v(\mathbf{x},t)\), this noise injection technique in a discrete neural network can be viewed as an approximation of the continuous dynamics
\[d\mathbf{x}(t)=v(\mathbf{x}_{n},t)dt+\sigma d\mathbf{B}(t) \tag{2.2}\]
where \(\mathbf{B}(t)\) is a multidimensional Brownian motion. The output of the \(L\)-th residual mapping can be written as the value of the Itô process (2.2) at terminal time \(T\), \(\mathbf{x}(T)\). So, an ensemble prediction over all the possible sub-networks with shared parameters can be written as
\[\hat{y}=\mathbb{E}(\text{softmax}(\mathbf{w}_{\text{fc}}\mathbf{x}(T))|\mathbf{x}(0)=\mathbf{x }_{0}). \tag{2.3}\]
According to the Feynman-Kac formula [26], Equation (2.3) is known to solve the following convection-diffusion equation
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=v(\mathbf{x},t)\cdot\nabla u +\sigma^{2}\Delta u,\quad\mathbf{x}\in\mathbb{R}^{d},t\in[0,T]\\ u(\mathbf{x},0)=\text{softmax}(\mathbf{w}_{\text{fc}}\mathbf{x}).\end{cases}\]
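A minimal Monte-Carlo sketch of the ensemble prediction (2.3) is shown below, simulating the Itô process (2.2) with the Euler-Maruyama scheme; `blocks` (one module per residual mapping, each realizing \(v\,\Delta t\)) and \(T=1\) are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ensemble_prediction(blocks, w_fc, x0, sigma, n_samples=100):
    """Monte-Carlo estimate of Eq. (2.3): average the softmax output over
    Euler-Maruyama sample paths of the SDE (2.2). Each residual block
    contributes v*dt; Gaussian noise of std sigma*sqrt(dt) is injected."""
    dt = 1.0 / len(blocks)                     # assumes T = 1 (illustrative)
    preds = []
    for _ in range(n_samples):
        x = x0.clone()
        for block in blocks:                   # one residual mapping per step
            x = x + block(x) + sigma * dt**0.5 * torch.randn_like(x)
        preds.append(torch.softmax(x @ w_fc.T, dim=-1))
    return torch.stack(preds).mean(dim=0)      # u(x0, T) by Feynman-Kac
```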
**Dropout of Hidden Units:** Consider the case where we disable each hidden unit independently according to a Bernoulli distribution \(\mathcal{B}(1,p)\) with \(p\in(0,1)\) in each residual mapping
\[\mathbf{x}_{n+1} =\mathbf{x}_{n}+\mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})\odot\frac{\mathbf{z}_ {n}}{p}\] \[=\mathbf{x}_{n}+\mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})+\mathcal{F}(\mathbf{x} _{n},\mathbf{w}_{n})\odot(\frac{\mathbf{z}_{n}}{p}-\mathbf{I})\]
where \(\mathbf{z}_{n}\sim\mathcal{B}(1,p)\), namely \(\mathbb{P}(\mathbf{z}_{n}=0)=1-p\), \(\mathbb{P}(\mathbf{z}_{n}=1)=p\), and \(\odot\) indicates the Hadamard product. If the ensemble is large enough, then according to the Central Limit Theorem, we have
\[\mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})\odot(\frac{\mathbf{z}_{n}}{p}-\mathbf{I})\approx \mathcal{F}(\mathbf{x}_{n},\mathbf{w}_{n})\odot\mathcal{N}(0,\frac{1-p}{p})\]
| Models | Diffusion terms |
| --- | --- |
| ResNet | \(0\) |
| Gaussian noise injection | \(\sigma^{2}\Delta u\) |
| Dropout of Hidden Units | \(\frac{1-p}{2p}\sum_{i}(v^{T}v)_{i,i}\frac{\partial^{2}u}{\partial x_{i}^{2}}\) |
| Randomized smoothing | \(\sigma^{2}\Delta u\) |

Table 1: Examples of networks under our proposed framework
In the same way as for Gaussian noise injection, the ensemble prediction \(\hat{y}\) can be viewed as the solution \(u(\mathbf{x},T)\) of the following equation:
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=v(\mathbf{x},t)\cdot\nabla u( \mathbf{x},t)+\frac{1-p}{2p}\sum_{i}(v^{T}v)_{i,i}\frac{\partial^{2}u}{\partial x_{ i}^{2}}(\mathbf{x},t),\quad\mathbf{x}\in\mathbb{R}^{d},t\in[0,T]\\ u(\mathbf{x},0)=\operatorname{softmax}(\mathbf{w}_{\text{fc}}\mathbf{x}),\end{cases}\]
_Remark 2.5_.: In fact, similar to dropout, shake-shake regularization [9, 14] and ResNet with stochastic depth [16] can also be interpreted by our convection-diffusion equation model.
**Randomized Smoothing:** Consider transforming a trained classifier into a new smoothed classifier by adding Gaussian noise to the input at inference time. Denote the trained classifier by \(f(\mathbf{x})\) and the new smoothed classifier by \(g(\mathbf{x})\). Then \(f(\mathbf{x})\) and \(g(\mathbf{x})\) have the following relation:
\[g(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}f(\mathbf{x}+\mathbf{\varepsilon}_{i})\approx \mathbb{E}_{\mathbf{e}\sim\mathcal{N}(0,\sigma^{2}I)}[f(\mathbf{x}+\mathbf{\varepsilon})]\]
where \(\mathbf{\varepsilon}_{i}\sim\mathcal{N}(0,\sigma^{2}I)\). According to the Feynman-Kac formula, \(g(\mathbf{x})\) can be viewed as the solution of the following PDE
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=\frac{1}{2}\sigma^{2} \Delta u,t\in[0,1]\\ u(\mathbf{x},0)=f(\mathbf{x}).\end{cases} \tag{4}\]
In particular, when \(f(\mathbf{x})\) is a ResNet, the smoothed classifier \(g(\mathbf{x})=u(\mathbf{x},T+1)\) can be expressed as
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=v(\mathbf{x},t)\cdot\nabla u (\mathbf{x},t),\mathbf{x}\in\mathbb{R}^{d},t\in[0,T]\\ \frac{\partial u(\mathbf{x},t)}{\partial t}=\frac{1}{2}\sigma^{2}\Delta u,t\in[T,T +1]\\ u(\mathbf{x},0)=\operatorname{softmax}(\mathbf{w}_{\text{fc}}\mathbf{x}).\end{cases}\]
The differential equation formulation of randomized smoothing is similar to our method presented in the next section. However, randomized smoothing is a postprocessing step which ensures certified robustness. It does not involve the training of the velocity field \(v\), which is parametrized by a neural network. Moreover, our method adds a regularization term at each time step \(t\), while randomized smoothing only adds Gaussian noise at the initial time step \(t=0\).
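As a minimal sketch, the Monte-Carlo form of the smoothed classifier above (the solution of Eq. (2.4) at \(t=1\)) can be written as follows; `f` is assumed to accept batched inputs.

```python
import torch

@torch.no_grad()
def smoothed_classifier(f, x, sigma, n_samples=100):
    """Randomized smoothing: g(x) ~ average of f(x + eps) over Gaussian
    noise eps ~ N(0, sigma^2 I), i.e. the solution of Eq. (2.4) at t = 1."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
    return f(noisy).mean(dim=0)
```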
Clearly, ResNet is a model with no diffusion, Gaussian noise injection and randomized smoothing are isotropic models, and dropout of hidden units yields an anisotropic model.
The viewpoint that the forward propagation of a ResNet is the solution process of a transport equation enables us to interpret poor robustness and generalizability as irregularity of the solution. Moreover, we can interpret the effectiveness of the above-mentioned models in improving generalizability as the action of the diffusion term. Next, we focus on the case of isotropic models, because anisotropic models are difficult to analyze theoretically and to experiment with in practice; we leave them as future work.
## 3 Theoretical Analysis
In this section, under our convection-diffusion model, we focus on the isotropic-type equation. Furthermore, for the sake of simplicity and practical computation, we consider the model with convection and diffusion split in time.
\[\begin{cases}\frac{\partial u(\mathbf{x},t)}{\partial t}=v(\mathbf{x},t)\cdot\nabla u(\mathbf{ x},t),&\mathbf{x}\in\mathbb{R}^{d},\quad t\in[0,T-1]\\ \frac{\partial u(\mathbf{x},t)}{\partial t}=\sigma^{2}\Delta u,&\mathbf{x}\in\mathbb{R }^{d},\quad t\in[T-1,T]\\ u(\mathbf{x},0)=f(\mathbf{x})\end{cases} \tag{1}\]
In the following subsections, we will theoretically illustrate its benefits for improving the robustness and reducing the Rademacher complexity of neural networks.
### Robustness guarantee
Assume the data point lies in a bounded domain \(\mathcal{D}\). Consider a multi-class classification problem from \(\mathcal{D}\subset\mathbb{R}^{d}\) to label class \(\mathcal{Y}=\{1,\cdots,k\}\). Let \(G\) be a prediction function defined by \(G(\mathbf{x})=\operatorname*{arg\,max}_{i\in\mathcal{Y}}u^{i}(\mathbf{x},T)\), where \(u^{i}(\mathbf{x},T)\) is the \(i\)-th element of \(u(\mathbf{x},T)\). Suppose that, when the DNN classifies \(\mathbf{x}\), the most probable class \(c_{A}\) is returned with probability \(p_{A}=u^{c_{A}}(\mathbf{x},T)\), and the "runner-up" class is returned with probability \(p_{B}\). Our main result of this subsection is to estimate the area around the data point \(\mathbf{x}\) in which the prediction \(G(\mathbf{x})\) is robust.
**Theorem 3.1**: _Suppose the velocity \(v(\mathbf{x},t)\) is a continuous function on \(\mathbb{R}^{d}\times[0,T]\) which satisfies the Lipschitz condition, i.e., there exists a constant \(L>0\) such that_
\[\|v(\mathbf{x}_{1},t)-v(\mathbf{x}_{2},t)\|\leq L\|\mathbf{x}_{1}-\mathbf{x}_{2}\|,\quad\forall (\mathbf{x}_{1},t),(\mathbf{x}_{2},t)\in\mathbb{R}^{d}\times[0,T]\]
_Let \(u(\mathbf{x},T)\) be the bounded solution of Equation (1). Suppose \(c_{A}\in\mathcal{Y}\) and_
\[u^{c_{A}}(\mathbf{x},T)=p_{A}\geq p_{B}=\max_{i\neq c_{A}}u^{i}(\mathbf{x},T),\]
_then \(G(\mathbf{x}+\mathbf{\delta})=c_{A}\) for all \(\|\mathbf{\delta}\|_{2}\leq R\), where_
\[R=\frac{\sigma}{\sqrt{2d}}(p_{A}-p_{B})\]
We assume \(v\) to be Lipschitz continuous because \(v(\mathbf{x},t)=\frac{1}{\Delta t}F(\mathbf{x},\mathbf{w})\) corresponds to a residual block, so its gradient can be bounded easily using the network parameters \(\mathbf{w}\).
We provide the proof of Theorem 3.1 in Appendix B. According to Theorem 3.1, the certified radius \(R\) is large when the diffusion coefficient \(\sigma^{2}\) is large and when the gap \(p_{A}-p_{B}\) between the top class and the runner-up is large. We will also consider the generalization ability of the model in Equation (1) in the next subsection.
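For illustration, the certified radius of Theorem 3.1 can be evaluated directly from the output probabilities (the dimension 3072 below corresponds to CIFAR-10 inputs; all values are illustrative):

```python
import math

def certified_radius(probs, sigma, d):
    """Certified l2 radius of Theorem 3.1: R = sigma / sqrt(2d) * (p_A - p_B)."""
    p_sorted = sorted(probs, reverse=True)
    return sigma / math.sqrt(2 * d) * (p_sorted[0] - p_sorted[1])

print(certified_radius([0.80, 0.15, 0.05], sigma=0.5, d=3072))  # ~4.1e-3
```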
### Rademacher complexity
For simplicity, we consider binary classification problems. Assume that data points are drawn from the underlying distribution \(\mathcal{D}\). The training set \(S_{N}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) is composed of \(N\) samples drawn i.i.d. from \(\mathcal{D}\). Rademacher complexity is one of the classic measures of generalization error. We first recall the definition of empirical Rademacher complexity.
**Definition** ([22]): Let \(\mathcal{H}:X\to\mathbb{R}\) be the space of real-valued functions on the space \(X\). For a given sample set \(S_{N}=\{\mathbf{x}_{i}\}_{i=1}^{N}\), the empirical Rademacher complexity of \(\mathcal{H}\) is defined as
\[R_{S_{N}}(\mathcal{H}):=\frac{1}{N}\mathbb{E}_{\sigma}[\sup_{h\in\mathcal{H}} \sum_{i=1}^{N}\sigma_{i}h(\mathbf{x}_{i})]\]
where \(\sigma_{1},\sigma_{2},\cdots,\sigma_{N}\) are i.i.d. Rademacher random variables with \(\mathbb{P}(\sigma_{i}=1)=\mathbb{P}(\sigma_{i}=-1)=\frac{1}{2}\).
Rademacher complexity is a tool to bound the generalization error: the smaller the generalization gap, the less the model overfits. We are interested in the empirical Rademacher complexity of the following function class:
\[\mathcal{G}_{\sigma}:=\left\{g(\mathbf{x})=\mathcal{T}_{T}(f)|f\in\mathcal{F}\right\}\]
where
\[\mathcal{F}:=\left\{f:\mathbf{x}\mapsto\phi(\mathbf{w}_{\mathrm{fc}}\mathbf{x})|\left\|\mathbf{ w}_{\mathrm{fc}}\right\|_{1}\leq W\right\}\]
Here \(\mathcal{T}_{t}\) is the solution operator of (1), and \(\phi\) is the sigmoid activation function. The function class \(\mathcal{F}\) represents the hypothesis class of linear classifiers, where we assume the \(\ell^{1}\) norm of the weight of the fully-connected layer is bounded by \(W\). The function class \(\mathcal{G}_{\sigma}\) includes the evolved residual neural networks from base classifiers. We can also assume the data points are bounded by \(R\), which is reasonable; e.g., in the CIFAR-10 dataset [19], the pixel values lie in \([0,1]\). Then we have the following theorem.
**Theorem 3.2**: _Given a data set \(S_{N}=\{\mathbf{x}_{i}\}_{i=1}^{N}\). Suppose the data points satisfy \(\|\mathbf{x}_{i}\|_{\infty}\leq R\) for \(\mathbf{x}_{i}\in S_{N}\); then_
\[R_{S_{N}}(\mathcal{G}_{\sigma})\leq R_{S_{N}}(\mathcal{F})\leq\inf_{\epsilon} \left(\sqrt{\frac{2d\log(3WR/\epsilon)}{N}}+\epsilon\right)\]
We provide the proof of Theorem 3.2 in Appendix C. According to Theorem 3.2, the empirical Rademacher complexity of \(\mathcal{G}_{\sigma}\) can be upper bounded by that of a simple linear classifier. This helps bound the generalization error of the evolved neural network. In the next section, we will present a training method for ResNet to verify our results.
## 4 Training Algorithm
To verify the effectiveness of the convection-diffusion model, we design a training method for ResNet. We assume that the data point lies in an unknown domain \(\mathcal{D}\subset\mathbb{R}^{d}\) and the label function \(l(\mathbf{x})\) is defined on \(\mathcal{D}\). Let \(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\subset\mathcal{D}\) be the training set, where \(\mathbf{x}_{i}\) is a data point sampled from \(\mathcal{D}\) and \(y_{i}=l(\mathbf{x}_{i})\) is the corresponding label.
As stated in the introduction, the forward propagation of ResNet corresponds to the transport equation. Denote the ResNet as \(g_{\theta}\) with trainable parameters \(\theta\). Then the process from time \(0\) to \(T-1\) in model (1) corresponds to the forward propagation of \(g_{\theta}\); in other words, the transport-equation part of the convection-diffusion model (1) is already inherently included in \(g_{\theta}\). To incorporate the diffusion part, we require the network \(g_{\theta}\) to satisfy the following constraint
\[\begin{cases}\frac{\partial g_{\theta}(\mathbf{x},t)}{\partial t}=\sigma^{2}\Delta g_{\theta}(\mathbf{x},t),&\mathbf{x}\in\mathcal{D},t\in[0,1]\\ g_{\theta}(\mathbf{x},t)=l(\mathbf{x}),&\mathbf{x}\in\mathcal{S},t\in[0,1]\end{cases} \tag{2}\]
where we set \(T-1\) as the new initial time. The first equation is the diffusion part of the convection-diffusion model (1). The second equation is the boundary condition for points in the training set: it is natural to require the network to classify training data correctly at any time \(t\in[0,1]\). To impose these constraints, we follow the techniques of PINNs [30] and design two loss terms, one for the boundary condition and one for the differential equation.
First, to fit the boundary condition, we use the following loss
\[L_{1}=\int_{\mathcal{S}}\int_{0}^{1}l^{\mathrm{CE}}\left(g_{\theta}(\mathbf{x},s),l( \mathbf{x})\right)\mathrm{d}s\mathrm{d}\mathbf{x}\approx\frac{1}{MN}\sum_{i=1}^{N}\sum_ {k=1}^{M}l^{\mathrm{CE}}(g_{\theta}(\mathbf{x}_{i},t_{k}),y_{i})\]
where \(l^{\mathrm{CE}}\) denotes cross-entropy loss. \(N\) is the number of data points in \(\mathcal{S}\), and \(M\) is the number of time steps. In practice, we evenly choose \(t_{k}=(k-1)/(M-1)\) to discretize time.
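A minimal PyTorch sketch of this discretized boundary loss is given below. The signature `g_theta(x, t)`, returning class logits for a batch `x` at a scalar time `t`, is an assumption matching the time-channel construction described at the end of this section.

```python
import torch
import torch.nn.functional as F

def boundary_loss(g_theta, x, y, M=5):
    """L1: average cross-entropy of g_theta(x, t_k) over evenly spaced t_k in [0, 1]."""
    ts = torch.linspace(0.0, 1.0, M)  # t_k = (k - 1) / (M - 1)
    losses = [F.cross_entropy(g_theta(x, t), y) for t in ts]
    return torch.stack(losses).mean()
```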
Then, to fit the differential equation, we use the following mean square error loss
\[L_{2}=\int_{\mathcal{D}}\int_{0}^{1}\left(\frac{\partial g_{\theta}}{\partial t }(\mathbf{x},s)-\sigma^{2}\Delta g_{\theta}(\mathbf{x},s)\right)^{2}\mathrm{d}s \mathrm{d}\mathbf{x}\]
In practice, we only have access to the training set \(\mathcal{S}\) and do not know the underlying domain \(\mathcal{D}\); thus, we treat a neighborhood of \(\mathcal{S}\) as the domain. The neighborhood is obtained by adding uniform noise \(\epsilon\) to \(\mathcal{S}\). The integral term is then approximated by
\[L_{2} \approx\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{1}\left(\frac{\partial g _{\theta}}{\partial t}(\mathbf{x}_{i}+\epsilon,s)-\sigma^{2}\Delta g_{\theta}( \mathbf{x}_{i}+\epsilon,s)\right)^{2}\mathrm{d}s\] \[\approx\frac{1}{MN}\sum_{i=1}^{N}\sum_{k=1}^{M}\left(\frac{ \partial g_{\theta}}{\partial t}(\mathbf{x}_{i}+\epsilon,t_{k})-\sigma^{2}\Delta g _{\theta}(\mathbf{x}_{i}+\epsilon,t_{k})\right)^{2}.\]
Nonetheless, for neural networks, exact computation of the Laplacian of the output w.r.t. the input is computationally intractable [27] because of the high input dimension (e.g., for CIFAR-10 images the dimension is 3072). Thus, we use a finite difference approximation. Denote the difference operator \(\Delta_{h,\mathbf{v}}\) by
\[\Delta_{h,\mathbf{v}}g_{\theta}(\mathbf{x}_{i},s)=\frac{g_{\theta}(\mathbf{x}_{i}+h\mathbf{v},s)+g_{\theta}(\mathbf{x}_{i}-h\mathbf{v},s)-2g_{\theta}(\mathbf{x}_{i},s)}{h^{2}}\]
Using Taylor's formula and the law of large numbers, we have
\[\Delta g_{\theta}(\mathbf{x}_{i},s)=\mathbb{E}_{\mathbf{v}\sim\mathcal{N}(0,I)}\left( \Delta_{h,\mathbf{v}}g_{\theta}(\mathbf{x}_{i},s)\right)+O(h^{2})\approx\frac{1}{K} \sum_{j=1}^{K}\Delta_{h,\mathbf{v}_{i,j}}g_{\theta}(\mathbf{x}_{i},s)\]
where the \(\{\mathbf{v}_{i,j}\}\) are i.i.d. standard normal vectors and \(K\) is the number of samples in the average. Unless otherwise specified, we set \(K=1\) in order to reduce the computation cost. Similarly, we introduce the time difference operator \(\mathbf{dt}_{\tau}\),
\[\mathbf{dt}_{\tau}g_{\theta}(\mathbf{x}_{i},s)=\frac{g_{\theta}(\mathbf{x}_{i},s+\tau) -g_{\theta}(\mathbf{x}_{i},s-\tau)}{2\tau}\]
to substitute for \(\frac{\partial g_{\theta}}{\partial t}\). Substituting these difference operators into \(L_{2}\), the loss term for the differential equation becomes
\[L_{2}=\frac{1}{MN}\sum_{i=1}^{N}\sum_{k=1}^{M}\left(\mathbf{dt}_{\tau}g_{ \theta}(\mathbf{x}_{i}+\epsilon,t_{k})-\sigma^{2}\frac{1}{K}\sum_{j=1}^{K}\Delta_ {h,\mathbf{v}_{i,j}}g_{\theta}(\mathbf{x}_{i}+\epsilon,t_{k})\right)^{2}\]
The final loss function for training \(g_{\theta}\) is a weighted sum of \(L_{1}\) and \(L_{2}\), \(L=L_{1}+\lambda L_{2}\).
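Putting the pieces together, a sketch of the PDE residual loss \(L_{2}\) (uniform noise around \(\mathcal{S}\), central difference \(\mathbf{dt}_{\tau}\) in time, \(K\)-sample Monte Carlo Laplacian \(\Delta_{h,\mathbf{v}}\)) might look as follows. Again, `g_theta(x, t)` returning logits is an assumption, and the sketch is illustrative rather than the authors' exact implementation.

```python
import torch

def pde_loss(g_theta, x, sigma2, h=0.1, tau=1e-4, eps=8/255, M=2, K=1):
    """L2: finite-difference residual of dg/dt = sigma^2 * Laplacian(g)."""
    x = x + torch.empty_like(x).uniform_(-eps, eps)  # neighborhood of S
    ts = torch.linspace(0.0, 1.0, M)
    res = 0.0
    for t in ts:
        # central difference dt_tau in time
        dt = (g_theta(x, t + tau) - g_theta(x, t - tau)) / (2 * tau)
        # K-sample Monte Carlo estimate of the Laplacian via Delta_{h,v}
        lap = 0.0
        for _ in range(K):
            v = torch.randn_like(x)
            lap = lap + (g_theta(x + h * v, t) + g_theta(x - h * v, t)
                         - 2 * g_theta(x, t)) / h**2
        res = res + ((dt - sigma2 * lap / K) ** 2).mean()
    return res / M

# total loss: L = boundary_loss(g, x, y) + lam * pde_loss(g, x, sigma2=0.2)
```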
Since the input of the ResNet \(g_{\theta}\) is \((\mathbf{x},t)\), which now contains an additional time variable \(t\), we modify the network input dimension from \([C\times H\times W]\) to \([(C+1)\times H\times W]\), where the additional channel represents \(t\). Accordingly, we slightly change the structure of ResNet by adding an extra input channel to the first convolution layer; otherwise, our ResNet structure is identical to the vanilla ResNet. When \(t=0\), the additional channel is all zeros and has no impact on the network output, so the network functions exactly like a vanilla ResNet.
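A sketch of this time-channel modification is shown below, using a torchvision ResNet18 as in the experiments; the exact layer surgery may differ from the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TimeChannelResNet(nn.Module):
    """Wrap a ResNet so its input is (x, t): t is appended as an extra channel."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = resnet18(num_classes=num_classes)
        # 4 input channels instead of 3: the extra one carries t.
        self.net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)

    def forward(self, x, t):
        # Constant t-channel; at t = 0 it is all zeros and has no effect.
        t_map = torch.full_like(x[:, :1], float(t))
        return self.net(torch.cat([x, t_map], dim=1))
```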
## 5 Experiments
In this section, we will numerically verify the performance of our training method for ResNets.
### Preliminaries
_Datasets._ In our experiments, we consider both the synthetic half-moon dataset and the real-world CIFAR-10 [19], Fashion-MNIST [40] and SVHN [28] datasets. The half-moon dataset is a randomly generated 2D synthetic dataset, for which we generate 500 training points and 1000 testing points with a noise standard deviation of 0.3. CIFAR-10 contains 60K \(3\times 32\times 32\) color images from 10 classes, with 50K used for training and 10K for testing. SVHN is a 10-class house-number classification dataset containing 73257 training images and 26032 testing images, each of size \(3\times 32\times 32\). Fashion-MNIST is a 10-class greyscale dataset containing 60K training images and 10K testing images, each of size \(28\times 28\).
_Performance evaluations._ We evaluate performance by both natural accuracy on original test samples and robust accuracy on adversarial test samples within a perturbation range \(\epsilon\). To craft adversarial samples, we use Projected Gradient Descent (PGD) and AutoAttack [6]. PGD first adds random uniform noise from \((-\epsilon,\epsilon)\) to clean samples, and then iterates the Fast Gradient Sign Method (FGSM) with fixed step size \(\alpha\) for \(m\) steps,
\[\mathbf{x}^{(m)}=\text{Clip}_{\mathbf{x},\epsilon}\{\mathbf{x}^{(m-1)}+\alpha\text{sign}( \nabla\mathcal{L}(\mathbf{x}^{(m-1)},y))\}\]
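For reference, a standard \(L^{\infty}\) PGD implementation matching this update (random start, iterated FGSM, projection via the Clip operator) is sketched below for a plain classifier `model(x)`; for the time-channel network one would attack at a fixed \(t\).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """L_inf PGD: random start in (-eps, eps), then iterated FGSM with
    projection back onto the eps-ball around x and the valid pixel range."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```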
AutoAttack [6] is a more reliable benchmark for evaluating robustness, which ensembles four strong parameter-free attacks. It has been reported that some defense methods which claim to be robust against PGD attacks are broken by AutoAttack. Thus, we report classification accuracy on adversarial examples crafted by AutoAttack to evaluate the robustness of our method.
For half-moon dataset, we apply PGD[20] (step number \(m=20\)) attack with \(\alpha=0.01\) and \(\epsilon=0.2\) under \(l^{\infty}\) norm. For CIFAR-10 and SVHN, we apply PGD[20] attack with \(\alpha=2/255\) and \(\epsilon=8/255\). For Fashion-MNIST, we apply PGD[20] attack with \(\alpha=0.01\) and \(\epsilon=0.1\). AutoAttack is parameter-free and thus we only set \(\epsilon\) as same as that of PGD attack.
### Experiments on synthetic dataset
In this subsection, we numerically verify the efficacy of our training method in improving the robustness of ResNet on the half-moon dataset.
We first vary the hyperparameters \(\lambda\) and \(\sigma^{2}\) and present the classification accuracy on adversarial examples in Figure 2. The robust accuracy increases as \(\lambda\) and \(\sigma^{2}\) increase, which indicates that the introduced regularizer helps improve the robustness of ResNet. Moreover, we plot the decision boundaries of the naturally trained ResNet and of ResNets trained with different \(\lambda,\sigma^{2}\) in Figure 3. We observe that the decision boundary of natural training is irregular, while those of our models are smoother. These experimental results are consistent with our theory.
### Experiment on benchmarks
We further test the performance of our method on the CIFAR-10, SVHN and Fashion-MNIST datasets. We choose ResNet18 as the backbone model; our method adds an additional channel to the input convolutional layer.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & Methods & Natural & PGD[20] & AutoAttack \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet18 & 95.18 & 0.0 & 0.0 \\ & Ours & 88.54 & 20.02 & 17.64 \\ \hline \multirow{2}{*}{SVHN} & ResNet18 & 96.58 & 0.40 & 0.02 \\ & Ours & 94.16 & 19.05 & 15.48 \\ \hline \multirow{2}{*}{Fashion-MNIST} & ResNet18 & 93.95 & 0.0 & 0.0 \\ & Ours & 92.01 & 35.27 & 23.48 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Natural accuracy and robust accuracy under PGD[20] and AutoAttack on different datasets (%).
Figure 3: Decision boundary of natural trained ResNet and ResNet trained by our method with different hyperparameters \(\lambda\) and \(\sigma^{2}\).
Figure 2: Robust accuracy of ResNet trained by our model with different hyperparameters \(\lambda\) and \(\sigma^{2}\) on half-moon dataset.
During training, we apply standard data augmentation techniques, including random crops and horizontal flips. The batch size is 128. We run 200 epochs with an initial learning rate of 0.1, which decays by a factor of 10 at the 80th, 120th and 160th epochs. We use a stochastic gradient descent optimizer with momentum 0.9 and weight decay \(5\times 10^{-4}\).
There are several hyperparameters in our algorithm that need to be determined. \(h\) and \(\tau\) are the spatial and temporal discretization parameters; we choose \(\tau=10^{-4}\) and \(h=0.1\). We choose a fairly large \(h\) because neural network outputs are highly unstable along the spatial dimensions, and training does not converge when \(h\) is too small. To reduce the computation cost, we choose \(K=1\) when computing the Laplacian \(\Delta\), and only consider the initial and final time steps \(t_{1}=0,t_{2}=1\) when computing the regularization loss. The uniform noise \(\epsilon\) that we add to the training set \(\mathcal{S}\) to approximate the underlying domain \(\mathcal{D}\) is set equal to the attack range of each dataset, i.e., 8/255 for CIFAR-10 and SVHN, and 0.1 for Fashion-MNIST. \(\lambda\) and \(\sigma^{2}\) both affect the trade-off between natural accuracy and robust accuracy: with larger \(\lambda\) or \(\sigma^{2}\), natural accuracy decreases while robust accuracy increases. We report the results for \(\lambda=0.005\) and \(\sigma^{2}=0.2\) in Table 2, and an ablation study on the two parameters can be found in Table 3. From Table 2, we see that our method, which introduces a regularization term based on the convection-diffusion equation, raises the classification accuracy of ResNet on adversarial samples by a large margin. We emphasize that our method does not include any adversarial training techniques, which would train the model on adversarial samples crafted by PGD attacks. From Table 3, we observe that with fixed \(\sigma^{2}\), natural accuracy decreases and robust accuracy increases as \(\lambda\) increases, and vice versa.
## 6 Conclusion
In this paper, we theoretically prove that, under reasonable assumptions, the evolution from a base linear classifier to residual neural networks should be modeled by a convection-diffusion equation. Motivated by PDE theory, we analyze the robustness and Rademacher complexity of the proposed isotropic models. Based on these theoretical results, we develop a training method for ResNet and verify its effectiveness through experiments.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\lambda\) & \(\sigma^{2}\) & Natural & PGD[20] & AutoAttack \\ \hline
0.2 & 0.2 & 80.33 & 24.91 & 23.89 \\
0.1 & 0.2 & 82.64 & 24.04 & 22.97 \\
0.05 & 0.2 & 84.51 & 23.58 & 22.53 \\
0.01 & 0.2 & 87.45 & 20.63 & 18.37 \\
0.005 & 0.2 & 88.54 & 20.02 & 17.64 \\ \hline
0.2 & 0.1 & 81.79 & 20.56 & 19.74 \\
0.1 & 0.1 & 84.73 & 20.02 & 18.87 \\
0.05 & 0.1 & 85.95 & 19.07 & 17.76 \\
0.01 & 0.1 & 89.52 & 18.60 & 14.12 \\
0.005 & 0.1 & 89.95 & 15.22 & 11.11 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Natural accuracy and robust accuracy under PGD[20] and AutoAttack on CIFAR-10, with varying parameter \(\lambda\) and \(\sigma^{2}\) (%).
We are aware that modeling the convection-diffusion equation through a regularization term is only one of many possible approaches; we look forward to exploring other paths in future work.
2308.13870 | Brain-like representational straightening of natural movies in robust feedforward neural networks | Tahereh Toosi, Elias B. Issa | 2023-08-26T13:04:36Z | http://arxiv.org/abs/2308.13870v1

# Brain-like representational straightening of natural movies in robust feedforward neural networks
###### Abstract
Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies. Prior work established straightening in neural representations of the primate primary visual cortex (V1) and perceptual straightening in human behavior as a hallmark of biological vision in contrast to artificial feedforward neural networks which did not demonstrate this phenomenon as they were not explicitly optimized to produce temporally predictable movie representations. Here, we show robustness to noise in the input image can produce representational straightening in feedforward neural networks. Both adversarial training (AT) and base classifiers for Random Smoothing (RS) induced remarkably straightened feature codes. Demonstrating their utility within the domain of natural movies, these codes could be inverted to generate intervening movie frames by linear interpolation in the feature space even though they were not trained on these trajectories. Demonstrating their biological utility, we found that AT and RS training improved predictions of neural data in primate V1 over baseline models providing a parsimonious, bio-plausible mechanism - noise in the sensory input stages - for generating representations in the early visual cortex. Finally, we compared the geometric properties of frame representations in these networks to better understand how they produced representations that mimicked the straightening phenomenon from biology. Overall, this work elucidating emergent properties of robust neural networks demonstrates that it is not necessary to utilize predictive objectives or train directly on natural movie statistics to achieve models supporting straightened movie representations similar to human perception that also predict V1 neural responses.
## 1 Introduction
In understanding the principles underlying biological vision, a longstanding debate in computational neuroscience is whether the brain is wired to predict the incoming sensory stimulus, most notably formalized in predictive coding (Rao & Ballard, 1999; Friston, 2009; Millidge et al., 2021), or whether neural circuitry is wired to recognize or discriminate among patterns formed on the sensory epithelium, popularly exemplified by discriminatively trained feedforward neural networks (DiCarlo et al., 2012; Tacchetti et al., 2018; Kubilius et al., 2018). Arguing for a role of prediction in vision, recent work found perceptual straightening of natural movie sequences in human visual perception (Henaff et al., 2019). Such straightening is diagnostic of a system whose representation could be linearly read out to perform prediction over time, and the idea of representational straightening resonates with machine learning efforts to create new types of models that achieve equivariant, linear codes for natural movie sequences. Discriminatively trained networks, however, lack any prediction over time in their supervision. It may not be surprising, then, that large-scale ANNs trained for classification produce representations that show almost no improvement in straightening relative to the input pixel space, while human observers clearly demonstrate perceptual straightening of natural movie sequences (subsequently also found in neurons of primary visual cortex, V1 (Henaff et al.,
2019; 2021)). This deficiency in standard feedforward ANNs might suggest a need for new models trained on predictive loss functions rather than pure classification to emulate biological vision.
Here, we provide evidence for an alternative viewpoint, that biologically plausible straightening can be achieved in ANNs trained for robust discrimination, without resorting to a prediction objective or natural movies in training. Drawing on insights from emergent properties of adversarially-trained neural networks in producing linearly invertible latent representations, we highlight the link between perceptual straightening of natural movies to invertible latent representations learned from static images (Figure 1). We examine straightening in these robust feedforward ANNs finding that their properties relate to those in the biological vision framework. The contributions of this work are as follows:
1. We show that robust neural networks give rise to straightened feature representations for natural movies in their feature space, comparable to the straightening measured in the primate brain and human behavior, and completely absent from standard feedforward networks.
2. We show that linearly interpolating between the start and end frames of a movie in the output feature space of robust ANNs produces synthetic frames similar to those of the original natural movie sequence in image space. Such invertible linear interpolation is precisely the definition of a temporally predictive feature representation.
3. Compared to prior models of early visual cortex, robustness to input noise (corruption or adversarial robustness) is significantly better at explaining neural variance measured from V1 neurons than non-robustly trained baseline models, suggesting a new hitherto unconsidered mechanism for learning the representations in early cortical areas that achieves natural movie straightening.
Figure 1: Perceptual straightening of movie frames can be viewed as invertibility of latent representations for static images. Left: straightening of representations refers to a decrease in the curvature of the trajectory in representation space such as a neural population in the brain or human perceptual space, but standard ANNs do not show straightening (Henaff et al., 2019; 2021). Right: Invertibility of latent representation refers to interpolation between the representation of two images (e.g. an image of a dog and an image of a cat), where the invertible interpolations show the main features of a dog morph into the main features of a cat. Invertible representations emerge in robust ANNs (Engstrom et al., 2019), obviating the need to directly train for temporal straightening.
## 2 Related work
### Mechanisms for producing brain-like representations
Feedforward ANNs as models of biological vision.Standard feedforward ANNs, although lacking a number of bio-plausible features such as feedback connections or a local learning rule (Whittington & Bogacz, 2019), can still explain the neural variance (Schrimpf et al., 2018) recorded from rodent (Bakhtiari et al., 2021), monkey (Yamins et al., 2014; Bashivan et al., 2019), and human visual cortex (Khaligh-Razavi & Kriegeskorte, 2014; Cichy et al., 2016) better than alternatives considered more bio-plausible that use a prediction objective function (e.g., PredNet and CPC (Zhuang et al., 2021; Schrimpf et al., 2020)). Thus, for learning the representations in the brain, regardless of the bio-plausibility of mechanisms, feedforward ANNs provide a parsimonious, more tractable class of leading models for object recognition in the visual cortex.
Models of primary visual cortex.In neuroscience, rather than rely solely on top-down training objectives like standard ANNs do, there has been a tradition of explaining early visual representations using more fundamental principles such as sparse coding and predictive coding as well as invoking unsupervised training (Olshausen & Field, 1996; Rao & Ballard, 1999). For example, unsupervised _slow feature analysis_ extracts the slow-varying features from fast-varying signals in movies based on the intuition that most external salient events (such as objects) are persistent in time, and this idea can be used to explain the emergence of complex cells in V1 (Berkes & Wiskott, 2005). Recent work in machine learning has attempted to blend more bottom-up principles with top-down training by experimenting with swapping out ANN early layers with V1-like models whose filters are inspired by neuroscience studies (Dapello et al., 2020). This blended model turns out to have benefits for classification robustness in the outputs. However, it remains unclear whether there is a form of top-down training that can produce V1-like models. Such a mechanism would provide a fundamentally different alternative to prior proposals of creating a V1 through sparse coding or future prediction (Henaff et al., 2019, 2021).
### Temporal prediction and invertibility in neural networks
Learning to predict over time.Changes in architecture, training diet (movies), and objective (predicting future frames) have all been explored as mechanisms to produce more explicit equivariant representations of natural movies (Lotter et al., 2016; van den Oord et al., 2018). Directly related to the idea of straightening, penalizing the curvature of representations of frames was used in _Learning to linearize_(Goroshin et al., 2015) to learn straightened representations from unlabeled videos. This class of models does not need supervision which makes them more bio-plausible in nature; however, as mentioned in the previous section, they lag behind supervised feedforward ANNs both in terms of learning effective representations for object recognition and in producing feature representations that predict neural data.
Learning invertible latents.In deep learning applications, invertibility is mostly discussed for generative neural networks, as a constraint for learning a prior in signal-processing applications such as image de-noising, signal compression, and image reconstruction from few and noisy measurements, or for reconstructing and modifying real images. Usually, invertibility is implemented by carefully designing dedicated architectures (Jacobsen et al., 2018; Chen et al., 2019). However, it has recently been shown that it can emerge in standard feedforward ANNs when they undergo training for adversarial robustness (Engstrom et al., 2019a;c). These works showed empirically that adversarially robust training encourages invertibility, as linear interpolation between classes (e.g., cat to dog) results in semantically smooth image-to-image translation (Engstrom et al., 2019a), as opposed to the blurry image sequences produced by standard ANNs.
We reasoned that robust networks that encourage invertibility may also lead to straightening as this is a property that would be related to improved invertibility of a network, so we sought to extend prior work and study the behavior of robustly trained networks specifically in the domain of natural movies. We report on how these networks straighten natural movies in their feature spaces and can invertibly reproduce movie frames in a natural sequence.
## 3 Methods
### Baseline models
We consider the class of feedforward convolutional neural networks, typically restricting to the ResNet-50 (He et al., 2015) architecture trained on ImageNet for the main analyses. Baseline networks (not trained for robustness) include a supervised ResNet-50/ResNet-101/ResNet-152 and a self-supervised model (Barlow Twins (Zbontar et al., 2021)). We trained ResNet-50 for ImageNet classification without augmentations and with extensive augmentations (Chen et al., 2020), labeled as _SupNoAugm_ and _SupMocoAugm_, respectively. We also consider VOneResNet (biological V1 front-end (Dapello et al., 2020)) and a ResNet-50 trained as a base network for action recognition (Chen et al., 2021), but include these as separate examples in the Appendix since they use a modified architecture.
\begin{table}
\begin{tabular}{l c c c}
**Models** & **Clean accuracy** & **Robust accuracy** & **Model reference** \\ \hline RN50 AT \(L_{2}:\epsilon=3\) & 58.50 & 57.81 & (Engstrom et al., 2019a) \\ RN50 AT \(L_{\infty}:\epsilon=4\) & 62.80 & 61.40 & (Engstrom et al., 2019a) \\ RN50 AT \(L_{\infty}:\epsilon=8\) & 48.29 & 47.01 & (Engstrom et al., 2019a) \\ RN50 RS \(L_{2}:\epsilon=0.25\) & 39.40 & 36.01 & (Cohen et al., 2019) \\ RN50 RS \(L_{2}:\epsilon=0.5\) & 23.75 & 22.21 & (Cohen et al., 2019) \\ RN50 RS \(L_{2}:\epsilon=1\) & 10.62 & 10.17 & (Cohen et al., 2019) \\ RN50 Standard & 75.43 & 52.32 & (He et al., 2015) \\ RN50 No augmentation & 64.35 & 28.13 & custom \\ RN50 Extensive augmentation & 75.27 & 53.08 & custom \\ RN50 Self-supervised & 70.18 & 41.73 & (Zbontar et al., 2021) \\ \end{tabular}
\end{table}
Table 1: Clean accuracy and robust (attack: \(L_{2},\epsilon=0.1\)) accuracy for the models used. Except for the custom models, all the other models were obtained from the repository of the references. Note that RS here refers to the base classifier in random smoothing without probabilistic inference.
Figure 2: ANNs show straightening of representations when robustness-to-noise constraints (noise augmentation or adversarial training) are added to their training. Measurements of the straightening of movie sequences (from (Hénaff et al., 2019)) in each layer of the ResNet50 architecture under different training regimes: supervised training (standard), no training (random parameters), self-supervised training (Zbontar et al., 2021), supervised training with no augmentations, supervised training with extensive augmentations, supervised training with noise augmentation (base classifiers for RS) (Cohen et al., 2019), and supervised adversarial training (Engstrom et al., 2019a)
### Models trained for robustness
We consider two forms of models trained for minimizing a classification loss \(\mathcal{L}_{ce}\) in the face of input perturbations \(\delta\in\mathbb{R}^{h\times w\times c}\) subject to constraints on the overall magnitude of perturbations in the input space, where \(x\), \(y\), \(\theta\) are the network input, output, and classifier parameters, respectively:
\[\mathcal{L}_{ce}(\theta,x+\delta,y) \tag{1}\]
In adversarially trained networks, projected gradient descent from the output space finds maximal directions of perturbation in the input space, limited to length \(\epsilon\), and training entails minimizing the effect of these perturbation directions on the network's output (Madry et al., 2018). In random smoothing (Lecuyer et al., 2018; Cohen et al., 2019), a supervised network is trained in the face of Gaussian noise added to the input space as the base classifier, before probabilistic inference is performed. In this work, we only use the representations learned by the base classifiers, without the probabilistic inference. The perturbations \(\delta\) of the base classifiers thus follow:
\[\delta_{rand}\sim\mathcal{N}(0,\sigma^{2}I),\qquad\delta_{adv}:=\operatorname*{arg\,max}_{\|\delta\|_{p}\leq\epsilon}\mathcal{L}_{ce}(\theta,x+\delta,y) \tag{2}\]
These defenses to input noise have different motivations: adversarial robustness provides a defense against white-box attacks, whereas random smoothing protects against general image corruptions. However, prior work has suggested a connection between corruption robustness and adversarial robustness (Ford et al., 2019). Theoretically, random smoothing leads to certified robustness (Cohen et al., 2019) and relates to a training condition for invertible networks (Jacobsen et al., 2018), while adversarial robustness has been shown empirically to lead to invertible latent representations in networks (Engstrom et al., 2019a).
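As a concrete sketch of the two perturbation schemes in Equation (2), a single training step might draw \(\delta_{rand}\) from a Gaussian (RS base classifier) or compute \(\delta_{adv}\) with \(L_{2}\) PGD (AT). Details such as step sizes below are illustrative assumptions, not the exact recipes of (Cohen et al., 2019) or (Engstrom et al., 2019a).

```python
import torch
import torch.nn.functional as F

def rs_step(model, x, y, sigma):
    """Random-smoothing base classifier: train on Gaussian-corrupted inputs."""
    return F.cross_entropy(model(x + sigma * torch.randn_like(x)), y)

def at_step(model, x, y, eps, alpha=None, steps=7):
    """Adversarial training: minimize loss on an L2 PGD perturbation of x
    (assumes 4D image batches for the per-sample norm reshaping)."""
    alpha = alpha or 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        g = g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        delta = (delta + alpha * g).detach()
        # project onto the L2 ball of radius eps around x
        norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        delta = (delta * (eps / (norms + 1e-12)).clamp(max=1.0)).requires_grad_(True)
    return F.cross_entropy(model(x + delta.detach()), y)
```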
### Representational Metrics
_Representational straightening_ estimates the local curvature \(c_{t}\) of a given representation \(r\) of a sequence of images (natural or artificial) of length \(N\), \(\{x_{t_{1}},x_{t_{2}},\ldots,x_{t_{N}}\}\), as the angle between vectors connecting nearby frames; these local estimates are averaged over the entire movie sequence to obtain the overall curvature of the representational trajectory (as in (Henaff et al., 2019)):
\[c_{t}=\arccos\bigg{(}\frac{r_{t}-r_{t-1}}{\|r_{t}-r_{t-1}\|}\cdot\frac{r_{t+1}-r_{t}}{\|r_{t+1}-r_{t}\|}\bigg{)},\quad C_{seq}=\frac{1}{N-2}\sum_{t=2}^{N-1}c_{t} \tag{3}\]
Lower curvature (angle between neighboring vectors) indicates a straighter trajectory, and in the results, we generally reference curvature values to the curvature in the input space (i.e., straightening relative to pixel space). This metric has been utilized in neuroscience showing that humans tend to represent nearby movie frames in a straightened manner relative to pixels (Henaff et al., 2019). This curvature metric is also closely related to objectives used in efforts to train models with equivariance by linearizing natural transformations in the world as an alternative to standard networks trained for invariant object classification (Goroshin et al., 2015; Sabour et al., 2017).
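A minimal implementation of Equation (3), applicable either to pixels or to the activations of any layer:

```python
import numpy as np

def curvature_deg(traj):
    """Mean curvature (degrees) of a trajectory; rows are per-frame
    representations r_t (flattened pixels or layer activations)."""
    d = np.diff(traj, axis=0)                       # displacements r_t - r_{t-1}
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    cos = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

frames = np.random.rand(11, 3 * 64 * 64)            # toy "movie" in pixel space
print(curvature_deg(frames))
```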
_Expansion._ We define the radius of a sequence of images from a movie clip as the radius of the minimum covering hyper-sphere circumscribing all the points representing the frames in \(r\) (Gartner, 1999). We use this measure to supplement the geometrical characterization of a movie sequence in pixel space and in a model's representational spaces. Like the representational straightening values, expansion values for models in the main text are referenced to the radius measured in pixel space, or to the radius measured for the same layer in a baseline network, by simply dividing by those references. We used miniball, a publicly available Python package based on (Gartner, 1999), to measure the radius of the covering hyper-sphere.
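A sketch of the expansion measure, assuming the `miniball` package's `get_bounding_ball` interface (which, to our understanding, returns the center and squared radius of the minimum enclosing ball):

```python
import numpy as np
import miniball  # pip install miniball; based on (Gartner, 1999)

def expansion_radius(traj):
    """Radius of the minimum enclosing hyper-sphere of the frame representations."""
    _, r2 = miniball.get_bounding_ball(np.asarray(traj, dtype=np.float64))
    return np.sqrt(r2)

frames = np.random.rand(11, 128)   # toy frames in a 128-d feature space
print(expansion_radius(frames))
```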
## 4 Results
### Robust ANNs exhibit representational straightening
With insights from connections to invertibility (see Figure 1), we hypothesized representational straightening of movie trajectories could be present in robustly trained neural networks. We took
the same movie stimuli publicly available (Henaff et al., 2019) (A.4.1, Figure 12) and the same metrics, and we tested the same architecture, ResNet50 (He et al., 2015), trained under the different loss functions in Table 1, to perform controlled head-to-head comparisons. Figure 2 shows the representational straightening of natural movies measured in layers of ResNet50 trained under AT (Engstrom et al., 2019a) and RS (Cohen et al., 2019) at different adversarial attack or noise levels, respectively. Robust neural networks, in contrast to other ANNs, decreased the curvature of natural movies. Straightening for artificial sequences as measured in (Henaff et al., 2019) (A.1, Figure 7) and for other models (A.2, Figures 9 and 8) is provided in the Appendix. Importantly, although most models, whether a standard ResNet-50 or one with a V1-like front-end, may display an initial dip in curvature for natural movies in the very earliest layers, this dip is not sustained in the feature representations of later layers, except in robustly trained networks (A.2, Figure 9 vs. A.1, Figure 7) and in networks trained on action recognition with temporally instructed training, which we include as a proxy for movie-like training even though its feedforward architecture deviates from ResNet50 by additional temporal processing components (A.2, Figure 8).
We quantified the similarity between inverted pseudo-frames and the original movie frames using the _Structural Similarity Index Measure_ (SSIM) (Wang et al., 2004), which utilizes intermediate-level statistics motivated by biological vision and is putatively more related to some aspects of human perception than simple pixel-space correspondence. Figure 3 shows an example of such inverted frames for a standard ResNet50, RS (\(L_{2}:\sigma^{2}=0.5\)) and AT (\(L_{2}:\epsilon=3\)), and a summary of the average measured invertibility using the SSIM metric on pseudo-frames from each model. As expected, and in line with the findings of previous work (Engstrom et al., 2019a), AT models scored relatively higher on the invertibility of frames than a baseline discriminative model. However, what had not been previously shown is that RS models, using merely the benefits of their robustness to noise augmentation (base classifier on top of the learned representation; no probabilistic inference), also exhibit higher invertibility scores than standard-trained models. Invertibility scores were consistently improved in RS and AT models across a variety of movies tested, including those with relatively stationary textures and not just dynamic objects (see A.4.4, Figure 13 for further examples and A.4.3, Table 3 for scores across all 11 movies). Thus, RS models along with AT models exhibit invertibility of representations for movie frames, which further demonstrates their ability to support perceptual straightening of natural movies in their highest layers, potentially functionally similar to the perceptual straightening previously measured in human subjects (Henaff et al., 2019).
### Random smoothing and adversarial training in explaining neural representations in the primate visual system
**Robustness to noise as a bio-plausible mechanism underlying straightening in primary visual cortex.** As shown above, straightening, a constraint for brain-like representations in the visual cortex, manifests in robust neural networks: both RS and AT training for robustness to the \(L_{2}\) norm generate straightened representations of movie sequences. However, to distinguish among models of object recognition, we can measure how well they explain variance in patterns of neural activity elicited in different visual cortical areas. Here, for all neural comparisons in our analyses, we measured the Brain-Score (Schrimpf et al., 2018) using the publicly available online resource, a battery of tests comparing models against previously collected data from the primate visual system (see Brain-Score.org), to assess each model's similarity to biological vision. We found that RS and AT models provided a better model of V1 (in terms of explained variance) than non-robust models (Figure 4). On other benchmarks, as we go up
Figure 4: Left: RS and AT are more predictive of V1 neural responses than other non-robust models of the same architecture (ResNet50). Right: Each dot represents a layer in ResNet50 trained under different loss function (color codes same as left). Higher representational straightening (negative curvature change) associates with higher V1 predictivity. Intriguingly, the highest V1 predictivity corresponds to layers that exhibit comparable straightening to that measured from V1 neurons (\(-10^{\circ}\) on average) (Henaff et al., 2021). Explained variance is noise-corrected and computed as in (Schrimpf et al., 2018)
the ventral stream hierarchy from V1 to IT, again keeping the layer assignment fixed across models for proper comparison, we observed a decrease in the explanatory power of robust models (A.3, Figure 11), in part presumably because robust models have lower object classification performance, which is known to drive fits in higher brain areas like V4 and IT that support object recognition (Yamins et al., 2014). Previous work (Dapello et al., 2020; Kong et al., 2022) linked adversarial robustness in models to their higher Brain-Score for V1, but we found that this may not be driven by _adversarial_ robustness per se; rather, (\(L_{2}\)) noise robustness is also sufficient (as in the base classifiers of RS tested here). More broadly, looking at neural fits across all models and their layers, we find that straightening in a particular model layer correlates with improved explanatory power for variance in cortical area V1 (Figure 4, middle panel; each dot is a layer from a model), being even more strongly predictive than the overall robustness of the model (A.3, Figure 10). The level of straightening reached by the best-fitting layers of RS and AT models was comparable to the 10-degree straightening estimated in macaque V1 neural populations (black dashed reference line in Figure 4). This complements the fact that robust models peak near the 30-degree straightening measured in perception (Figure 2), suggesting that robust models can achieve brain-like levels of straightening relative to V1 and perception.
**Does the geometry of movie frame representations in pixel space dictate straightening in downstream representations?** The connection between two properties of the same representation manifold, robustness to independently sampled noise and straightened trajectories of smooth input temporal sequences, is not immediately clear. Because robustness is achieved by adding noise bounded by a norm (\(L_{2}\) or \(L_{\infty}\)) in pixel space, a natural question is whether the radius of the bounding hyper-sphere of the frames of the tested movies in pixel space (see _Expansion_ in Methods) was correlated with the straightening measured in the feature space of each layer of the robustly trained models (Figure 5; also see A.5, Figure 14). We found, however, that different mechanisms seem to be at play for RS versus AT in terms of achieving straightening. RS models showed (small but) positive correlations, meaning that the smaller the ball containing all frames of a movie in input space, the larger the straightening effect for the representations of that movie's frames in the model; in AT models we see the opposite (negative) or no correlation. These divergent patterns underscore differences between these models and suggest that geometric size in pixel space does not strongly constrain the degree to which a movie can be straightened.
**Geometry of movie frame representations in feature space is relevant for capturing neural representations in V1.** Among the RS models tested on different input noise levels, RS \(L_{2}:\sigma^{2}=0.5\) stands out, as it gives a better model of V1 than those using smaller or larger input noise magnitudes (Figure 4). For this model, we found that, in addition to its intermediate level of straightening, the expansion score of movie frames, which is the radial size in its representation normalized to the size in the same layer of a baseline ResNet50, was highest compared to the
Figure 5: Can straightening for a movie sequence be explained by the size of the hyper-sphere bounding the frames (i.e. radius in pixel space)? While RS exhibits a small but positive correlation, the rest of the models, including AT, show negative or no correlations. A positive correlation means the smaller the size of the bounding hyper-sphere in pixel space, the more straightened the representation over the layers of the model.
other RS models (Figure 6, middle panel; measures are referenced to layers in a standard ResNet50 to highlight the relative effect of robustness training rather than effects driven by hierarchical layer). This demonstrates a potential trade-off between improving straightening in a representation and avoiding excessive contraction of movies by robust training relative to standard training. This balance seems best achieved for \(\sigma^{2}=0.5\), where we also see significantly higher predictivity of V1 cortical data (Figure 6, right panel). The best AT model also shows little contraction of movies coupled with high straightening (A.5, Figure 15).
## 5 Discussion
We have demonstrated novel properties of robust neural networks in how they represent natural movies. Conceptually, this work establishes a seemingly surprising connection between disparate ideas: robust discriminative networks trained on static images, on one hand, and work on learning to linearize by training on natural movies, on the other. These modeling paths can both result in linearized, or straightened, natural movie representations (Figure 1). From a machine learning perspective, the invertibility and concomitant representational straightening of robust networks suggest that they learn explainable representations of natural movie statistics. Biologically, the emergence of straightening in these networks, as well as their ability to better explain V1 data than baselines relatively lacking in straightening (Figure 4), provides new insights into potential neural mechanisms for previously difficult-to-explain brain phenomena.
Biological constraints could lend parsimony to selecting among models, each with a different engineering goal. On its face, RS, by virtue of using Gaussian noise instead of engineered noise, gains traction over adversarial training as a simpler and more powerful way of achieving robustness in ANNs, in line with a long history of probabilistic inference in the visual cortex of humans (Pouget et al., 2013). Indeed, looking across the range of robust models tested, the best-fitting model of V1 was not necessarily the most robust but tended toward more straightened and least contracted representations, consistent with the known dimensionality expansion from the sensory periphery to V1 in the brain (Field, 1994). Future work exploring a wider variety of robustness training in conjunction with more bio-plausible architectures, objectives, and training diets may yet elucidate the balance of factors contributing to biological vision.
At the same time, our work does not directly address how straightened representations in the visual system may or may not be utilized to influence downstream visual perception and behavior, and this connection is an important topic for future work. On the one hand, for supporting dynamical scene perception, behaviors that predict (extrapolate) or postdict (interpolate) scene properties over time (e.g., object position) may be supported by straightened natural movie representations. Indeed, both explanations, prediction and postdiction, have been invoked to account for psychophysical phenomena like the flash-lag illusion which presents an interesting test case of how the brain processes complex stimuli over time (Eagleman & Sejnowski, 2000). However, even for relatively stationary scenes such as those containing textures, we observed benefits for straightening and invertibility in robustly trained networks (see A.4, Tables 2 and 3). Further work is needed to explore how spatially local versus global features in the presence of simple versus complex motion are affected in their relative straightening by model training.
Figure 6: Geometric characteristics (straightening and expansion) of RS models related to V1 explainability. \(\Delta\) means the quantity is referenced to the same measure in a standard ResNet50.
###### Acknowledgements.
This work was supported by a Klingenstein-Simons fellowship, Sloan Foundation fellowship, and Grossman-Kavli Scholar Award, as well as an NVIDIA GPU grant, and was performed using the Columbia Zuckerman Axon GPU cluster. We thank all three reviewers for their constructive feedback that led to an improved final version of the paper.
2304.10074 | Improving Graph Neural Networks on Multi-node Tasks with Labeling Tricks | Xiyuan Wang, Pan Li, Muhan Zhang | 2023-04-20T04:03:40Z | http://arxiv.org/abs/2304.10074v1

# Improving Graph Neural Networks on Multi-node Tasks with Labeling Tricks
###### Abstract
In this paper, we provide a theory of using graph neural networks (GNNs) for _multi-node representation learning_, where we are interested in learning a representation for a set of more than one node such as a link. Existing GNNs are mainly designed to learn single-node representations. When we want to learn a node-set representation involving multiple nodes, a common practice in previous works is to directly aggregate the single-node representations obtained by a GNN. In this paper, we show a fundamental limitation of such an approach, namely the inability to capture the dependence among multiple nodes in a node set, and argue that directly aggregating individual node representations fails to produce an effective joint representation for multiple nodes. A straightforward solution is to distinguish target nodes from others. Formalizing this idea, we propose labeling trick, which first labels nodes in the graph according to their relationships with the target node set before applying a GNN and then aggregates node representations obtained in the labeled graph for multi-node representations. The labeling trick also unifies a few previous successful works for multi-node representation learning, including SEAL, Distance Encoding, ID-GNN, and NBFNet. Besides node sets in graphs, we also extend labeling tricks to posets, subsets and hypergraphs. Experiments verify that the labeling trick technique can boost GNNs on various tasks, including undirected link prediction, directed link prediction, hyperedge prediction, and subgraph prediction. Our work explains the superior performance of previous node-labeling-based methods and establishes a theoretical foundation for using GNNs for multi-node representation learning.
Graph Neural Networks, Multi-node Representation, Subgraph.
## 1 Introduction
Graph neural networks (GNNs) (Scarselli et al., 2009; Bruna et al., 2013; Duvenaud et al., 2015; Li et al., 2015; Kipf and Welling, 2016; Defferrard et al., 2016; Dai et al., 2016;
Velickovic et al., 2017; Zhang et al., 2018; Ying et al., 2018) have achieved great successes in recent years. While GNNs have been well studied for single-node tasks (such as node classification) and whole-graph tasks (such as graph classification), using GNNs on tasks that involve multiple nodes is less studied and less understood. Among such _multi-node representation learning_ problems, link prediction (predicting the existence/class/value of a link between a set of two nodes) is perhaps the most important one due to its wide applications in practice, such as friend recommendation in social networks (Adamic and Adar, 2003), movie recommendation in Netflix (Bennett et al., 2007), protein interaction prediction (Qi et al., 2006), drug response prediction (Stanfield et al., 2017), and knowledge graph completion (Nickel et al., 2015). Besides link prediction, other multi-node tasks, like subgraph classification and hyperedge prediction, are relatively new but have found applications in gene set analysis (Wang et al., 2020), user profiling (Alsentzer et al., 2020), drug interaction prediction (Srinivasan et al., 2021), temporal network modeling (Liu et al., 2022), group recommendation (Amer-Yahia et al., 2009), etc. In this paper, we study the ability of GNNs to learn multi-node representations. As the link task is the simplest multi-node case, we mainly use link prediction in this paper to visualize and illustrate our method and theory. However, our theory and method apply generally to all multi-node representation learning problems, such as subgraph (Alsentzer et al., 2020), hyperedge (Zhang et al., 2018) and network motif (Liu et al., 2022) prediction tasks.
Starting from the link prediction task, we illustrate the deficiency of existing GNN models for multi-node representation learning which motivates our labeling trick. There are two main classes of GNN-based link prediction methods: Graph AutoEncoder (GAE) (Kipf and Welling, 2016) and SEAL (Zhang and Chen, 2018; Li et al., 2020). **GAE** and its variational version VGAE (Kipf and Welling, 2016) first apply a GNN to the entire graph to compute a representation for each node. The representations of the two end nodes of the link are then aggregated to predict the target link. On the contrary, SEAL assigns node labels according to their distances to the two end nodes before applying the GNN on the graph. SEAL often shows much better practical performance than GAE. The key lies in SEAL's node labeling step.
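To make SEAL's labeling step concrete, a simplified sketch is shown below: each node is tagged with its shortest-path distances to the two end nodes of the target link before the GNN is applied. SEAL's actual Double-Radius Node Labeling additionally hashes this distance pair into a single integer and computes distances within an extracted enclosing subgraph with the target link masked; those details are omitted here.

```python
import networkx as nx

def distance_labels(G, u, v):
    """Label every node by its (distance-to-u, distance-to-v) pair;
    unreachable nodes fall into an 'infinite' distance bucket."""
    du = nx.single_source_shortest_path_length(G, u)
    dv = nx.single_source_shortest_path_length(G, v)
    inf = G.number_of_nodes()  # sentinel for "unreachable"
    return {w: (du.get(w, inf), dv.get(w, inf)) for w in G.nodes}

G = nx.cycle_graph(6)
print(distance_labels(G, 0, 2))  # targets 0 and 2 get labels (0, d) / (d, 0)
```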
We first give a simple example to show when GAE fails. In Figure 1a, \(v_{2}\) and \(v_{3}\) have symmetric positions in the graph--from their respective views, they have the same \(h\)-hop neighborhood for any \(h\). Thus, without node features, GAE will learn the same representation for \(v_{2}\) and \(v_{3}\). Therefore, when predicting which one of \(v_{2}\) and \(v_{3}\) is more likely to form a link with \(v_{1}\), GAE will aggregate the representations of \(v_{1}\) and \(v_{2}\) as the link representation of \((v_{1},v_{2})\), and aggregate the representations of \(v_{1}\) and \(v_{3}\) to represent \((v_{1},v_{3})\), thus giving \((v_{1},v_{2})\) and \((v_{1},v_{3})\) the same representation and prediction. The failure to distinguish links \((v_{1},v_{2})\) and \((v_{1},v_{3})\) that have different structural roles in the graph reflects one key limitation of GAE-type methods: by computing \(v_{1}\) and \(v_{2}\)'s representations independently of each other, GAE cannot capture the dependence between two end nodes of a link. For example, \((v_{1},v_{2})\) has a much smaller shortest path distance than that of \((v_{1},v_{3})\); and \((v_{1},v_{2})\) has both nodes in the same hexagon, while \((v_{1},v_{3})\) does not.
Take common neighbor (CN) (Liben-Nowell and Kleinberg, 2007), one elementary heuristic feature for link prediction, as another example. CN counts the number of common neighbors between two nodes to measure their likelihood of forming a link. It is the foundation of many other successful heuristics such as Adamic-Adar (Adamic and Adar, 2003)
and Resource Allocation (Zhou et al., 2009), which are also based on neighborhood overlap. However, GAE cannot capture such neighborhood-overlap-based features. As shown in Figure 1(a), there is 1 common neighbor between \((v_{1},v_{2})\) and 0 between \((v_{1},v_{3})\), but GAE always gives \((v_{1},v_{2})\) and \((v_{1},v_{3})\) the same representation. The failure to learn the common neighbor heuristic demonstrates GAE's severe limitation for link prediction. The root cause still lies in the fact that GAE computes node representations independently of each other: when computing the representation of one end node, it is unaware of the other end node.
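For reference, these neighborhood-overlap heuristics are one-liners on an explicit graph, which makes GAE's inability to express them all the more striking:

```python
import networkx as nx

G = nx.Graph([(1, 2), (1, 5), (2, 5), (2, 3), (3, 4), (4, 5)])

def common_neighbors(G, u, v):
    """CN score: size of the neighborhood overlap of u and v."""
    return len(set(G[u]) & set(G[v]))

print(common_neighbors(G, 1, 3))                       # CN score
print(next(nx.adamic_adar_index(G, [(1, 3)])))         # (u, v, AA score)
print(next(nx.resource_allocation_index(G, [(1, 3)])))  # (u, v, RA score)
```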
In fact, GAE represents a common practice of using GNNs to learn multi-node representations: obtaining individual node representations through a GNN and then aggregating the representations of the target nodes as the multi-node representation. Similar failures caused by the independence of node representation learning also happen in general multi-node representation learning problems. In the subgraph representation learning task, which is to learn representations for subgraphs inside a large graph (Alsentzer et al., 2020), representations aggregated from independently computed node representations fail to differentiate nodes inside and outside the subgraph. Figure 1(b) (from Wang and Zhang (2022)) shows an example. Directly aggregating node embeddings produced by a GNN leads to the same representation for subgraphs \((v_{1},v_{2},v_{3})\) and \((v_{1},v_{2},v_{4})\). However, the former subgraph forms a triangle while the latter one does not.
This paper solves the above type of failures from a _structural representation learning_ point of view. We adopt and generalize the notion of _most expressive structural representation_ (Srinivasan and Ribeiro, 2020), which gives multi-node substructures the same representation if and only if they are _isomorphic_ (a.k.a. symmetric, on the same orbit) in the graph. For example, link \((v_{1},v_{2})\) and link \((v_{4},v_{3})\) in Figure 1(a) are isomorphic, and a most expressive structural representation should give them the same representation. On the other hand, a most expressive structural representation will discriminate all non-isomorphic links (such as \((v_{1},v_{2})\) and \((v_{1},v_{3})\)). According to our discussion above, GAE-type methods that directly
Figure 1: (a) In this graph, nodes \(v_{2}\) and \(v_{3}\) are isomorphic; links \((v_{1},v_{2})\) and \((v_{4},v_{3})\) are isomorphic; link \((v_{1},v_{2})\) and link \((v_{1},v_{3})\) are **not** isomorphic. However, if we aggregate two node representations learned by a GNN as the link representation, we will give \((v_{1},v_{2})\) and \((v_{1},v_{3})\) the same prediction. (b) In this graph, nodes \(v_{3}\) and \(v_{4}\) are isomorphic. Aggregating the node embeddings within each subgraph, a GNN will produce equal embeddings for subgraphs \((v_{1},v_{2},v_{3})\) and \((v_{1},v_{2},v_{4})\), although the two subgraphs are not isomorphic. This problem was first observed by You et al. (2019), where it was interpreted as the failure of GNNs to capture node positions, and it was later formalized in (Srinivasan and Ribeiro, 2020).
aggregate node representations cannot learn a most expressive structural representation. Then, how can we learn a most expressive structural representation of node sets?
To answer this question, we revisit the other GNN-based link prediction framework, SEAL, and analyze how node labeling helps a GNN learn better node set representations. We find that two properties of the node labeling are crucial for its effectiveness: 1) target-nodes-distinguishing and 2) permutation equivariance. With these two properties, we define the _set labeling trick_, which treats each multi-node substructure as a node set and unifies previous node labeling methods into a single, most general form. Theoretically, we prove that with the set labeling trick, a sufficiently expressive GNN can learn most expressive structural representations of node sets (Theorem 12), which reassures us of GNNs' node set prediction ability. It also closes the gap between the nature of GNNs, which learn node representations, and the need for multi-node representations in node-set-based inference tasks.
Set labeling trick applies to multi-node substructures defined by node sets and can be used in a wide range of tasks, including link prediction and subgraph classification. However, to describe and unify even more tasks and methods, we propose three extensions of set labeling trick. One is _poset labeling trick_. In some tasks, target nodes have intrinsic order relations. For example, in citation graphs, each link points from the citing article to the cited one. In such cases, describing multi-node substructures with node sets loses the order information. This motivates us to add order information to the label and describe substructures with posets instead. Another extension is _subset labeling trick_. It unifies labeling methods besides SEAL (Zhang and Chen, 2018), like ID-GNN (You et al., 2021) and NBFNet (Zhu et al., 2021), which label only a subset of nodes each time. We formalize these methods and analyze their expressivity: when using GNNs without strong expressivity, subset labeling trick exhibits higher expressivity than set labeling trick in some cases. Last but not least, by converting hypergraphs to bipartite graphs, we extend labeling trick to hypergraphs.
## 2 Preliminaries
In this section, we introduce some important concepts that will be used in the analysis of the paper, including _permutation_, _poset isomorphism_ and _most expressive structural representation_.
We consider a graph \(\mathcal{G}=(V,E,\mathsf{A})\), where \(V=\{1,2,\ldots,n\}\) is the set of \(n\) vertices, \(E\subseteq V\times V\) is the set of edges, and \(\mathsf{A}\in\mathbb{R}^{n\times n\times k}\) is a 3-dimensional tensor containing node and edge features. The diagonal components \(\mathsf{A}_{i,i,:}\) denote features of node \(i\), and the off-diagonal components \(\mathsf{A}_{i,j,:}\) denote features of edge \((i,j)\). The node/edge types of heterogeneous graphs can also be expressed in \(\mathsf{A}\) using integers or one-hot encoding vectors. We further use \(\mathbf{A}\in\{0,1\}^{n\times n}\) to denote the adjacency matrix of \(\mathcal{G}\) with \(\mathbf{A}_{i,j}=1\) iff \((i,j)\in E\), where it is possible that \(\mathbf{A}_{i,j}\neq\mathbf{A}_{j,i}\). We let \(\mathbf{A}\) be the first slice of \(\mathsf{A}\), i.e., \(\mathbf{A}=\mathsf{A}_{:,:,1}\). Since \(\mathsf{A}\) contains the complete information of a graph, we also directly denote the graph by \(\mathsf{A}\).
### Permutation
**Definition 1**: _A **permutation**\(\pi\) is a bijective mapping from \(\{1,2,\ldots,n\}\) to \(\{1,2,\ldots,n\}\). All \(n!\) possible \(\pi\)'s constitute the permutation group \(\Pi_{n}\)._
Depending on the context, permutation \(\pi\) can mean assigning a new index \(\pi(i)\) to node \(i\in V\), or mapping node \(i\) to node \(\pi(i)\) of another graph. Slightly extending the notation, we let the permutation of a set/sequence denote permuting each element in the set/sequence. For example, permutation \(\pi\) maps a set of nodes \(S\subseteq V\) to \(\pi(S)=\{\pi(i)|i\in S\}\) and maps a set of node pairs \(S^{\prime}\subseteq V\times V\) to \(\pi(S^{\prime})=\{\pi((i,j))|(i,j)\in S^{\prime}\}=\{(\pi(i),\pi(j))|(i,j)\in S^{\prime}\}\). The permutation of a graph's tensor \(\mathsf{A}\), denoted as \(\pi(\mathsf{A})\), can also be defined: since the \(i\)-th and \(j\)-th nodes receive new indices \(\pi(i)\) and \(\pi(j)\) while keeping the features of the pair, \(\pi(\mathsf{A})_{\pi(i),\pi(j)}=\mathsf{A}_{i,j}\).
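To make the permutation action concrete, here is a minimal numpy sketch (our own illustration; the function name and the toy graph are ours, not from any referenced library):

```python
import numpy as np

def permute_graph(A: np.ndarray, pi: np.ndarray) -> np.ndarray:
    """Apply a node permutation pi to a graph tensor A of shape (n, n, k).

    pi[i] is the new index of node i, so pi(A)_{pi(i), pi(j)} = A_{i, j}.
    """
    n = A.shape[0]
    A_pi = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            A_pi[pi[i], pi[j]] = A[i, j]
    return A_pi

# A random 4-node graph tensor with 2 feature channels.
rng = np.random.default_rng(0)
A = rng.random((4, 4, 2))
pi = np.array([2, 0, 3, 1])          # the permutation i -> pi[i]
A_pi = permute_graph(A, pi)
assert np.allclose(A_pi[pi[1], pi[3]], A[1, 3])   # pi(A)_{pi(i),pi(j)} = A_{i,j}
```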
Permutation is closely related to _graph isomorphism_, i.e., whether two graphs describe the same structure. Intuitively, as nodes in graphs have no order, no matter what permutation is applied to a graph, the transformed graph should be isomorphic to the original graph. Similarly, if one graph can be transformed into another under some permutation, the two graphs should also be isomorphic. Formally speaking,
**Definition 2**: _Two graphs \(\mathsf{A}\in\mathbb{R}^{n\times n\times d},\mathsf{A}^{\prime}\in\mathbb{R}^ {n^{\prime}\times n^{\prime}\times d^{\prime}}\) are **isomorphic** iff there exists \(\pi\in\Pi_{n}\), \(\pi(\mathsf{A})=\mathsf{A}^{\prime}\)._
In whole graph classification tasks, models should give isomorphic graphs the same prediction as they describe the same structure, and differentiate non-isomorphic graphs.
### Poset isomorphism
To describe a substructure defined by a subset of nodes with internal relation, like a directed edge, we introduce poset. A poset is a set with a partial order. Partial order is a reflexive, antisymmetric, and transitive homogeneous relation on the set (Davey and Priestley, 2002).
**Definition 3**: _A **poset**\(S\) is a tuple \((U,\leq_{S})\), where \(U\) is a set, and \(\leq_{S}\subseteq U\times U\) is a relation on \(U\). Let \(u\leq_{S}v\) denote \((u,v)\in\leq_{S}\). \(\leq_{S}\) fulfills the following conditions._
1. _Reflexivity._ \(\forall v\in U,v\leq_{S}v\)_._
2. _Antisymmetry._ \(\forall u,v\in U\)_, if_ \(u\leq_{S}v\) _and_ \(v\leq_{S}u\)_, then_ \(u=v\)_._
3. _Transitivity._ \(\forall u,v,w\in U\)_, if_ \(u\leq_{S}v\) _and_ \(v\leq_{S}w\)_, then_ \(u\leq_{S}w\)_._
The permutation operation on partial order relation and poset is defined as follows.
\[\pi(\leq_{S})=\pi(\{(u,v)\ |\ (u,v)\in \leq_{S}\}) =\{(\pi(u),\pi(v))\ |\ (u,v)\in \leq_{S}\}, \tag{1}\] \[\pi(S)=\pi((U,\leq_{S})) =(\pi(U),\pi(\leq_{S})). \tag{2}\]
To describe when two posets derive the same substructure, we define _poset isomorphism_, which generalizes graph isomorphism to arbitrary node posets in a graph.
**Definition 4**: _(Poset isomorphism) Given two graphs \(\mathcal{G}=(V,E,\mathsf{A})\), \(\mathcal{G}^{\prime}=(V^{\prime},E^{\prime},\mathsf{A}^{\prime})\), and two node posets \(S=(U,\leq_{S}),U\subseteq V\), \(S^{\prime}=(U^{\prime},\leq_{S^{\prime}}),U^{\prime}\subseteq V^{\prime}\), we say substructures \((S,\mathsf{A})\) and \((S^{\prime},\mathsf{A}^{\prime})\) are isomorphic (denoted by \((S,\mathsf{A})\simeq(S^{\prime},\mathsf{A}^{\prime})\)) iff \(\exists\pi\in\Pi_{n},S=\pi(S^{\prime})\) and \(\mathsf{A}=\pi(\mathsf{A}^{\prime})\)._
A set is a particular case of a poset, where the partial order contains only the reflexive relations \(u\leq_{S}u,u\in U\). It can describe substructures without order, like undirected edges and subgraphs. Abusing the notation of poset, we sometimes also use \(S\) to denote a set and omit the trivial partial order relation. Then, _set isomorphism_ is defined in the following.
Definition 5 (Set isomorphism): Given two graphs \(\mathcal{G}=(V,E,\boldsymbol{\mathsf{A}})\), \(\mathcal{G}^{\prime}=(V^{\prime},E^{\prime},\boldsymbol{\mathsf{A}}^{\prime})\), and two node sets \(S\subseteq V\), \(S^{\prime}\subseteq V^{\prime}\), we say substructures \((S,\boldsymbol{\mathsf{A}})\) and \((S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})\) are isomorphic (denoted by \((S,\boldsymbol{\mathsf{A}})\simeq(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})\)) iff \(\exists\pi\in\Pi_{n},S=\pi(S^{\prime})\) and \(\boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{\prime})\).
Note that both set and poset isomorphism are **stricter** than graph isomorphism. They not only require a permutation that maps one graph to the other but also require that permutation to map the specific node poset \(S\) to \(S^{\prime}\).
In practice, when the target node poset does not contain all nodes in the graph, we are often more concerned with the case of \(\boldsymbol{\mathsf{A}}=\boldsymbol{\mathsf{A}}^{\prime}\), where isomorphic node posets are defined **in the same graph**. For example, when \(S=\{i\},S^{\prime}=\{j\}\) and \((i,\boldsymbol{\mathsf{A}})\simeq(j,\boldsymbol{\mathsf{A}})\), we say nodes \(i\) and \(j\) are isomorphic in graph \(\boldsymbol{\mathsf{A}}\) (or they have symmetric positions/same structural role in graph \(\boldsymbol{\mathsf{A}}\)). An example is \(v_{2}\) and \(v_{3}\) in Figure 1(a). Similarly, edge and subgraph isomorphism can also be defined as the isomorphism of their node posets.
### Structural Representations
Graph models should produce the same prediction for isomorphic substructures. We define permutation invariance and equivariance to formalize this property. A function \(f\) defined over the space of \((S,\boldsymbol{\mathsf{A}})\) is _permutation invariant_ (or _invariant_ for abbreviation) if \(\forall\pi\in\Pi_{n}\), \(f(S,\boldsymbol{\mathsf{A}})=f(\pi(S),\pi(\boldsymbol{\mathsf{A}}))\). Similarly, \(f\) is _permutation equivariant_ if \(\forall\pi\in\Pi_{n}\), \(\pi(f(S,\boldsymbol{\mathsf{A}}))=f(\pi(S),\pi(\boldsymbol{\mathsf{A}}))\). Permutation invariance/equivariance ensures that representations learned by a GNN are invariant to node indexing, a fundamental design principle of GNNs.
Now we define the _most expressive structural representation_ of a substructure \((S,\boldsymbol{\mathsf{A}})\), following (Srinivasan and Ribeiro, 2020; Li et al., 2020). It assigns a unique representation to each equivalence class of isomorphic substructures.
Definition 6: Given an invariant function \(\Gamma(\cdot)\), \(\Gamma(S,\boldsymbol{\mathsf{A}})\) is a **most expressive structural representation** for \((S,\boldsymbol{\mathsf{A}})\) if \(\forall S,\boldsymbol{\mathsf{A}},S^{\prime},\boldsymbol{\mathsf{A}}^{\prime},\ \Gamma(S,\boldsymbol{\mathsf{A}})=\Gamma(S^{\prime},\boldsymbol{\mathsf{A}}^{ \prime})\Leftrightarrow(S,\boldsymbol{\mathsf{A}})\simeq(S^{\prime}, \boldsymbol{\mathsf{A}}^{\prime})\).
For simplicity, we will directly use _structural representation_ to denote most expressive structural representation in the rest of the paper. We will omit \(\boldsymbol{\mathsf{A}}\) if it is clear from context. For a graph \(\boldsymbol{\mathsf{A}}\), we call \(\Gamma(\boldsymbol{\mathsf{A}})=\Gamma(\emptyset,\boldsymbol{\mathsf{A}})\) a _structural graph representation_, \(\Gamma(i,\boldsymbol{\mathsf{A}})\) a _structural node representation_ for node \(i\), and call \(\Gamma(\{i,j\},\boldsymbol{\mathsf{A}})\) a _structural link representation_ for link \((i,j)\). For a general node poset \(S\), we call \(\Gamma(S,\boldsymbol{\mathsf{A}})\) a _structural multi-node representation_ for \(S\).
Definition 6 requires that the structural representations of two substructures are the same if and only if the two substructures are isomorphic. That is, isomorphic substructures always have the **same** structural representation, while non-isomorphic substructures always have **different** structural representations. Due to the permutation invariance requirement, models should not distinguish isomorphic substructures. This implies that structural representations can discriminate all substructures that any invariant model can differentiate, and structural representations reach the highest expressivity.
## 3 The limitation of directly aggregating node representations
In this section, taking GAE for link prediction as an example, we show the critical limitation of directly aggregating node representations as a multi-node representation.
### GAE for multi-node representation
GAE (Kipf and Welling, 2016b) is a GNN-based link prediction model. Given a graph \(\boldsymbol{\mathsf{A}}\), GAE first uses a GNN to compute a representation \(\boldsymbol{z}_{i}\) for each node \(i\), and then uses the inner product of \(\boldsymbol{z}_{i}\) and \(\boldsymbol{z}_{j}\) to predict link \(\{i,j\}\):
\[\boldsymbol{\hat{A}}_{i,j}=\text{sigmoid}(\boldsymbol{z}_{i}^{\top} \boldsymbol{z}_{j}),\text{ where }\boldsymbol{z}_{i}\!=\!\text{GNN}(i, \boldsymbol{\mathsf{A}}),\boldsymbol{z}_{j}\!=\!\text{GNN}(j,\boldsymbol{ \mathsf{A}}).\]
Here \(\boldsymbol{\hat{A}}_{i,j}\) is the predicted score for link \(\{i,j\}\). The model is trained to maximize the likelihood of reconstructing the true adjacency matrix. The original GAE uses a two-layer GCN (Kipf and Welling, 2016a) as the GNN. In principle, we can replace GCN with any GNN, replace the inner product with any aggregation function over the target node embeddings (such as mean, sum, or Hadamard product), and substitute an MLP for the sigmoid. Then, GAE can be used for multi-node tasks by aggregating the target node embeddings produced by the GNN:
\[\boldsymbol{z}_{S}=\text{MLP}(\text{AGG}(\{\boldsymbol{z}_{i}|i\in S\})) \text{ where }\boldsymbol{z}_{i}\!=\!\text{GNN}(i,\boldsymbol{\mathsf{A}}), \tag{3}\]
where AGG is an aggregation function. We will use GAE to denote this general class of GNN-based multi-node representation learning methods in the following. Two natural questions arise: 1) Is the node representation learned by the GNN a _structural node representation_? 2) Is the multi-node representation aggregated from a set of node representations a _structural representation of the node set_? We answer them respectively in the following.
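As an informal illustration of the GAE-style pipeline in Equation (3), the following self-contained PyTorch sketch uses a toy mean-aggregation GNN as a stand-in for GCN/GIN; the class names, hyperparameters, and toy graph are our own assumptions, not the original GAE code:

```python
import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """A tiny mean-aggregation message-passing GNN (stand-in for any 1-WL-GNN)."""
    def __init__(self, in_dim, hid_dim, layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * layers
        self.lins = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(layers))

    def forward(self, X, A):
        # A: dense n x n adjacency matrix, X: n x in_dim node features.
        deg = A.sum(1, keepdim=True).clamp(min=1)
        for lin in self.lins:
            X = torch.relu(lin((A @ X) / deg + X))
        return X  # n x hid_dim node representations z_i

class GAEMultiNode(nn.Module):
    """z_S = MLP(AGG({z_i | i in S})) with sum aggregation, as in Eq. (3)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gnn = SimpleGNN(in_dim, hid_dim)
        self.mlp = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, X, A, S):
        Z = self.gnn(X, A)              # node representations from the original graph
        z_S = Z[list(S)].sum(dim=0)     # AGG = sum over the target node set S
        return torch.sigmoid(self.mlp(z_S))  # predicted score for the node set S

# Score the node set {0, 1} (a candidate link) in a toy 5-node graph.
A = torch.zeros(5, 5)
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0
X = torch.eye(5)
print(GAEMultiNode(5, 16)(X, A, {0, 1}))
```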
### GNN and structural node representation
Practical GNNs (Gilmer et al., 2017) usually simulate the 1-dimensional Weisfeiler-Lehman (1-WL) test (Weisfeiler and Lehman, 1968) to iteratively update each node's representation by aggregating its neighbors' representations. We use _1-WL-GNN_ to denote a GNN with 1-WL discriminating power, such as GIN (Xu et al., 2018).
A 1-WL-GNN ensures that isomorphic nodes always have the same representation. However, the opposite direction is not guaranteed. For example, a 1-WL-GNN gives the same representation to all nodes in an \(r\)-regular graph, in which non-isomorphic nodes exist. Despite this, 1-WL is known to discriminate almost all non-isomorphic nodes (Babai and Kucera, 1979), which indicates that a 1-WL-GNN can give different representations to **almost all** non-isomorphic nodes.
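For intuition, here is a minimal plain-Python sketch (our own toy implementation) of the 1-WL color refinement that such GNNs simulate; on a regular graph it assigns every node the same color, illustrating the limitation above:

```python
def wl_colors(A, rounds=3):
    """1-WL color refinement: iteratively hash each node's color together
    with the multiset of its neighbors' colors."""
    n = len(A)
    colors = [0] * n                     # non-attributed graph: uniform start
    for _ in range(rounds):
        sigs = [(colors[i], tuple(sorted(colors[j] for j in range(n) if A[i][j])))
                for i in range(n)]
        relabel = {s: c for c, s in enumerate(sorted(set(sigs)))}
        colors = [relabel[s] for s in sigs]
    return colors

# A 6-cycle is 2-regular, so 1-WL gives every node the same color.
A = [[1 if abs(i - j) in (1, 5) else 0 for j in range(6)] for i in range(6)]
print(wl_colors(A))   # -> [0, 0, 0, 0, 0, 0]
```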
To study GNN's maximum expressivity, we define a _node-most-expressive (NME) GNN_, which gives different representations to **all** non-isomorphic nodes.
**Definition 7**: _A GNN is **node-most-expressive (NME)** if \(\forall i,\boldsymbol{\mathsf{A}}\!,\!j,\boldsymbol{\mathsf{A}}^{\prime},\)\(GNN(i,\boldsymbol{\mathsf{A}})=\text{GNN}(j,\boldsymbol{\mathsf{A}}^{\prime}) \Leftrightarrow(i,\boldsymbol{\mathsf{A}})\simeq(j,\boldsymbol{\mathsf{A}}^{ \prime}).\)_
NME GNN learns _structural node representations_1. We define such a GNN because our primary focus is multi-node representation. NME GNN ignores the single-node expressivity limitation and simplifies our analysis.
Footnote 1: Although a polynomial-time implementation is not known for NME GNNs, many practical software packages can discriminate all non-isomorphic nodes quite efficiently (McKay and Piperno, 2014), which provides a promising direction.
### GAE cannot learn structural multi-node representations
Suppose GAE is equipped with an NME GNN producing structural node representations. Then the question becomes: does the aggregation of structural node representations of the target nodes result in a structural representation of the target node set? The answer is no. We have already illustrated this problem in the introduction: In Figure 1(a), we have two isomorphic nodes \(v_{2}\) and \(v_{3}\), and thus \(v_{2}\) and \(v_{3}\) will have the same structural node representation. By aggregating structural node representations, GAE will give \((v_{1},v_{2})\) and \((v_{1},v_{3})\) the same link representation. However, \((v_{1},v_{2})\) and \((v_{1},v_{3})\) are not isomorphic in the graph. Figure 1(b) gives another example on the multi-node case involving more than two nodes. Previous works have similar examples (Srinivasan and Ribeiro, 2020; Zhang and Chen, 2020). All these results indicate that:
**Proposition 8**: _GAE **cannot** learn structural multi-node representations, no matter how expressive the node representations learned by the GNN are._
The root cause of this problem is that the GNN computes node representations independently, without being aware of the other nodes in the target node set \(S\). Thus, even if the GNN learns the most expressive single-node representations, there is no guarantee that their aggregation is a structural representation of a node set. In other words, the multi-node representation learning problem **cannot be broken** into multiple **independent** single-node representation learning problems. We need to consider the **dependency** between the target nodes when computing their single-node representations.
## 4 Labeling trick for set
Starting from a common case in real-world applications, we first describe the multi-node substructure defined by a node set (instead of a poset) in the graph and define _set labeling trick_. The majority of this part is included in our conference paper (Zhang et al., 2021a).
### Definition of set labeling trick
The set labeling trick is defined as follows.
**Definition 9**: _(Set labeling trick) Given a graph \(\mathsf{A}\) and a set \(S\) of nodes in the graph, we stack a labeling tensor \(\mathsf{L}(S,\mathsf{A})\in\mathbb{R}^{n\times n\times d}\) in the third dimension of \(\mathsf{A}\) to get a new \(\mathsf{A}^{(S)}\in\mathbb{R}^{n\times n\times(k+d)}\), where \(\mathsf{L}\) satisfies: \(\forall S,\mathsf{A},S^{\prime},\mathsf{A}^{\prime},\pi\in\Pi_{n}\),_
1. _(target-nodes-distinguishing)_ \(\mathsf{L}(S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime})) \Rightarrow S=\pi(S^{\prime})\)_._
2. _(permutation equivariance)_ \(S=\pi(S^{\prime}),\mathsf{A}=\pi(\mathsf{A}^{\prime})\Rightarrow\mathsf{L}( S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime}))\)_._
To explain a bit, labeling trick assigns a label vector to each node/edge in graph \(\mathsf{A}\), which constitutes the labeling tensor \(\mathsf{L}(S,\mathsf{A})\). By concatenating \(\mathsf{A}\) and \(\mathsf{L}(S,\mathsf{A})\), we get the new labeled graph \(\mathsf{A}^{(S)}\). By definition, we can assign labels to both nodes and edges. However, in this paper, we **consider node labels only** by default for simplicity, i.e., we let the off-diagonal components \(\mathsf{L}(S,\mathsf{A})_{i,j,:},i\neq j\) be all zero.
The labeling tensor \(\mathsf{L}(S,\mathsf{A})\) should satisfy the two properties in Definition 9. Property 1 requires that if a label-preserving permutation \(\pi\) (i.e., \(\mathsf{L}(S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime}))\)) exists between the nodes of \(\mathsf{A}\) and \(\mathsf{A}^{\prime}\), then the nodes in \(S^{\prime}\) must be mapped to nodes in \(S\) by \(\pi\) (i.e., \(S=\pi(S^{\prime})\)). A sufficient condition for property 1 is to give the target nodes \(S\) labels _distinct_ from those of the remaining nodes, so that \(S\) is _distinguishable_ from other node sets. Property 2 requires that when \((S,\mathsf{A})\) and \((S^{\prime},\mathsf{A}^{\prime})\) are isomorphic under \(\pi\) (i.e., \(S=\pi(S^{\prime}),\mathsf{A}=\pi(\mathsf{A}^{\prime})\)), the corresponding nodes \(i\in S,j\in S^{\prime},i=\pi(j)\) must always have the same label (i.e., \(\mathsf{L}(S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime}))\)). A sufficient condition for property 2 is to make the labeling function _permutation equivariant_: when the target \((S,\mathsf{A})\) changes to \((\pi(S),\pi(\mathsf{A}))\), the labeling tensor \(\mathsf{L}(\pi(S),\pi(\mathsf{A}))\) should equivariantly change to \(\pi(\mathsf{L}(S,\mathsf{A}))\).
### How labeling trick works
Obviously, labeling trick puts extra information into the graph, but how exactly this helps remains unclear so far. To give some intuition on how labeling trick boosts graph neural networks, we introduce the simplest labeling trick satisfying the two properties in Definition 9.
**Definition 10**: _(Zero-one labeling trick) Given a graph \(\mathsf{A}\) and a set of nodes \(S\) to predict, we give it a diagonal labeling matrix \(\mathsf{L}_{zo}(S,\mathsf{A})\in\mathbb{R}^{n\times n\times 1}\) such that_
\[\mathsf{L}_{zo}(S,\mathsf{A})_{i,i,1}=\begin{cases}1&\text{if }i\in S\\ 0&\text{otherwise}\end{cases}. \tag{4}\]
In other words, the zero-one labeling trick assigns label 1 to nodes in \(S\) and label 0 to all other nodes in the graph. It is a valid labeling trick because nodes in \(S\) get labels _distinct_ from the others', and the labeling function is _permutation equivariant_ by always giving the nodes in the target node set label 1. These node labels serve as additional node features fed to a GNN together with the original node features.
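Since the (diagonal) labeling tensor amounts to an extra node-feature column, the zero-one labeling trick can be sketched in a few lines of numpy (our own illustration; the helper names are ours):

```python
import numpy as np

def zero_one_label(n: int, S: set) -> np.ndarray:
    """Zero-one labeling trick: label 1 for target nodes, 0 for all others."""
    label = np.zeros((n, 1))
    for i in S:
        label[i, 0] = 1.0
    return label

def labeled_features(X: np.ndarray, S: set) -> np.ndarray:
    """Concatenate the labeling to the node features, giving the GNN input
    corresponding to the labeled graph A^(S)."""
    return np.concatenate([X, zero_one_label(X.shape[0], S)], axis=1)

X = np.ones((5, 3))                      # 5 nodes, 3 original features
X_S = labeled_features(X, S={0, 1})      # predict node set {0, 1}
print(X_S[:, -1])                        # -> [1. 1. 0. 0. 0.]
```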
Let's return to the example in Figure 1(a) to see how the zero-one labeling trick helps GNNs learn better multi-node representations. This time, when we want to predict link \((v_{1},v_{2})\), we will label \(v_{1},v_{2}\) differently from the other nodes, as shown by the distinct colors in Figure 2 left. When computing \(v_{2}\)'s representation, the GNN is also "aware" of the source node \(v_{1}\) with nodes \(v_{1}\) and \(v_{2}\) labeled, rather than treating \(v_{1}\) the same as other nodes. Similarly, when predicting link \((v_{1},v_{3})\), the model will again label \(v_{1},v_{3}\) differently from other nodes, as shown in Figure 2 right. This way, \(v_{2}\) and \(v_{3}\)'s node representations are no longer the same in the two differently labeled graphs (due to the presence of the labeled \(v_{1}\)), and the model can predict \((v_{1},v_{2})\) and \((v_{1},v_{3})\) differently. The key difference between a model with labeling trick and GAE is that the node representations are no longer computed independently, but are _conditioned_ on each other in order to capture the dependence between nodes.
### Expressivity of GNN with labeling trick
We include all proofs in the appendix.
Labeling trick first bridges the gap between whole-graph representation (the focus of graph level GNNs) and node set representations.
**Theorem 11**: _For any node set \(S\) in graph \(\mathbf{A}\) and \(S^{\prime}\) in graph \(\mathbf{A}^{\prime}\), given a set labeling trick, \((S,\mathbf{A})\simeq(S^{\prime},\mathbf{A}^{\prime})\Leftrightarrow\mathbf{A} ^{(S)}\simeq\mathbf{A}^{\prime(S^{\prime})}\)._
Therefore, the problem of graph-level tasks on a labeled graph is equivalent to that of multi-node tasks. However, the complexity of such graph-level GNNs prevents their application. We further want to connect node set representations with node representations. We now introduce our main theorem, showing that with a valid labeling trick, an NME GNN can _learn structural representations of node sets_.
**Theorem 12**: _Given an NME GNN and an injective set aggregation function AGG, for any \(S,\mathbf{A},S^{\prime},\mathbf{A}^{\prime}\), \(\text{GNN}(S,\mathbf{A}^{(S)})=\text{GNN}(S^{\prime},\mathbf{A}^{\prime(S^{ \prime})})\Leftrightarrow(S,\mathbf{A})\simeq(S^{\prime},\mathbf{A}^{\prime})\), where \(\text{GNN}(S,\mathbf{A}^{(S)}):=\text{AGG}(\{\text{GNN}(i,\mathbf{A}^{(S)})|i \in S\})\)._
Remember that directly aggregating the structural node representations learned from the original graph \(\mathbf{A}\) does not lead to structural representations of node sets (Section 3.3). In contrast, Theorem 12 shows that aggregating the structural node representations learned from the **labeled** graph \(\mathbf{A}^{(S)}\), somewhat surprisingly, results in a structural representation for \((S,\mathbf{A})\).
The significance of Theorem 12 is that it closes the gap between the nature of GNNs for single-node representations and the requirement of multi-node representations for node set prediction problems. Although GNNs alone have severe limitations for multi-node representations, GNNs + labeling trick can learn structural representations of node sets by aggregating structural node representations obtained in the labeled graph.
Theorem 12 assumes an NME GNN. To augment Theorem 12, we give the following theorems, which demonstrate the power of labeling trick for 1-WL-GNNs on link prediction.
**Theorem 13**: _Given an \(h\)-layer 1-WL-GNN, in any non-attributed graph with \(n\) nodes, if the degree of each node in the graph is between \(1\) and \(\big{(}(1-\epsilon)\log n\big{)}^{1/(2h+2)}\) for any constant
Figure 2: When predicting \((v_{1},v_{2})\), we will label these two nodes differently from the rest so that a GNN is aware of the target link when learning \(v_{1}\) and \(v_{2}\)’s representations. Similarly, when predicting \((v_{1},v_{3})\), nodes \(v_{1}\) and \(v_{3}\) will be labeled differently. This way, the representation of \(v_{2}\) in the left graph will be different from that of \(v_{3}\) in the right graph, enabling GNNs to distinguish the non-isomorphic links \((v_{1},v_{2})\) and \((v_{1},v_{3})\).
\(\epsilon>0\), there exist \(\omega(n^{2\epsilon})\) pairs of non-isomorphic links \((u,w),(v,w)\) such that 1-WL-GNN gives \(u,v\) the same representation, while 1-WL-GNN + zero-one labeling trick gives \(u,v\) different representations._
Theorem 13 shows that in any non-attributed graph there exist a large number (\(\omega(n^{2\epsilon})\)) of link pairs (like the examples \((v_{1},v_{2})\) and \((v_{1},v_{3})\) in Figure 1(a)) which are not distinguishable by 1-WL-GNNs alone but are distinguishable by 1-WL-GNNs + labeling trick. This means labeling trick can boost the expressive power of 1-WL-GNNs on link prediction tasks.
How labeling trick boosts link prediction can also be shown from another perspective: 1-WL-GNN + zero-one labeling trick can **learn various link prediction heuristics** while vanilla 1-WL-GNN cannot.
**Proposition 14**: _Given a link prediction heuristic of the following form,_
\[h(\{i,j\},\mathsf{A})=f\Big(\big\{\sum_{v\in N(i)}g_{2}(\text{deg}(v,\mathsf{A})),\sum_{v\in N(j)}g_{2}(\text{deg}(v,\mathsf{A}))\big\},\sum_{v\in N(i)\cap N(j)}g_{1}(\text{deg}(v,\mathsf{A}))\Big), \tag{5}\]
_where \(\mbox{deg}(v,\mathbf{\mathsf{A}})\) is the degree of node \(v\) in graph \(\mathsf{A}\), \(g_{1},g_{2}\) are positive functions, and \(f\) is injective w.r.t. the second input with the first input fixed. There exists a 1-WL-GNN + zero-one labeling trick implementing \(h\). In contrast, 1-WL-GNN cannot implement \(h\)._
The \(h\) function defined in the above proposition covers many widely-used and time-tested link prediction heuristics, such as common neighbors (CN) (Barabasi and Albert, 1999), resource allocation (RA) (Zhou et al., 2009), and Adamic-Adar (AA) (Adamic and Adar, 2003). These important structural features for link prediction are not learnable by vanilla GNNs but become learnable if we augment 1-WL-GNNs with even the simple zero-one labeling trick.
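For concreteness, the following numpy sketch (our own illustration) computes CN, AA, and RA for a candidate link; the guard against degree-1 common neighbors in AA is our own choice to avoid division by \(\log 1=0\):

```python
import numpy as np

def heuristics(A: np.ndarray, i: int, j: int):
    """Common Neighbors, Adamic-Adar and Resource Allocation for link (i, j).

    All three are instances of the h function in Proposition 14 with
    g1(d) = 1, 1/log(d), 1/d respectively (and f returning its second input).
    """
    deg = A.sum(1)
    common = np.flatnonzero(A[i] * A[j])            # N(i) ∩ N(j)
    cn = len(common)
    aa = sum(1.0 / np.log(deg[v]) for v in common if deg[v] > 1)
    ra = sum(1.0 / deg[v] for v in common)
    return cn, aa, ra

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(heuristics(A, 0, 3))   # nodes 1 and 2 are common neighbors of 0 and 3
```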
Labeling trick can also boost graph neural networks in subgraph tasks with more than two nodes. The following theorem from Wang and Zhang (2022) illustrates it.
**Theorem 15**: _Given an \(h\)-layer 1-WL-GNN, in any non-attributed graph with \(n\) nodes, if the degree of each node in the graph is between \(1\) and \(\big{(}(1-\epsilon)\log n\big{)}^{1/(2h+2)}\) for any constant \(\epsilon>0\), there exist \(\omega(2^{n}n^{2\epsilon-1})\) pairs of non-isomorphic subgraphs such that 1-WL-GNN produces the same representation for both subgraphs in each pair, while 1-WL-GNN + labeling trick can distinguish them._
Theorem 15 extends Theorem 13 to more than 2 nodes. It shows that an even larger number of node set pairs need labeling tricks to help 1-WL-GNNs differentiate them.
### Complexity
Despite the expressive power, labeling trick may introduce extra computational complexity. The reason is that for every node set \(S\) to predict, we need to relabel the graph \(\mathsf{A}\) according to \(S\) and compute a new set of node representations within the labeled graph. In contrast, GAE-type methods compute node representations only in the original graph.
Let \(m\) denote the number of edges, \(n\) denote the number of nodes, and \(q\) denote the number of target node sets to predict. As node labels are usually produced by some fast non-parametric method, we neglect the overhead for computing node labels. Then we compare the inference complexity of GAE and GNN with labeling trick. For small graphs, GAE-type methods can compute all node representations first and then predict multiple node
sets at the same time, which saves a significant amount of time. In this case, GAE's time complexity is \(O(m+n+q)\), while GNN with labeling trick takes up to \(O(q(m+n))\) time. However, for large graphs that cannot fit into GPU memory, extracting a neighborhood subgraph for each node set to predict has to be used by both GAE-type methods and labeling trick, resulting in similar computation cost \(O(q(n_{s}+m_{s}))\), where \(n_{s},m_{s}\) are the average numbers of nodes and edges in the extracted subgraphs.
## 5 Labeling trick for poset
The previous section describes multi-node substructures \((S,\mathbf{\mathsf{A}})\) defined by a node set \(S\), which assumes that nodes in \(S\) have no order relation. However, this assumption may lose critical information in real-world tasks. For example, the citing and cited articles should be differentiated in citation graphs. As shown in Figure 3, set labeling trick cannot discriminate link direction: it gives the two directed links the same representation, yet the two directed links are obviously non-isomorphic. Therefore, introducing an order relation into the node set is necessary for substructures with internal relations. In this section, we use posets to define multi-node substructures and extend set labeling trick to _poset labeling trick_. Note that node order is only additionally introduced for \(S\), because the graph \(\mathsf{A}\) already allows directed edges in our definition.
**Definition 16**: _(Poset labeling trick) Given a graph \(\mathsf{A}\) and a poset \(S\) of nodes in it, we stack a labeling tensor \(\mathsf{L}(S,\mathsf{A})\in\mathbb{R}^{n\times n\times d}\) in the third dimension of \(\mathsf{A}\) to get a new \(\mathsf{A}^{(S)}\in\mathbb{R}^{n\times n\times(k+d)}\), where \(\mathsf{L}\) satisfies: for all poset \(S\) of nodes in graph \(\mathsf{A}\), poset \(S^{\prime}\) of nodes in graph \(\mathsf{A}^{\prime}\), and \(\pi\in\Pi_{n}\),_
1. _(target-nodes-and-order-distinguishing)_ \(\mathsf{L}(S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime}))\Rightarrow S=\pi(S^{\prime})\)_._
2. _(permutation equivariance)_ \(S=\pi(S^{\prime}),\mathsf{A}=\pi(\mathsf{A}^{\prime})\Rightarrow\mathsf{L}( S,\mathsf{A})=\pi(\mathsf{L}(S^{\prime},\mathsf{A}^{\prime}))\)_._
The definition of poset labeling trick is nearly the same as that of set labeling trick, except that we require permutation of posets and poset isomorphism (Definitions 3 and 4). Poset labeling trick still assigns a label vector to each node/edge in graph \(\mathsf{A}\). The labels distinguish the substructure from other parts of the graph and keep permutation equivariance. As we will show, poset labeling trick enables maximum expressivity for poset learning. Below we first discuss how to design poset labeling tricks that satisfy the two above properties.
### Poset labeling trick design
To describe general partial order relations between nodes in a poset, we introduce _Hasse diagram_, a graph that uniquely determines the partial order relation.
**Definition 17**: _The Hasse diagram of a poset \(S=(U,\leq_{S})\), denoted as \(\mathcal{H}_{S}\), is a directed graph \((V_{H},E_{H})\), \(V_{H}=U\), \(E_{H}=\{(u,v)\ |\ v\neq u\text{ and }v\text{ covers }u\}\), where \(v\) covers \(u\) means that \(u\leq_{S}v\) and there exists no \(w\in U,w\notin\{u,v\}\), \(u\leq_{S}w\) and \(w\leq_{S}v\)._
Figure 3: Set labeling trick cannot differentiate directed links between the same nodes.
Figure 4 shows some examples of Hasse diagrams. The reason we use the Hasse diagram to encode the partial order relation is that, as we prove, any poset labeling trick satisfying Definition 16 must give non-isomorphic nodes in a Hasse diagram different labels.
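To make the covers relation of Definition 17 concrete, here is a small Python sketch (our own illustration) that extracts the Hasse diagram edges from an explicitly given partial order:

```python
def hasse_edges(U, leq):
    """Edges of the Hasse diagram of poset (U, leq): pairs (u, v) with v
    covering u, i.e., u <= v, u != v, and no w strictly between them."""
    edges = []
    for u in U:
        for v in U:
            if u == v or (u, v) not in leq:
                continue
            # Skip (u, v) if some w lies strictly between u and v.
            if any(w not in (u, v) and (u, w) in leq and (w, v) in leq for w in U):
                continue
            edges.append((u, v))
    return edges

# The chain a <= b <= c (reflexive pairs plus the implied (a, c)).
U = {"a", "b", "c"}
leq = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "c"), ("a", "c")}
print(sorted(hasse_edges(U, leq)))   # -> [('a', 'b'), ('b', 'c')]
```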
**Proposition 18**: _Let \(\mathbf{L}\) be the labeling function of a poset labeling trick. If \(\exists\pi\in\Pi_{n},\mathbf{L}(S,\mathbf{A})=\pi(\mathbf{L}(S^{\prime},\mathbf{ A}^{\prime}))\), then for all \(v^{\prime}\in S^{\prime}\), \(\pi(v^{\prime})\) is in \(S\), and \((\{v^{\prime}\},\mathcal{H}_{S^{\prime}})\simeq(\{\pi(v^{\prime})\},\mathcal{H }_{S})\). Furthermore, in the same \(\mathcal{H}_{S}\), non-isomorphic nodes must have different labels._
Proposition 18 shows that a valid poset labeling trick should differentiate non-isomorphic nodes in a Hasse diagram. Theoretically, we can run an NME GNN on the Hasse diagram so that the node embeddings can serve the purpose. Such a poset labeling trick is defined as follows.
**Definition 19**: _Given an NME GNN, Hasse embedding labeling trick is_
\[\mathbf{L}(S,\mathbf{A})_{u,u,:}=\begin{cases}\text{sigmoid}(\text{GNN}(u, \mathcal{H}_{S}))&\text{if }u\in S\\ 0&\text{otherwise}\end{cases} \tag{6}\]
This labeling trick fulfills the two requirements in Definition 16. Note that the _sigmoid_ prevents the GNN from producing zero embeddings, which would lose the power to distinguish target from non-target nodes. Hasse embedding labeling trick is similar to the zero-one labeling trick for sets in Definition 10: it assigns nodes outside the target poset the same label and distinguishes nodes inside based on their isomorphism class in the Hasse diagram, whereas the zero-one labeling trick does not differentiate nodes inside the poset.
The above poset labeling trick works on posets with arbitrarily complex partial orders, at the cost of first running an NME GNN on the Hasse diagram. However, in most real-world tasks, differentiating non-isomorphic nodes in Hasse diagrams is quite easy. For example, in the directed link prediction task, the target posets all have the same simple Hasse diagram: only two roles exist in the poset, the source node and the target node of the link, as shown in Figure 4(a). Then we can assign a unique color to each equivalence class of isomorphic nodes in the Hasse diagram as the node labels, e.g., giving 1 to the source node, 2 to the target node, and 0 to all other nodes in directed link prediction.
However, in some cases, we need to learn representations for posets with different Hasse diagrams. The Hasse embedding labeling trick still works for these cases yet is difficult to implement. Can we design some simpler labeling tricks for some special cases of poset representation learning problems where the posets are not restricted to have the same Hasse diagram? Two cases are discussed in the following.
**Linear Order Set.** A linear order set is a poset in which every pair of nodes is comparable, so the Hasse diagram is a chain, as shown in Figure 4(b). Therefore, \(S\) can be sorted as \(u_{1}\leq_{S}u_{2}\leq_{S}u_{3}\leq_{S}...\leq_{S}u_{k}\), where \(S=(U,\leq_{S}),U=\{u_{1},u_{2},...,u_{k}\}\). Then we can assign \(u_{i}\) label \(i\) and give nodes outside \(S\) label 0. Such a labeling trick is a valid poset labeling trick and can be used to learn paths of different lengths.
**Nearly Linear Order Set.** A nearly linear order set means there exists a partition of \(S\), \(\{S_{1},S_{2},...,S_{l}\}\), such that \(\leq_{S}=\bigcup_{i=1}^{l-1}S_{i}\times S_{i+1}\). As shown in Figure 4(c), the Hasse diagram is nearly a chain whose nodes are replaced with sets of nodes with no internal relations. We can assign nodes in \(S_{i}\) label \(i\) and give nodes outside \(S\) label 0, which is still a valid poset labeling trick. A nearly linear order set can describe, for example, a group in an institution, with the leader at the top.
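A minimal numpy sketch of these two labeling schemes (function names and toy examples are ours) could look as follows; note that a directed link is simply a length-2 chain:

```python
import numpy as np

def linear_order_label(n: int, chain: list) -> np.ndarray:
    """Poset labeling for a linear order u_1 <= u_2 <= ...:
    node u_i gets label i, nodes outside S get label 0."""
    label = np.zeros((n, 1))
    for i, u in enumerate(chain, start=1):
        label[u, 0] = i
    return label

def nearly_linear_order_label(n: int, partition: list) -> np.ndarray:
    """Poset labeling for a nearly linear order: every node in the
    i-th block S_i of the partition gets label i, others get 0."""
    label = np.zeros((n, 1))
    for i, block in enumerate(partition, start=1):
        for u in block:
            label[u, 0] = i
    return label

# A directed link (source 2, target 5) is the chain 2 <= 5.
print(linear_order_label(6, chain=[2, 5]).ravel())       # [0. 0. 1. 0. 0. 2.]
# A two-level hierarchy: leader {0} above members {3, 4}.
print(nearly_linear_order_label(6, partition=[{0}, {3, 4}]).ravel())
```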
### Poset labeling trick expressivity
We first show that poset labeling trick enables maximum expressivity for poset learning.
**Theorem 20**: _For any node poset \(S\) in graph \(\mathsf{A}\) and \(S^{\prime}\) in graph \(\mathsf{A}^{\prime}\), given a poset labeling trick, \((S,\mathsf{A})\simeq(S^{\prime},\mathsf{A}^{\prime})\Leftrightarrow\mathsf{A}^{(S)}\simeq\mathsf{A}^{\prime(S^{\prime})}\)._
Theorem 20 shows that structural poset representation is equivalent to the structural whole graph representation of labeled graph. Poset labeling trick can also bridge the gap between node representations and poset representations.
**Theorem 21**: _Given an NME GNN and an injective aggregation function AGG, for any node posets \(S,S^{\prime}\) in graphs \(\mathsf{A},\mathsf{A}^{\prime}\), \(\mbox{GNN}(S,\mathsf{A}^{(S)})=\mbox{GNN}(S^{\prime},\mathsf{A}^{\prime(S^{\prime})})\Leftrightarrow(S,\mathsf{A})\simeq(S^{\prime},\mathsf{A}^{\prime})\), where \(\mbox{GNN}(S,\mathsf{A}^{(S)})=\mbox{AGG}(\{\mbox{GNN}(u,\mathsf{A}^{(S)})|u\in S\})\)._
Theorem 21 shows that with an NME GNN, poset labeling trick produces structural representations of posets. To augment this theorem, we also discuss 1-WL-GNNs with poset labeling trick. 1-WL-GNNs cannot capture any partial order information and thus cannot differentiate different posets on the same set of nodes. Differentiating posets with different node sets is also hard for 1-WL-GNNs, as they fail to capture relations between nodes. Poset labeling trick can help in both cases.
**Theorem 22**: _In any non-attributed graph with \(n\) nodes, if the degree of each node in the graph is between \(1\) and \(\left((1-\epsilon)\log n\right)^{1/(2h+2)}\) for any constant \(\epsilon>0\), there exist \(\omega(n^{2\epsilon})\) pairs of links and \(\omega((n!)^{2})\) pairs of non-isomorphic node posets such that any \(h\)-layer 1-WL-GNN produces the same representation within each pair, while with Hasse embedding labeling trick a 1-WL-GNN can distinguish them._
Theorem 22 illustrates that poset labeling trick can help 1-WL-GNNs distinguish significantly more pairs of node posets.
## 6 Subset labeling trick for multi-node representation learning
Besides set labeling trick, there exist other methods that append extra features to the adjacency tensor to boost GNNs. Among them, ID-GNN (You et al., 2021) and NBFNet (Zhu et al., 2021) assign special features to only one node in the target node set and also achieve outstanding performance. In this section, we propose subset labeling trick, which, as its name implies, assigns labels only to a subset of nodes in the target node set. We compare set labeling trick with subset labeling trick in different problem settings; in some cases, subset labeling trick is even more expressive than set labeling trick.
### Subset labeling trick
**Definition 23**: _(subset labeling trick) Given set \(S\) in graph \(\mathsf{A}\) and its subset \(P\subseteq S\), we stack a labeling tensor \(\mathsf{L}(P,\mathsf{A})\in\mathbb{R}^{n\times n\times d}\) in the third dimension of \(\mathsf{A}\) to get a new \(\mathsf{A}^{(P)}\in\mathbb{R}^{n\times n\times(k+d)}\), where \(\mathsf{L}\) satisfies: \(\forall S,\mathsf{A},S^{\prime},\mathsf{A}^{\prime},P\subseteq S,P^{\prime} \subseteq S^{\prime},\pi\in\Pi_{n}\),_
1. _(target-subset-distinguishing)_ \(\mathsf{L}(P,\mathsf{A})=\pi(\mathsf{L}(P^{\prime},\mathsf{A}^{\prime})) \Rightarrow P=\pi(P^{\prime})\)_._
2. _(permutation equivariance)_ \(P=\pi(P^{\prime}),\mathsf{A}=\pi(\mathsf{A}^{\prime})\Rightarrow\mathsf{L}(P,\mathsf{A})=\pi(\mathsf{L}(P^{\prime},\mathsf{A}^{\prime}))\)_._
Like set labeling trick, subset labeling trick distinguishes the selected subset in the target set and keeps permutation equivariance. However, it does not need to distinguish all target nodes. Subset(\(k\)) labeling trick means the subset size is \(k\).
The subset zero-one labeling trick is the simplest subset labeling trick fulfilling the requirements in Definition 23.
**Definition 24**: _(Subset zero-one labeling trick) Given a graph \(\mathsf{A}\), a set of nodes \(S\) to predict, and a subset \(P\subseteq S\), we give it a diagonal labeling matrix \(\mathsf{L}(P,\mathsf{A})\in\mathbb{R}^{n\times n\times 1}\) such that \(\mathsf{L}(P,\mathsf{A})_{i,i,1}=1\) if \(i\in P\) and \(\mathsf{L}(P,\mathsf{A})_{i,i,1}=0\) otherwise._
To explain a bit, the subset zero-one labeling trick assigns label 1 to nodes in the selected subset \(P\), and label 0 to all nodes not in \(P\). It only contains the subset identity information.
Then a natural problem arises: how to select subset \(P\) from the target node set \(S\)? Motivated by previous methods, we propose two different routines: subset-pooling and one-head.
### How to select subset
**Subset pooling.** ID-GNN (You et al., 2021) proposes a GNN for node set learning. For each node in the target node set, it labels that node one and all other nodes zero. Then, it uses a 1-WL-GNN to produce the representation of the node. By pooling all these node representations, ID-GNN produces the node set representation. As isomorphic node sets can have different embeddings due to different subset selections, randomly choosing only one node can break permutation equivariance, but pooling the representations over all subset selections eliminates the non-determinism caused by selection and solves this problem. Generalizing this method, we propose the _subset pooling routine_. Subset(\(k\)) pooling enumerates all size-\(k\) subsets and then pools their embeddings:
\[\text{AGG}(\{\text{GNN}(S,\mathsf{A}^{(P)})|P\subseteq S,|P|=k\}), \tag{7}\]
where AGG is an injective set aggregation function.
As for all \(\pi\in\Pi_{n}\) and target node set \(S\) in graph \(\mathsf{A}\),
\[\text{AGG}(\{\text{GNN}(S,\mathsf{A}^{(P)})\,|\,P\subseteq S,|P|=k\})=\text{AGG}(\{\text{GNN}(\pi(S),\pi(\mathsf{A})^{(P)})\,|\,P\subseteq\pi(S),|P|=k\}), \tag{8}\]
the subset pooling routine keeps permutation equivariance.
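As an informal sketch of Equation (7) with subset zero-one labels, the following numpy code enumerates all size-\(k\) subsets and pools the resulting embeddings; the one-layer `gnn_set_embedding` is a toy stand-in for \(\text{AGG}(\{\text{GNN}(i,\mathsf{A}^{(P)})\,|\,i\in S\})\), and the final sum pooling is chosen for simplicity rather than injectivity:

```python
from itertools import combinations
import numpy as np

def gnn_set_embedding(X_labeled, A, S):
    """Toy stand-in for AGG({GNN(i, A^(P)) | i in S}): one mean-aggregation
    layer followed by a sum pool over the target node set."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    Z = np.tanh(A @ X_labeled / deg + X_labeled)
    return Z[list(S)].sum(0)

def subset_pooling(X, A, S, k=1):
    """Eq. (7): pool GNN(S, A^(P)) over all size-k subsets P of S."""
    embs = []
    for P in combinations(sorted(S), k):
        label = np.zeros((X.shape[0], 1))
        label[list(P), 0] = 1.0                     # subset zero-one label
        embs.append(gnn_set_embedding(np.hstack([X, label]), A, S))
    return np.sum(embs, axis=0)                     # sum pooling, demo only

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.eye(3)
print(subset_pooling(X, A, S={0, 1}, k=1))
```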
**One head routine.** Contrary to the subset pooling routine, the link prediction model NBFNet (Zhu et al., 2021) labels only one head of the link. This design breaks permutation equivariance but improves scalability. We propose the _one head routine_ to generalize this method to
general node set tasks. It selects only one subset to label. Some policies are shown in the following.
**Selection Policies**
_Random Selection._ For a target set, we can select a subset of it randomly. For example, we can randomly choose one head of each target edge in the link prediction task.
_Graph Structural Selection._ We can select a node with maximum degree in the target node set. Note that it cannot keep permutation equivariance either.
_Partial Order Relation Selection._ If the least element exists in a poset, we can choose it as the subset. For example, in the directed link prediction task, the source node of each link can be the subset. This method keeps permutation equivariance.
**Complexity**
The efficiency gain of subset labeling trick over set labeling trick comes from sharing results across target node sets. A GNN with set labeling trick has to compute the representations of each target node set separately: by the target-nodes-distinguishing property, no labeling can remain unchanged across different target node sets, so the input adjacency changes and the node representations have to be recomputed by the GNN for each target set.
In contrast, GNN with subset labeling trick can compute the representations of multiple node sets with the same selected subset simultaneously. The subset label is only a function of the selected subset and the graph, so we can maintain the subset label for different target node sets by choosing the same subset. For example, in link prediction task, all links originating from a node share this same source node. By choosing the source node as the subset, these links have the same label and input adjacency to GNN, so the node representations produced by the GNN can be reused. This routine is especially efficient in the knowledge graph completion setting, where a query involves predicting all possible tail entities connected from a head entity with a certain relation.
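The reuse argument can be sketched as follows (our own toy illustration): labeling the shared source node once, a single GNN pass yields node representations that score every link originating from that source:

```python
import numpy as np

def node_reps_with_source_label(X, A, source):
    """One GNN pass on the graph labeled by the single source node.
    The resulting node representations are shared by every link (source, t)."""
    label = np.zeros((X.shape[0], 1))
    label[source, 0] = 1.0                # subset zero-one label, |P| = 1
    H = np.hstack([X, label])
    deg = A.sum(1, keepdims=True).clip(min=1)
    return np.tanh(A @ H / deg + H)       # toy stand-in for a 1-WL-GNN

A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
X = np.eye(4)
Z = node_reps_with_source_label(X, A, source=0)   # computed once
scores = Z @ Z[0]                                 # scores of all links (0, t) at once
print(scores)
```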
### Expressivity
When the subset size \(k\) equals the target node set size \(|S|\), subset labeling trick is equivalent to set labeling trick. More interestingly, when \(k=|S|-1\), subset labeling trick with the subset pooling routine achieves the same power as set labeling trick.
**Theorem 25**: _Given an NME GNN, for any graph \(\mathbf{A},\mathbf{A}^{\prime}\), and node sets \(S,S^{\prime}\) in \(\mathbf{A},\mathbf{A}^{\prime}\) respectively, we have_
\[\text{AGG}(\{\text{GNN}(S,\mathbf{A}^{(P)})\,|\,P\subseteq S,|P|=|S|-1\})=\text{AGG}(\{\text{GNN}(S^{\prime},\mathbf{A}^{\prime(P^{\prime})})\,|\,P^{\prime}\subseteq S^{\prime},|P^{\prime}|=|S^{\prime}|-1\})\\ \Leftrightarrow(S,\mathbf{A})\simeq(S^{\prime},\mathbf{A}^{\prime}). \tag{9}\]
**Theorem 26**: _Given an NME GNN, for any graph \(\mathbf{A},\mathbf{A}^{\prime}\), and node sets \(S,S^{\prime}\) in \(\mathbf{A},\mathbf{A}^{\prime}\) respectively, we have_
\[(S,\mathbf{A})\not\simeq(S^{\prime},\mathbf{A}^{\prime})\Rightarrow\forall P\subseteq S,P^{\prime}\subseteq S^{\prime},|P|=|S|-1,|P^{\prime}|=|S^{\prime}|-1,\\ \text{GNN}(S,\mathbf{A}^{(P)})\neq\text{GNN}(S^{\prime},\mathbf{A}^{\prime(P^{\prime})}). \tag{10}\]
Though one-head routine may produce different representations for isomorphic sets, the above theorem shows that it maintains the capacity to differentiate non-isomorphic sets.
For larger target node sets, the subset(\(|S|-1\)) labeling trick is of little use, as the \((|S|-1)\)-subset labeling can hardly be reused by other target sets. We therefore focus on the expressivity of subset(1) labeling trick, since it is much more common for target node sets to share a single node than to share an \((|S|-1)\)-node subset.
When using NME GNN, according to Theorem 12, set labeling trick leads to the highest expressivity. The problem left is whether subset(1) labeling trick can help NME GNN produce structural representations.
**Proposition 27**: _Given an NME GNN, there exist pairs of a set \(S\) in graph \(\mathbf{A}\) and a set \(S^{\prime}\) in graph \(\mathbf{A}^{\prime}\) such that \(\text{AGG}(\{\text{GNN}(u,\mathbf{A}^{(u)})|u\in S\})=\text{AGG}(\{\text{GNN}(u^{\prime},\mathbf{A}^{\prime(u^{\prime})})|u^{\prime}\in S^{\prime}\})\) while \((S,\mathbf{A})\!\not\simeq\!(S^{\prime},\mathbf{A}^{\prime})\)._
Proposition 27 shows that with an NME GNN, subset(1) labeling trick cannot learn structural representations and is less expressive than set labeling trick. However, with 1-WL-GNNs, the expressivity of subset(1) labeling trick is incomparable to that of set labeling trick: there exist non-isomorphic node sets which are distinguishable by subset(1) labeling trick but indistinguishable by set labeling trick, and vice versa.
**Proposition 28**: _Given a 1-WL-GNN, there exist \(S,\mathbf{A},S^{\prime},\mathbf{A}^{\prime}\) such that \((S,\mathbf{A})\not\simeq(S^{\prime},\mathbf{A}^{\prime})\), \(\text{AGG}(\{\text{GNN}(u,\mathbf{A}^{(u)})|u\in S\})\neq\text{AGG}(\{\text{GNN}(u^{\prime},\mathbf{A}^{\prime(u^{\prime})})|u^{\prime}\in S^{\prime}\})\) while \(\text{GNN}(S,\mathbf{A}^{(S)})=\text{GNN}(S^{\prime},\mathbf{A}^{\prime(S^{\prime})})\). There also exist \(S,\mathbf{A},S^{\prime},\mathbf{A}^{\prime}\) such that \((S,\mathbf{A})\not\simeq(S^{\prime},\mathbf{A}^{\prime})\), \(\text{AGG}(\{\text{GNN}(u,\mathbf{A}^{(u)})|u\in S\})=\text{AGG}(\{\text{GNN}(u^{\prime},\mathbf{A}^{\prime(u^{\prime})})|u^{\prime}\in S^{\prime}\})\) while \(\text{GNN}(S,\mathbf{A}^{(S)})\neq\text{GNN}(S^{\prime},\mathbf{A}^{\prime(S^{\prime})})\)._
Moreover, 1-WL-GNN with subset(1) labeling trick can differentiate many pairs of node sets that 1-WL-GNN alone cannot, as shown in the following theorem.
**Theorem 29**: _In any non-attributed graph with \(n\) nodes, if the degree of each node in the graph is between \(1\) and \(\big{(}(1-\epsilon)\log n\big{)}^{1/(2h+2)}\) for any constant \(\epsilon>0\), there exist \(\omega(n^{2\epsilon})\) pairs of links and \(\omega(2^{n}n^{3\epsilon-1})\) pairs of non-isomorphic node sets such that any \(h\)-layer 1-WL-GNN produces the same representation within each pair, while with subset(1) labeling trick a 1-WL-GNN can distinguish them._
#### 6.3.1 Why does subset labeling trick outperform set labeling trick in some cases?
In this section, we take a closer look at some special cases and give some intuition about subset labeling trick and set labeling trick. An NME GNN is too expressive to expose the weaknesses of set labeling trick, so we focus on 1-WL-GNNs.
Subset labeling trick helps differentiate nodes that receive the same label under set labeling trick. Taking the two graphs in Figure 5 as an example, the target set is the whole graph. With zero-one labeling trick, 1-WL-GNN cannot differentiate them, as all nodes in the two graphs have the same rooted subtree (see Figure 5a). However, subset zero-one labeling trick can solve this problem: the rooted subtree in the first graph always contains a node with label 1, whereas in the second graph the rooted subtree may contain no labeled node, leading to different 1-WL-GNN embeddings.
The drawback of subset labeling trick is that it captures only pairwise relations and loses high-order relations. As shown in Figure 6, the two target node sets (each containing three nodes) are non-isomorphic, but every node pair from the first set is isomorphic to a node pair from the second set. This difference is also reflected in the rooted subtrees of the target nodes (see the bottom of Figure 6), where set labeling trick (Figure 6a) can differentiate \(v\) while subset(1) labeling trick (Figure 6b) cannot.
## 7 Labeling trick for hypergraph
Graphs are appropriate for describing bilateral relations between entities. However, high-order relations among several entities are also worth studying (Agarwal et al., 2006). A hypergraph,
Figure 5: An example of when subset labeling trick differentiates two node sets, while set labeling trick does not. First row: labeled graphs. Second row: rooted subtrees of \(v\).
Figure 6: An example of when subset labeling trick fails to differentiate two node sets while set labeling trick does. First row: labeled graphs. Second row: rooted subtrees of \(v\).
composed of nodes and hyperedges, can model such high-order relations naturally. In this section, we study multi-node representation learning in hypergraphs.
We consider a hypergraph \(H:=(V,E,\mathbf{H},\mathbf{\mathsf{X}}^{V},\mathbf{\mathsf{X}}^{E})\), where \(V\) is the node set \(\{1,2,...,n\}\), \(E\) is the hyperedge set \(\{1,2,...,m\}\), and \(\mathbf{H}\in\{0,1\}^{n\times m}\) is the incidence matrix with \(\mathbf{H}_{i,j}=1\) if node \(i\) is in hyperedge \(j\) and \(0\) otherwise. Each hyperedge contains at least one node. \(\mathbf{\mathsf{X}}^{V}\in\mathbb{R}^{n\times d}\) and \(\mathbf{\mathsf{X}}^{E}\in\mathbb{R}^{m\times d}\) are node and hyperedge features, respectively, where \(\mathbf{\mathsf{X}}^{V}_{i,:}\) is the feature vector of node \(i\) and \(\mathbf{\mathsf{X}}^{E}_{j,:}\) is that of hyperedge \(j\).
We define a hypergraph permutation \(\pi=(\pi_{1},\pi_{2})\in\Pi_{n}\times\Pi_{m}\). Its action on a hypergraph \(H=(V,E,\mathbf{H},\mathbf{\mathsf{X}}^{V},\mathbf{\mathsf{X}}^{E})\) is \(\pi(H)=(\pi_{1}(V),\pi_{2}(E),\pi(\mathbf{H}),\pi_{1}(\mathbf{\mathsf{X}}^{V}),\pi_{2} (\mathbf{\mathsf{X}}^{E}))\), where incidence matrix permutation is \(\pi(\mathbf{H})_{\pi_{1}(i),\pi_{2}(j)}=\mathbf{H}_{i,j}\).
The isomorphism and poset isomorphism of hypergraph are defined as follows.
**Definition 30**: _Hypergraphs \(H,H^{\prime}\) are isomorphic iff there exists \(\pi\in\Pi_{n}\times\Pi_{m}\), \(\pi(H)=H^{\prime}\). Given node posets \(S\) in \(H\) and \(S^{\prime}\) in \(H^{\prime}\), \((S,H),(S^{\prime},H^{\prime})\) are isomorphic iff there exists \(\pi=(\pi_{1},\pi_{2})\in\Pi_{n}\times\Pi_{m}\), \((\pi_{1}(S),\pi(H))=(S^{\prime},H^{\prime})\)._
We could define labeling trick for hypergraphs from scratch, similarly to that for graphs. However, converting the hypergraph problem to a graph problem is more convenient. We formalize the known conversion (Bretto, 2013) as follows.
**Definition 31**: _(Incidence graph) Given a hypergraph \(H=(V,E,\mathbf{H},\mathbf{\mathsf{X}}^{V},\mathbf{\mathsf{X}}^{E})\), \(V=\{1,2,...,n\}\), \(E=\{1,2,...,m\}\), \(\mathbf{H}\in\{0,1\}^{n\times m}\), \(\mathbf{\mathsf{X}}^{V}\in\mathbb{R}^{n\times d}\), \(\mathbf{\mathsf{X}}^{E}\in\mathbb{R}^{m\times d}\), its incidence graph is \(IG_{H}=(V_{H},E_{H},\mathbf{\mathsf{A}})\), where the node set \(V_{H}=\{1,2,...,n,n+1,...,n+m\}\), the edge set \(E_{H}=\{(i,n+j)|i\in V,j\in E,\mathbf{H}_{i,j}=1\}\), and the adjacency tensor \(\mathbf{\mathsf{A}}\in\mathbb{R}^{(n+m)\times(n+m)\times(d+1)}\). For all \(i\in V,j\in E\), \(\mathbf{\mathsf{A}}_{i,i,:d}=\mathbf{\mathsf{X}}^{V}_{i,:}\), \(\mathbf{\mathsf{A}}_{n+j,n+j,:d}=\mathbf{\mathsf{X}}^{E}_{j,:}\), \(\mathbf{\mathsf{A}}_{i,n+j,d+1}=\mathbf{H}_{i,j}\). All other elements in \(\mathbf{\mathsf{A}}\) are \(0\)._
The incidence graph \(IG_{H}\) considers \(H\)'s nodes and hyperedges both as its nodes. Two nodes in \(IG_{H}\) are connected iff one is a node and the other is a hyperedge containing it in \(H\).
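A minimal numpy sketch of this conversion, restricted to the graph structure (features would go into the tensor \(\mathsf{A}\) as in Definition 31), might look as follows; the function name is ours:

```python
import numpy as np

def incidence_graph_adjacency(H: np.ndarray) -> np.ndarray:
    """Adjacency matrix of the incidence graph IG_H: hypergraph node i is
    connected to hyperedge node n + j iff H[i, j] = 1."""
    n, m = H.shape
    A = np.zeros((n + m, n + m))
    A[:n, n:] = H
    A[n:, :n] = H.T      # undirected bipartite graph
    return A

# Hypergraph with 4 nodes and 2 hyperedges {0, 1, 2} and {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
print(incidence_graph_adjacency(H))
```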
The incidence graph contains all information in the hypergraph. Hypergraph's isomorphism and poset isomorphism are equivalent to the isomorphism and poset isomorphism in the corresponding incidence graphs.
**Theorem 32**: _Given node posets \(S\) in hypergraph \(H\), \(S^{\prime}\) in hypergraph \(H^{\prime}\), \((S,H)\simeq(S^{\prime},H^{\prime})\) iff \((S,IG_{H})\simeq(S^{\prime},IG_{H^{\prime}})\)._
Therefore, a hypergraph task can be converted to a graph task. Labeling tricks can be extended to hypergraphs by applying them to the corresponding incidence graph.
**Corollary 33**: _Given an NME GNN, and an injective aggregation function AGG, for any \(S,H,S^{\prime},H^{\prime}\), let \(\mathbf{\mathsf{A}},\mathbf{\mathsf{A}}^{\prime}\) denote the adjacency tensors of graphs \(IG_{H},IG_{H^{\prime}}\) respectively. Then \(\text{GNN}(S,\mathbf{\mathsf{A}}^{(S)})=\text{GNN}(S^{\prime},\mathbf{\mathsf{A}}^{ \prime(S^{\prime})})\Leftrightarrow(S,H)\!\simeq\!(S^{\prime},H^{\prime})\)._
With an NME GNN, set labeling trick can still produce structural representations on hypergraphs. This enables us to boost representation power on hyperedge prediction tasks.
## 8 Related work
There has been growing interest recently in studying graph neural networks' expressivity. Xu et al. (2018) and Morris et al. (2019) first showed that the 1-WL test bounds the discriminating power of GNNs performing neighbor aggregation. Many works have since been proposed to increase the power of GNNs, by simulating higher-order WL tests (Morris et al., 2019; Maron et al., 2019; Chen et al., 2019; Azizian and Lelarge, 2021), approximating permutation equivariant functions (Maron et al., 2019; Geerts, 2020; Puny et al., 2022; Chen et al., 2020), encoding subgraphs (Frasca et al., 2022; Zhang and Li, 2021; Feng et al., 2022), utilizing graph spectral features (Kreuzer et al., 2021; Lim et al., 2022), etc. However, most previous works focus on improving GNNs' whole-graph representation power; little work has been done to analyze GNNs' substructure representation power. Srinivasan and Ribeiro (2020) first formally studied the difference between structural representations of nodes and links. Although showing that structural node representations of GNNs cannot perform link prediction, their way to learn structural link representations is to give up GNNs and instead use Monte Carlo samples of node embeddings learned by network embedding methods. In this paper, we show that GNNs combined with labeling tricks can also learn structural link representations, which justifies using GNNs for link prediction.
Many works have implicitly assumed that if a model can learn node representations well, then combining the pairwise node representations can also lead to good node set (for example link) representations (Grover and Leskovec, 2016; Kipf and Welling, 2016; Hamilton et al., 2017). However, we argue in this paper that simply aggregating node representations fails to discriminate a large number of non-isomorphic node sets (links), and with labeling trick the aggregation of structural node representations leads to structural representations.
Li et al. (2020) proposed distance encoding (DE), whose implementations based on \(S\)-discriminating distances can be shown to be specific labeling tricks. You et al. (2019) also noticed that structural node representations of GNNs cannot capture the dependence (in particular distance) between nodes. To learn position-aware node embeddings, they propose P-GNN, which randomly chooses some anchor nodes and aggregates messages only from the anchor nodes. In P-GNN, nodes with similar distances to the anchor nodes, instead of nodes with similar neighborhoods, have similar embeddings. Thus, P-GNN cannot learn structural node/link representations. P-GNN also cannot scale to large datasets.
Finally, although labeling trick was formally defined in our conference paper (Zhang et al., 2021), various forms of specific labeling tricks had already been used in previous works. To the best of our knowledge, SEAL (Zhang and Chen, 2018) proposed the first labeling trick, designed to improve GNNs' link prediction power. It was later adopted in inductive knowledge graph completion (Teru et al., 2020) and matrix completion (Zhang and Chen, 2020), and was generalized into DE (Li et al., 2020) and GLASS (Wang and Zhang, 2022), which work for \(|S|>2\) cases. Wan et al. (2021) use labeling trick for hyperedge prediction. Besides these set labeling tricks, labeling methods similar to the subset labeling trick also exist: ID-GNN (You et al., 2021) and NBFNet (Zhu et al., 2021) both use a mechanism equivalent to the one-head routine of subset labeling trick.
## 9 Experiments
Our experiments include various multi-node representation learning tasks: undirected link prediction, directed link prediction, hyperedge prediction, and subgraph prediction. Labeling tricks boost GNNs on all these tasks. For all metrics in this section, higher is better. Datasets are detailed in Appendix C.
### Undirected link prediction
In this section, we use a two-node task, link prediction, to empirically validate the effectiveness of the set and subset labeling tricks.
Following the setting in SEAL (Zhang and Chen, 2018), we use eight datasets: USAir, NS, PB, Yeast, C.ele, Power, Router, and E.coli. These datasets are relatively small, so we additionally use four large datasets from Open Graph Benchmark (OGB) (Hu et al., 2020): ogbl-ppa, ogbl-collab, ogbl-ddi, and ogbl-citation2. To facilitate comparison, we use the same metrics as previous works, including auroc, Hits@\(K\), and MRR.
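For reference, the two ranking metrics can be computed as in the following sketch (a minimal implementation of ours; in practice the official OGB evaluators should be used):

```python
def hits_at_k(pos_scores, neg_scores, k):
    """Fraction of positive edges scored above the k-th highest negative."""
    threshold = sorted(neg_scores, reverse=True)[k - 1]
    return sum(s > threshold for s in pos_scores) / len(pos_scores)

def mrr(pos_scores, neg_scores_per_pos):
    """Mean reciprocal rank of each positive edge among its own negatives."""
    rr = []
    for pos, negs in zip(pos_scores, neg_scores_per_pos):
        rank = 1 + sum(n >= pos for n in negs)
        rr.append(1.0 / rank)
    return sum(rr) / len(rr)

print(hits_at_k([0.9, 0.4], [0.8, 0.5, 0.3], k=2))  # 0.5
print(mrr([0.9], [[0.95, 0.1, 0.2]]))               # 0.5
```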
We use the following baselines for comparison. We use 4 non-GNN methods: CN (Common-Neighbor), AA (Adamic-Adar), MF (matrix factorization), and Node2vec (NV) (Grover and Leskovec, 2016). CN and AA are two simple link prediction heuristics based on counting common neighbors. MF uses free-parameter node embeddings trained end-to-end as the node representations. Two set labeling trick methods are used: ZO and SEAL. ZO uses the zero-one labeling trick, and SEAL uses the DRNL labeling trick (Zhang and Chen, 2018). Three subset labeling trick methods are compared: the subset zero-one labeling trick with subset pooling (SZO), the subset distance encoding labeling trick with subset pooling (SDE), and the subset zero-one labeling trick with the one-head routine (OSZO).
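The two heuristics admit one-line implementations, sketched below (our own illustration; a guard would be needed for degree-one common neighbors in Adamic-Adar, and the toy graph is ours):

```python
import math

def common_neighbors(adj, u, v):
    """CN heuristic: number of shared neighbors of u and v."""
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    """AA heuristic: shared neighbors weighted by inverse log-degree."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(common_neighbors(adj, 1, 3))  # 2 (nodes 0 and 2)
print(adamic_adar(adj, 1, 3))       # 1/log(3) + 1/log(3) ~= 1.82
```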
**Results and discussion.** We present the main results in Table 1. Compared with all non-GNN methods, the vanilla 1-WL-GNN with no labeling trick (NO) gets lower auroc on almost all datasets. However, with a set or subset labeling trick, the 1-WL-GNN can outperform the baselines on almost all datasets. ZO and SEAL use set labeling tricks and outperform non-GNN methods by 4% and 9% on average, respectively. The performance difference between ZO and SEAL illustrates that the labeling trick implementation can still affect the expressivity of the 1-WL-GNN; nevertheless, even the simplest labeling trick still boosts the 1-WL-GNN by 6%. The subset(1) labeling tricks SZO and SDE also achieve 9% and 11% score increases on average. Compared with ZO, though SZO also uses only the target set identity information, it distinguishes nodes in the target node set and achieves a 5% performance increase on average, which verifies the usefulness of the subset labeling trick. Last but not least, though the subset labeling trick with the one-head routine (OSZO) loses permutation invariance compared with the subset pooling routine (SZO), it still achieves outstanding performance and even outperforms SZO on 4/8 datasets.
We also conduct experiments on some larger datasets as shown in Table 2. GNN augmented by labeling tricks achieves the best performance on all datasets.
### Directed link prediction tasks
To illustrate the necessity of introducing partial order to the labeling trick, we compare the set labeling trick and the poset labeling trick on the directed link prediction task. Following previous work (He et al., 2022), we use six directed graph datasets, namely Cornell, Texas, Wisconsin, CoraML, Citeseer, and Telegram. Our baselines include previous state-of-the-art GNNs for directed graphs: DGCN (Tong et al., 2020), DiGCN and DiGCNIB (Tong et al., 2020), and MagNet (Zhang et al., 2021). Our models include NO (vanilla 1-WL-GNN), PL (poset labeling trick, which labels the source node as 1, the target node as 2, and all other nodes as 0), and ZO (zero-one labeling trick).
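The two labeling schemes differ only in whether the two endpoints receive distinct labels; a minimal sketch (ours) of the initial node labels they produce for a queried directed pair:

```python
def poset_labels(n, src, dst):
    """PL: source gets label 1, target label 2, all other nodes 0,
    so the direction of the queried link is visible to the GNN."""
    labels = [0] * n
    labels[src], labels[dst] = 1, 2
    return labels

def zero_one_labels(n, src, dst):
    """ZO: both endpoints get label 1; direction information is lost."""
    labels = [0] * n
    labels[src] = labels[dst] = 1
    return labels

print(poset_labels(5, 0, 3))     # [1, 0, 0, 2, 0]
print(zero_one_labels(5, 0, 3))  # [1, 0, 0, 1, 0]
# PL distinguishes the ordered pairs (0, 3) and (3, 0); ZO does not.
```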
The results are shown in Table 3. The existing state-of-the-art method MagNet (Zhang et al., 2021) outperforms the plain 1-WL-GNN by only 0.25% on average, whereas the 1-WL-GNN with a labeling trick outperforms all baselines. Moreover, the poset labeling trick (PL) achieves a 2% performance gain over the set labeling trick (ZO). These results validate the power of the poset labeling trick and show that modeling partial order relations is critical for some tasks.
### Hyperedge prediction task
We use the datasets and baselines in (Srinivasan et al., 2021). Our datasets include two drug networks (NDC-c, NDC-s), two forum networks (tags-m, tags-a), two email networks (email-En, email-Eu), and a network of congress members (congress). We use four GNNs designed for hypergraphs as baselines: ceGCN, ceSAGE, seRGCN, and FS (family set) (Srinivasan et al., 2021).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & USAir & NS & PB & Yeast & C.ele & Power & Router & E.coli \\ \hline CN & \(93.80_{\pm 1.22}\) & \(94.42_{\pm 0.95}\) & \(92.04_{\pm 0.35}\) & \(89.37_{\pm 0.61}\) & \(85.13_{\pm 1.61}\) & \(58.80_{\pm 0.88}\) & \(56.43_{\pm 0.52}\) & \(93.71_{\pm 0.39}\) \\ AA & \(95.06_{\pm 1.03}\) & \(94.45_{\pm 0.93}\) & \(92.36_{\pm 0.34}\) & \(89.43_{\pm 0.62}\) & \(86.95_{\pm 1.40}\) & \(58.79_{\pm 0.88}\) & \(56.43_{\pm 0.51}\) & \(95.36_{\pm 0.34}\) \\ NV & \(91.44_{\pm 1.78}\) & \(91.52_{\pm 1.28}\) & \(85.79_{\pm 0.78}\) & \(93.67_{\pm 0.46}\) & \(84.11_{\pm 1.27}\) & \(76.22_{\pm 0.92}\) & \(65.46_{\pm 0.86}\) & \(90.82_{\pm 1.49}\) \\ MF & \(94.08_{\pm 0.80}\) & \(74.55_{\pm 3.44}\) & \(94.30_{\pm 0.53}\) & \(90.28_{\pm 0.69}\) & \(85.90_{\pm 1.74}\) & \(50.63_{\pm 1.10}\) & \(78.03_{\pm 1.63}\) & \(93.76_{\pm 0.56}\) \\ \hline NO & \(89.04_{\pm 2.14}\) & \(74.10_{\pm 2.62}\) & \(90.87_{\pm 0.56}\) & \(83.04_{\pm 0.93}\) & \(73.25_{\pm 1.67}\) & \(65.89_{\pm 1.65}\) & \(92.47_{\pm 0.76}\) & \(93.27_{\pm 0.49}\) \\ \hline ZO & \(94.08_{\pm 1.43}\) & \(95.60_{\pm 0.93}\) & \(91.82_{\pm 1.26}\) & \(94.69_{\pm 0.45}\) & \(74.94_{\pm 2.01}\) & \(73.85_{\pm 1.37}\) & \(93.21_{\pm 0.66}\) & \(92.09_{\pm 0.67}\) \\ SEAL & \(\mathbf{97.09_{\pm 0.70}}\) & \(97.71_{\pm 0.93}\) & \(\mathbf{95.0_{\pm 0.34}}\) & \(97.20_{\pm 0.64}\) & \(86.54_{\pm 2.04}\) & \(84.18_{\pm 1.82}\) & \(95.68_{\pm 1.22}\) & \(97.22_{\pm 0.28}\) \\ \hline SZO & \(96.15_{\pm 1.06}\) & \(98.10_{\pm 0.67}\) & \(94.15_{\pm 0.50}\) & \(97.41_{\pm 0.37}\) & \(86.31_{\pm 1.80}\) & \(78.31_{\pm 0.91}\) & \(94.52_{\pm 0.72}\) & \(97.48_{\pm 0.23}\) \\ SDE & \(94.97_{\pm 0.61}\) & \(\mathbf{99.29_{\pm 0.14}}\) & \(94.44_{\pm 0.52}\) & \(\mathbf{98.17_{\pm 0.41}}\) & \(85.95_{\pm 0.36}\) & \(\mathbf{94.16_{\pm 0.14}}\) & \(\mathbf{99.33_{\pm 0.09}}\) & \(\mathbf{98.91_{\pm 0.08}}\) \\ OSZO & \(94.62_{\pm 0.63}\) & \(97.42_{\pm 0.49}\) & \(94.36_{\pm 0.26}\) & \(97.46_{\pm 0.06}\) & \(\mathbf{88.04_{\pm 0.52}}\) & \(84.95_{\pm 0.30}\) & \(93.77_{\pm 0.20}\) & \(95.53_{\pm 0.62}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on undirected link prediction task: auroc (%) \(\pm\) standard deviation.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Dataset & collab & ddi & citation2 & ppa \\ \hline metrics & Hits@50 & Hits@20 & MRR & Hits@100 \\ \hline SZO & \(54.69_{\pm 0.51}\) & \(29.27_{\pm 0.53}\) & \(82.45_{\pm 0.62}\) & \(36.04_{\pm 4.50}\) \\ ZO & \(53.29_{\pm 0.23}\) & \(23.90_{\pm 0.75}\) & \(78.50_{\pm 1.08}\) & \(37.75_{\pm 3.42}\) \\ SEAL & \(\mathbf{54.71_{\pm 0.49}}\) & \(30.56_{\pm 3.86}\) & \(\mathbf{87.67_{\pm 0.32}}\) & \(\mathbf{48.80_{\pm 3.16}}\) \\ NO & \(44.75_{\pm 1.07}\) & \(37.07_{\pm 5.07}\) & \(84.74_{\pm 0.21}\) & \(18.67_{\pm 1.32}\) \\ OSZO & \(49.17_{\pm 3.29}\) & \(\mathbf{41.24_{\pm 1.49}}\) & \(82.85_{\pm 0.43}\) & \(43.27_{\pm 1.19}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on undirected link prediction task on the four OGB datasets.
Our models include ZO (zero-one labeling trick), SZO (subset(1) labeling trick with subset pooling), and NO (vanilla 1-WL-GNN). As shown in Table 4, ZO and SZO outperform all other methods significantly.
### Subgraph prediction task
We use the datasets and baselines in (Alsentzer et al., 2020). We use three synthetic datasets, namely density, coreness, and cut ratio. SubGNN (Alsentzer et al., 2020) and Sub2Vec (Adhikari et al., 2018) are models designed for subgraphs. Our models include ZO (zero-one labeling trick), SZO (subset(1) labeling trick with subset pooling), and NO (vanilla 1-WL-GNN). As shown in Table 5, labeling tricks boost the vanilla 1-WL-GNN significantly: ZO improves
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & NDC-c & NDC-s & tags-m & tags-a & email-Enemail-EU & congress \\ \hline ceGCN & \(61.4_{\pm 0.5}\) & \(42.1_{\pm 1.4}\) & \(59.9_{\pm 0.9}\) & \(54.5_{\pm 0.5}\) & \(61.8_{\pm 3.2}\) & \(66.4_{\pm 0.3}\) & \(41.2_{\pm 0.3}\) \\ ceSAGE & \(65.7_{\pm 2.0}\) & \(47.9_{\pm 0.7}\) & \(63.5_{\pm 0.3}\) & \(59.7_{\pm 0.7}\) & \(59.4_{\pm 4.6}\) & \(65.1_{\pm 1.9}\) & \(53.0_{\pm 5.5}\) \\ seRGCN & \(67.6_{\pm 4.9}\) & \(52.5_{\pm 0.6}\) & \(57.2_{\pm 0.3}\) & \(54.5_{\pm 0.6}\) & \(59.9_{\pm 4.0}\) & \(66.1_{\pm 0.6}\) & \(54.4_{\pm 0.4}\) \\ FS & \(76.8_{\pm 0.4}\) & \(51.2_{\pm 3.2}\) & \(64.2_{\pm 0.6}\) & \(60.5_{\pm 0.2}\) & \(68.5_{\pm 1.6}\) & \(68.7_{\pm 0.2}\) & \(56.6_{\pm 1.1}\) \\ \hline No & \(60.2_{\pm 2.3}\) & \(45.6_{\pm 0.8}\) & \(56.6_{\pm 1.4}\) & \(56.5_{\pm 1.8}\) & \(56.9_{\pm 1.7}\) & \(57.2_{\pm 0.9}\) & \(54.1_{\pm 0.5}\) \\ \hline ZO & \(\mathbf{82.5_{\pm 1.3}}\mathbf{63.6_{\pm 1.5}}\mathbf{71.4_{\pm 0.5}}\mathbf{70.4_{ \pm 0.8}}\) & \(66.1_{\pm 1.2}\) & \(72.1_{\pm 1.1}\) & \(\mathbf{65.1_{\pm 0.2}}\) \\ SZO & \(75.8_{\pm 0.7}\) & \(62.2_{\pm 1.2}\) & \(71.0_{\pm 0.4}\) & \(69.6_{\pm 0.7}\) & \(\mathbf{67.7_{\pm 1.8}}\) & \(\mathbf{73.3_{\pm 0.5}}\) & \(64.2_{\pm 0.3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on hyperedge prediction tasks: f1-score (%) \(\pm\) standard deviation.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & density & coreness & cutratio \\ \hline ZO & \(\mathbf{98.4_{\pm 1.2}}\) & \(\mathbf{87.3_{\pm 15.0}}\) & \(\mathbf{93.0_{\pm 1.3}}\) \\ SZO & \(94.3_{\pm 6.9}\) & \(75.8_{\pm 7.0}\) & \(85.6_{\pm 2.5}\) \\ NO & \(47.8_{\pm 2.9}\) & \(47.8_{\pm 5.3}\) & \(81.4_{\pm 1.5}\) \\ SubGNN & \(91.9_{\pm 0.6}\) & \(65.9_{\pm 3.1}\) & \(62.9_{\pm 1.3}\) \\ Sub2Vec & \(45.9_{\pm 1.2}\) & \(36.0_{\pm 1.9}\) & \(35.4_{\pm 1.4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on subgraph tasks: f1-score (%) \(\pm\) standard deviation.
the score by 34% and SZO achieves a 26% performance gain. Moreover, the vanilla GNN augmented by a labeling trick outperforms GNNs designed for subgraphs on all datasets. Finally, ZO outperforms SZO, which illustrates that the subset(1) labeling trick captures pairwise relations, while ZO can better capture high-order relations, as shown in Section 6.3.1.
## 10 Conclusions
In this paper, we proposed a theory of using GNNs for multi-node representation learning. We first pointed out the key limitation of a common practice in previous works that directly aggregates node representations as a node-set representation. To address the problem, we proposed the set labeling trick, which gives target nodes distinct labels in a permutation equivariant way, and characterized its expressive power. We further extended the set labeling trick to poset and subset labeling tricks, and extended the theory from graphs to hypergraphs. Our theory thoroughly discusses different variants and scenarios of using labeling tricks to boost vanilla GNNs, and provides a solid foundation for future researchers to develop novel labeling tricks.
#### Acknowledgments
M. Zhang is supported by the NSF China (No.62276003). P. Li is supported by the National Science Foundation (NSF) award OAC-2117997.
## Appendix A Proofs
### Proof of Theorem 12
We restate Theorem 12: Given an NME GNN and an injective set aggregation function \(\operatorname{AGG}\), for any \(S,\boldsymbol{\mathsf{A}},S^{\prime},\boldsymbol{\mathsf{A}}^{\prime}\), \(\operatorname{GNN}(S,\boldsymbol{\mathsf{A}}^{(S)})=\operatorname{GNN}(S^{ \prime},\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})\Leftrightarrow(S, \boldsymbol{\mathsf{A}})\simeq(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})\), where \(\operatorname{GNN}(S,\boldsymbol{\mathsf{A}}^{(S)}):=\operatorname{AGG}( \{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}}^{(S)})|i\in S\})\).
We need to show \(\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}}^{(S)})|i \in S\})=\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}}^{ \prime(S^{\prime})}|i\in S^{\prime}\})\Leftrightarrow(S,\boldsymbol{\mathsf{ A}})\simeq(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})\).
To prove \(\Rightarrow\), we notice that with an injective \(\operatorname{AGG}\),
\[\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}}^{(S)})|i\in S\})=\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})|i\in S^{\prime}\})\] \[\implies\exists\ v_{1}\in S,v_{2}\in S^{\prime},\ \text{such that}\ \operatorname{GNN}(v_{1},\boldsymbol{\mathsf{A}}^{(S)})=\operatorname{GNN}(v_{2},\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})}) \tag{11}\] \[\implies(v_{1},\boldsymbol{\mathsf{A}}^{(S)})\simeq(v_{2},\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})\quad(\text{because GNN is node-most-expressive}) \tag{12}\] \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ v_{1}=\pi(v_{2}),\ \boldsymbol{\mathsf{A}}^{(S)}=\pi(\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})}). \tag{13}\]
Remember \(\boldsymbol{\mathsf{A}}^{(S)}\) is constructed by stacking \(\boldsymbol{\mathsf{A}}\) and \(\boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})\) in the third dimension, where \(\boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})\) is a tensor satisfying: \(\forall\pi\in\Pi_{n},\ (1)\ \boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi( \boldsymbol{\mathsf{L}}(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})) \Rightarrow S=\pi(S^{\prime})\), and (2) \(S=\pi(S^{\prime}),\boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{ \prime})\Rightarrow\boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi( \boldsymbol{\mathsf{L}}(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime}))\). With \(\boldsymbol{\mathsf{A}}^{(S)}=\pi(\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})\), we have both
\[\boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{\prime}),\ \boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi( \boldsymbol{\mathsf{L}}(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})).\]
Because \(\boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi(\boldsymbol{\mathsf{L} }(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime}))\Rightarrow S=\pi(S^{\prime})\), continuing from Equation (13), we have
\[\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{\mathsf{A}} ^{(S)})|i\in S\})=\operatorname{AGG}(\{\operatorname{GNN}(i,\boldsymbol{ \mathsf{A}}^{\prime(S^{\prime})})|i\in S^{\prime}\})\] \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ \boldsymbol{\mathsf{A}}=\pi( \boldsymbol{\mathsf{A}}^{\prime}),\ \boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi( \boldsymbol{\mathsf{L}}(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime})) \tag{14}\] \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ \boldsymbol{\mathsf{A}}=\pi( \boldsymbol{\mathsf{A}}^{\prime}),\ S=\pi(S^{\prime})\] (15) \[\implies(S,\boldsymbol{\mathsf{A}})\simeq(S^{\prime}, \boldsymbol{\mathsf{A}}^{\prime}). \tag{16}\]
Now we prove \(\Leftarrow\). Because \(S=\pi(S^{\prime}),\boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{ \prime})\Rightarrow\boldsymbol{\mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi( \boldsymbol{\mathsf{L}}(S^{\prime},\boldsymbol{\mathsf{A}}^{\prime}))\), we have:
\[(S,\boldsymbol{\mathsf{A}})\simeq(S^{\prime},\boldsymbol{\mathsf{A}}^{ \prime})\] \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ S=\pi(S^{\prime}), \boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{\prime}) \tag{17}\] \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ S=\pi(S^{\prime}), \boldsymbol{\mathsf{A}}=\pi(\boldsymbol{\mathsf{A}}^{\prime}),\boldsymbol{ \mathsf{L}}(S,\boldsymbol{\mathsf{A}})=\pi(\boldsymbol{\mathsf{L}}(S^{\prime}, \boldsymbol{\mathsf{A}}^{\prime}))\] (18) \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ S=\pi(S^{\prime}), \boldsymbol{\mathsf{A}}^{(S)}=\pi(\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})\] (19) \[\implies\exists\ \pi\in\Pi_{n},\ \text{such that}\ \forall v_{2}\in S^{\prime},v_{1}=\pi(v_{2})\in S, \operatorname{GNN}(v_{1},\boldsymbol{\mathsf{A}}^{(S)})=\operatorname{GNN}(v_{2},\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})\] (20) \[\implies\operatorname{AGG}(\{\operatorname{GNN}(v_{1}, \boldsymbol{\mathsf{A}}^{(S)})|v_{1}\in S\})=\operatorname{AGG}(\{ \operatorname{GNN}(v_{2},\boldsymbol{\mathsf{A}}^{\prime(S^{\prime})})|v_{2} \in S^{\prime}\}), \tag{21}\]
which concludes the proof.
### Proof of Theorem 13 and Theorem 29
As an \(h\)-layer 1-WL-GNN only encodes an \(h\)-hop neighbors for each node, we define locally \(h\)-isomorphism.
**Definition 34**: _For all \(S,\mathbf{\mathsf{A}},S^{\prime},\mathbf{\mathsf{A}}^{\prime}\), \((S,\mathbf{\mathsf{A}})\) and \((S^{\prime},\mathbf{\mathsf{A}}^{\prime})\) are locally \(h\)-isomorphic iff \((S,\mathbf{\mathsf{A}}_{S,h})\simeq(S^{\prime},\mathbf{ \mathsf{A}}_{S^{\prime},h})\), where \(\mathbf{\mathsf{A}}_{S,h}\) means the subgraph of \(\mathsf{A}\) induced by the node set \(\{v\in V|\exists u\in S,d_{\mathit{sp}}(u,v,\mathbf{\mathsf{A}})\leq h\}\), and \(d_{\mathit{sp}}(u,v,\mathbf{\mathsf{A}})\) means the shortest path distance between node \(u,v\) in graph \(\mathsf{A}\)._
We restate Theorem 13 (Theorem 29): In any non-attributed graph with \(n\) nodes, if the degree of each node in the graph is between \(1\) and \(\left((1-\epsilon)\log n\right)^{1/(2h+2)}\) for any constant \(\epsilon>0\), then there exist \(\omega(n^{2\epsilon})\) many pairs of non-isomorphic links \((u,w),(v,w)\) such that an \(h\)-layer 1-WL-GNN gives \(u,v\) the same representation, while with the zero-one labeling trick (subset zero-one labeling trick) the 1-WL-GNN gives \(u,v\) different representations. The two theorems can be proved together because the construction below applies to both.
**Proof**
Our proof has two steps. First, we would like to show that there are \(\omega(n^{\epsilon})\) nodes that are locally \(h\)-isomorphic to each other. Then, we prove that among these nodes, there are at least \(\omega(n^{2\epsilon})\) pairs of nodes such that there exists another node constructing locally \(h\) non-isomorphic links with either of the two nodes in each node pair.
**Step 1.** Consider an arbitrary node \(v\) and denote the node set induced by the nodes that are at most \(h\)-hop away from \(v\) as \(G_{v}^{(h)}\) (the \(h\)-hop enclosing subgraph of \(v\)). As each node is with degree \(d\leq\left((1-\epsilon)\log n\right)^{1/(2h+2)}\), then the number of nodes in \(G_{v}^{(h)}\), denoted by \(|V(G_{v}^{(h)})|\), satisfies
\[|V(G_{v}^{(h)})|\leq\sum_{i=0}^{h}d^{i}\leq d^{h+1}=\left((1-\epsilon)\log n \right)^{1/2}.\]
We set \(K=\max_{v\in V}|V(G_{v}^{(h)})|\) and thus \(K\leq\left((1-\epsilon)\log n\right)^{1/2}\).
Now we expand subgraphs \(G_{v}^{(h)}\) to \(\bar{G}_{v}^{(h)}\) by adding \(K-|V(G_{v}^{(h)})|\) independent nodes for each node \(v\in V\). Then, all \(\bar{G}_{v}^{(h)}\) have the same number of nodes, which is \(K\), though they may not be connected graphs. Next, we consider the number of non-isomorphic graphs over \(K\) nodes. Actually, the number of non-isomorphic graph structures over \(K\) nodes is bounded by
\[2^{\binom{K}{2}}\leq 2^{(1-\epsilon)\log n}=n^{1-\epsilon}. \tag{22}\]
Therefore, due to the pigeonhole principle, there exist \(\omega(n/n^{1-\epsilon})=\omega(n^{\epsilon})\) many nodes \(v\) whose \(\bar{G}_{v}^{(h)}\) are isomorphic to each other. Denote the set of these nodes as \(V_{\mathit{iso}}\), which consist of nodes that are all locally \(h\)-isomorphic to each other.
**Step 2.** Let us partition \(V_{\mathit{iso}}=\cup_{i=1}^{q}V_{i}\) so that for all \(i\in\{1,2,...,q\}\), nodes in \(V_{i}\) share the same first-hop neighbor sets. Then, consider any pair of nodes \(u,v\) such that \(u,v\) are from different \(V_{i}\)'s. Since \(u,v\) share identical \(h\)-hop neighborhood structures, an \(h\)-layer 1-WL-GNN will give them the same representation. Then, we may pick one \(u\)'s first-hop neighbor \(w\) that is not \(v\)'s first-hop neighbor. We know such \(w\) exists because of the definition of \(V_{i}\). As \(w\) is \(u\)'s first-hop neighbor and is not \(v\)'s first-hop neighbor, \((u,w)\) and \((v,w)\) are not isomorphic. With labeling trick, the \(h\)-layer 1-WL-GNN will give \(u,v\) different representations immediately after the first message passing round due to \(w\)'s distinct label. Therefore, we know such a \((u,w),(v,w)\) pair is exactly what we want.
Based on the partition \(V_{iso}\), we know the number of such non-isomorphic link pairs \((u,w)\) and \((v,w)\) is at least:
\[Y\geq\sum_{1\leq i<j\leq q}|V_{i}||V_{j}|=\frac{1}{2}\left[\Big(\sum_{i=1}^{q}|V_{i}|\Big)^{2}-\sum_{i=1}^{q}|V_{i}|^{2}\right]. \tag{23}\]
Because of the definitions of the partition, \(\sum_{i=1}^{q}|V_{i}|=|V_{iso}|=\omega(n^{\epsilon})\) and the size of each \(V_{i}\) satisfies
\[1\leq|V_{i}|\leq d_{w}\leq\left((1-\epsilon)\log n\right)^{1/(2h+2)},\]
where \(w\) is one of the common first-hop neighbors shared by all nodes in \(V_{i}\) and \(d_{w}\) is its degree.
By plugging in the range of \(|V_{i}|\), Equation (23) leads to
\[Y \geq\frac{1}{2}\left[\Big(\sum_{i=1}^{q}|V_{i}|\Big)^{2}-\sum_{i=1}^{q}|V_{i}|\cdot\max_{j\in\{1,2,\ldots,q\}}|V_{j}|\right]\] \[=\frac{1}{2}\left(\omega(n^{2\epsilon})-\omega(n^{\epsilon})\,\mathcal{O}\Big(\big((1-\epsilon)\log n\big)^{1/(2h+2)}\Big)\right)\] \[=\omega(n^{2\epsilon}),\]
which concludes the proof.
### Proof of Theorem 15
**Proof** This proof shares the same first step as Appendix A.2.
**Step 2.** Let us partition \(\mathbb{V}_{iso}=\bigcup_{i=1}^{q}\mathbb{V}_{i}\) so that nodes in each \(\mathbb{V}_{i}\) share the same first-hop neighbor set. Consider two nodes \(u\in\mathbb{V}_{i},v\in\mathbb{V}_{j},i\neq j\). There exists a node \(w\in N(u),w\notin N(v)\). Let \(\tilde{\mathbb{V}}_{u,v,w}\) denote \(\mathbb{V}-\{u,v,w\}-N(u)\); note that \(|\tilde{\mathbb{V}}_{u,v,w}|\geq n-3-\big((1-\epsilon)\log n\big)^{1/(2h+2)}\). Consider an arbitrary subset \(\mathbb{V}^{\prime}\) of \(\tilde{\mathbb{V}}_{u,v,w}\). Let \(\mathcal{S}_{1}\) denote the subgraph induced by \(\mathbb{V}^{\prime}\cup\{u,w\}\) and \(\mathcal{S}_{2}\) the subgraph induced by \(\mathbb{V}^{\prime}\cup\{v,w\}\). The density of \(\mathcal{S}_{1}\) is higher than that of \(\mathcal{S}_{2}\). Since a 1-WL-GNN with the zero-one labeling trick can fit density perfectly (Theorem 1 in (Wang and Zhang, 2022)), it can distinguish \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), while a plain 1-WL-GNN cannot.
The number of pairs \((u,v,w)\) is \(\omega(n^{2\epsilon})\). Therefore, the number of these pairs of subgraphs is at least
\[\omega(n^{2\epsilon})\cdot 2^{n-3-((1-\epsilon)\log n)^{1/(2h+2)}}=\omega(2^{n}n^{3\epsilon-1}). \tag{24}\]
\(\blacksquare\)
### Proof of Theorem 22
This proof shares the same first step as Appendix A.2.
Number of links: the same as Step 2 in Appendix A.2.
Number of subgraphs: similar to Step 2 in Appendix A.3. Let us partition \(\mathbb{V}_{iso}=\bigcup_{i=1}^{q}\mathbb{V}_{i}\) so that nodes in each \(\mathbb{V}_{i}\) share the same first-hop neighbor set. Consider two nodes \(u\in\mathbb{V}_{i},v\in\mathbb{V}_{j},i\neq j\). There exists a node \(w\in N(u),w\notin N(v)\). Let \(\tilde{\mathbb{V}}_{u,v,w}\) denote \(\mathbb{V}-\{u,v,w\}-N(u)\); note that \(|\tilde{\mathbb{V}}_{u,v,w}|\geq n-3-\big((1-\epsilon)\log n\big)^{1/(2h+2)}\). Consider an arbitrary subset \(\mathbb{V}^{\prime}\) of \(\tilde{\mathbb{V}}_{u,v,w}\) and a partial order \(\leq_{\mathbb{V}^{\prime}}\). Let \(\mathcal{S}_{1}\) denote the subgraph induced by the poset \(\big((\mathbb{V}^{\prime}\cup\{u,w\}),\ \leq_{\mathbb{V}^{\prime}}\cup\{(u,a)|a\in\mathbb{V}^{\prime}\}\cup\{(w,a)|a\in\mathbb{V}^{\prime}\cup\{u\}\}\big)\) and \(\mathcal{S}_{2}\) the subgraph induced by the poset \(\big(\mathbb{V}^{\prime}\cup\{v,w\},\ \leq_{\mathbb{V}^{\prime}}\cup\{(v,a)|a\in\mathbb{V}^{\prime}\}\cup\{(w,a)|a\in\mathbb{V}^{\prime}\cup\{v\}\}\big)\). A 1-WL-GNN with the poset labeling trick can distinguish \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), as the edges between \((u,w)\) and \((v,w)\) are distinct, while a plain 1-WL-GNN cannot.
The number of pairs \((u,v,w)\) is \(\omega(n^{2\epsilon})\). Therefore, the number of these pairs of subgraphs is at least
\[\omega(n^{2\epsilon})\cdot\Big(n-3-\big((1-\epsilon)\log n\big)^{1/(2h+2)}\Big)!=\omega\Big(\big((1-\epsilon)n\big)!\Big). \tag{25}\]
### Proof of Proposition 14
As shown in Figure 0(a), a 1-WL-GNN cannot count common neighbors and thus fails to implement \(h\). Now we prove that with the zero-one labeling trick, a 1-WL-GNN can implement \(h\).

Given a graph \(\mathbf{\mathsf{A}}\) and a node pair \((i,j)\), let \(z_{k}^{(t)}\) denote the embedding of node \(k\) at the \(t^{\text{th}}\) message passing layer.
\[z_{k}^{(0)}=\begin{bmatrix}1\\ \delta_{ki}+\delta_{kj}\end{bmatrix}. \tag{26}\]
The first dimension is all 1 (vanilla node feature), and the second dimension is zero-one label.
The first layer is,
\[z_{k}^{(1)}=\begin{bmatrix}g_{1}(a_{k}^{(1)}[1])\\ g_{2}(a_{k}^{(1)}[1])\\ \mathbb{1}[a_{k}^{(1)}[2]\geq 2]\end{bmatrix} \tag{27}\]
where \(a_{k}^{(1)}=\sum_{l\in N(k)}z_{l}^{(0)}\), [1] means the first element of vector, and [2] means the second element.
The second layer is
\[z_{k}^{(2)}=\begin{bmatrix}\sum_{l\in N(k)}z_{l}^{(1)}[3]\,z_{l}^{(1)}[2]\\ \sum_{l\in N(k)}(1-z_{l}^{(1)}[3])\,z_{l}^{(1)}[1]\end{bmatrix} \tag{28}\]
The pooling layer is
\[z_{ij}=f\Big(\{z_{i}^{(2)}[2],z_{j}^{(2)}[2]\},\ \frac{z_{i}^{(2)}[1]+z_{j}^{(2)}[1]}{2}\Big) \tag{29}\]
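The construction above can be traced with a direct, non-neural computation. The sketch below (ours; it mirrors the two labeled message-passing rounds rather than implementing the learned network itself) counts the common neighbors of \((i,j)\) using only zero-one labels and neighbor aggregation:

```python
def count_common_neighbors_via_labels(adj, i, j):
    """Round 1 computes, for each node, how many labeled nodes it sees;
    round 2 sums, over i's neighbors, the indicator of seeing both."""
    n = len(adj)
    label = [1 if k in (i, j) else 0 for k in range(n)]
    # Round 1: aggregate labels from neighbors.
    seen = [sum(label[l] for l in adj[k]) for k in range(n)]
    is_common = [1 if (seen[k] >= 2 and label[k] == 0) else 0
                 for k in range(n)]
    # Round 2: node i aggregates the indicator over its neighborhood.
    return sum(is_common[l] for l in adj[i])

adj = {0: [2, 3], 1: [2, 3], 2: [0, 1], 3: [0, 1]}
print(count_common_neighbors_via_labels(adj, 0, 1))  # 2 (nodes 2 and 3)
```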
### Proof of Theorem 21
**Proof**\(\Leftarrow\): When \((S,\blacktriangle)\simeq(S^{\prime},\blacktriangle^{\prime})\), there exists a permutation \(\pi\), \(\pi(S)=S^{\prime},\pi(\blacktriangle)=\blacktriangle^{\prime}\).
\[\mathrm{GNN}(S,\mathbf{\mathsf{A}}^{(S)}) =\mathrm{AGG}(\{\mathrm{GNN}(v,\mathbf{\mathsf{A}}^{(S)})|v\in S\}) \tag{30}\] \[=\mathrm{AGG}(\{\mathrm{GNN}(\pi(v),\pi(\mathbf{\mathsf{A}}^{(S)}))|v\in S\}) \tag{31}\] \[=\mathrm{AGG}(\{\mathrm{GNN}(\pi(v),\mathbf{\mathsf{A}}^{\prime(S^{\prime})})|v\in S\}) \tag{32}\] \[=\mathrm{AGG}(\{\mathrm{GNN}(v^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime})})|v^{\prime}\in S^{\prime}\}) \tag{33}\] \[=\mathrm{GNN}(S^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime})}) \tag{34}\]
\(\Rightarrow\): Suppose
\[\mathrm{GNN}(S,\mathbf{\mathsf{A}}^{(S)})=\mathrm{GNN}(S^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime})}), \tag{35}\]
which means
\[\mathrm{AGG}(\{\mathrm{GNN}(v,\mathbf{\mathsf{A}}^{(S)})|v\in S\}) =\mathrm{AGG}(\{\mathrm{GNN}(v^{\prime},\mathbf{\mathsf{A}}^{\prime (S^{\prime})})|v^{\prime}\in S^{\prime}\}) \tag{36}\]
As \(\mathrm{AGG}\) is injective, there exist \(v_{0}\in S,v_{0}^{\prime}\in S^{\prime}\) such that
\[\mathrm{GNN}(v_{0},\mathbf{\mathsf{A}}^{(S)})=\mathrm{GNN}(v_{0}^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime})}) \tag{37}\]
As \(\mathrm{GNN}\) is node most expressive,
\[\exists\pi,\pi(v_{0})=v_{0}^{\prime},\pi(\mathbf{\mathsf{A}})=\mathbf{\mathsf{A}}^{\prime},\pi(\mathbf{\mathsf{L}}(S,\mathbf{\mathsf{A}}))=\mathbf{\mathsf{L}}(S^{\prime},\mathbf{\mathsf{A}}^{\prime}).\]
Therefore, \(\pi(\mathbf{\mathsf{L}}(S,\mathbf{\mathsf{A}}))=\mathbf{\mathsf{L}}(S^{\prime},\mathbf{\mathsf{A}}^{\prime})\) implies \(\pi(S)=S^{\prime}\), and together with \(\pi(\mathbf{\mathsf{A}})=\mathbf{\mathsf{A}}^{\prime}\) we obtain \((S,\mathbf{\mathsf{A}})\simeq(S^{\prime},\mathbf{\mathsf{A}}^{\prime})\).
### Proof of Theorem 25
**Proof**\(\Leftarrow\): When \((S,\mathbf{\mathsf{A}})\simeq(S^{\prime},\mathbf{\mathsf{A}}^{\prime})\), there exists a permutation \(\pi\) with \(\pi(S)=S^{\prime},\pi(\mathbf{\mathsf{A}})=\mathbf{\mathsf{A}}^{\prime}\). Then
\[\mathrm{AGG}\left(\left\{\mathrm{AGG}(\{\mathrm{GNN}(u,\mathbf{\mathsf{A}}^{(S-\{v\})})|u\in S\})|v\in S\right\}\right) \tag{38}\] \[=\mathrm{AGG}(\{\mathrm{AGG}(\{\mathrm{GNN}(\pi(u),\pi(\mathbf{\mathsf{A}}^{(S-\{v\})}))|u\in S\})|v\in S\}) \tag{39}\] \[=\mathrm{AGG}(\{\mathrm{AGG}(\{\mathrm{GNN}(\pi(u),\mathbf{\mathsf{A}}^{\prime(\pi(S)-\{\pi(v)\})})|u\in S\})|v\in S\}) \tag{40}\] \[=\mathrm{AGG}(\{\mathrm{AGG}(\{\mathrm{GNN}(u^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime}-\{v^{\prime}\})})|u^{\prime}\in S^{\prime}\})|v^{\prime}\in S^{\prime}\}). \tag{41}\]
\(\Rightarrow\): Suppose
\[\mathrm{AGG}(\{\mathrm{AGG}(\{\mathrm{GNN}(u,\mathbf{\mathsf{A}}^{(S-\{v\})})|u\in S\})|v\in S\})=\mathrm{AGG}(\{\mathrm{AGG}(\{\mathrm{GNN}(u^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime}-\{v^{\prime}\})})|u^{\prime}\in S^{\prime}\})|v^{\prime}\in S^{\prime}\}).\]
As \(\mathrm{AGG}\) is injective,
\[\{\mathrm{AGG}(\{\mathrm{GNN}(u,\mathbf{\mathsf{A}}^{(S-\{v\})})|u\in S\})|v\in S\}=\{\mathrm{AGG}(\{\mathrm{GNN}(u^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime}-\{v^{\prime}\})})|u^{\prime}\in S^{\prime}\})|v^{\prime}\in S^{\prime}\}. \tag{42}\]
There exist \(v_{0}\in S,v_{0}^{\prime}\in S^{\prime}\),
\[\mathrm{AGG}(\{\mathrm{GNN}(u,\mathbf{\mathsf{A}}^{(S-\{v_{0}\})})|u\in S\})=\mathrm{AGG}(\{\mathrm{GNN}(u^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime}-\{v_{0}^{\prime}\})})|u^{\prime}\in S^{\prime}\}). \tag{43}\]
Similarly, there exists \(u_{0}^{\prime}\in S^{\prime}\) such that
\[\mathrm{GNN}(v_{0},\mathbf{\mathsf{A}}^{(S-\{v_{0}\})})=\mathrm{GNN}(u_{0}^{\prime},\mathbf{\mathsf{A}}^{\prime(S^{\prime}-\{v_{0}^{\prime}\})}). \tag{44}\]
As GNN is node most expressive,
\[\exists\pi,\pi(v_{0})=u^{\prime}_{0},\pi(\textbf{A})=\textbf{A}^{\prime},\pi( \textbf{L}(S-\{v_{0}\},\textbf{A}))=\textbf{L}(S^{\prime}-\{v^{\prime}_{0}\}, \textbf{A}^{\prime})).\]
Therefore, \(\pi(S-\{v_{0}\})=S^{\prime}-\{v^{\prime}_{0}\}\). Note that \(v_{0}\notin S-\{v_{0}\}\), so \(u^{\prime}_{0}=\pi(v_{0})\notin S^{\prime}-\{v^{\prime}_{0}\}\), while \(u^{\prime}_{0}\in S^{\prime}\), therefore \(u^{\prime}_{0}=v^{\prime}_{0}\).
Therefore, \(\pi(S)=S^{\prime}\), and \(\pi(\textbf{A})=\textbf{A}^{\prime}\), so \((S,\textbf{A})\simeq(S^{\prime},\textbf{A}^{\prime})\).
### Proof of Theorem 26
We prove it by contradiction. Suppose \((S,\textbf{A})\not\simeq(S^{\prime},\textbf{A}^{\prime})\) but there exist \(v_{0}\in S,v_{0}^{\prime}\in S^{\prime}\) such that
\[\operatorname{GNN}(S,\textbf{A}^{(S-\{v_{0}\})})=\operatorname{GNN}(S^{\prime},\textbf{A}^{\prime(S^{\prime}-\{v_{0}^{\prime}\})}). \tag{45}\]
Then there exists \(u_{0}^{\prime}\in S^{\prime}\) such that
\[\operatorname{GNN}(v_{0},\textbf{A}^{(S-\{v_{0}\})})=\operatorname{GNN}(u_{0}^{\prime},\textbf{A}^{\prime(S^{\prime}-\{v_{0}^{\prime}\})}). \tag{46}\]
As GNN is node most expressive,
\[\exists\pi,\pi(v_{0})=u^{\prime}_{0},\pi(\textbf{A})=\textbf{A}^{\prime},\pi( \textbf{L}(S-\{v_{0}\},\textbf{A}))=\textbf{L}(S^{\prime}-\{v^{\prime}_{0}\},\textbf{A}^{\prime})).\]
Therefore, \(\pi(S-\{v_{0}\})=S^{\prime}-\{v^{\prime}_{0}\}\). Note that \(v_{0}\notin S-\{v_{0}\}\), so \(u^{\prime}_{0}=\pi(v_{0})\notin S^{\prime}-\{v^{\prime}_{0}\}\), while \(u^{\prime}_{0}\in S^{\prime}\), therefore \(u^{\prime}_{0}=v^{\prime}_{0}\).
Therefore, \(\pi(S)=S^{\prime}\) and \(\pi(\textbf{A})=\textbf{A}^{\prime}\), so \((S,\textbf{A})\simeq(S^{\prime},\textbf{A}^{\prime})\), which contradicts \((S,\textbf{A})\not\simeq(S^{\prime},\textbf{A}^{\prime})\).
### Proof of Proposition 18
Due to property 1 in Definition 16, \(\textbf{L}(S,\textbf{A})=\pi(\textbf{L}(S^{\prime},\textbf{A}^{\prime}))\Rightarrow S=\pi(S^{\prime})\). Therefore, for all \(v\in S\), \(\pi^{-1}(v)\in S^{\prime}\). Moreover, \(\forall v^{\prime}\in S^{\prime}\), \(\exists v\in S\) with \(\pi^{-1}(v)=v^{\prime}\).
Consider an edge \((u,v)\) in \(\mathcal{H}_{S}\). According to Definition 17, \(u\neq v\), \(u\leq_{S}v\), and there exists no node \(w\in S\), \(w\notin\{u,v\}\), such that \(u\leq_{S}w\) and \(w\leq_{S}v\). As \(\pi(S^{\prime})=S\), \(\pi^{-1}(u)\neq\pi^{-1}(v)\), \(\pi^{-1}(u)\leq_{S^{\prime}}\pi^{-1}(v)\), and there exists no node \(\pi^{-1}(w)\in S^{\prime}\), \(\pi^{-1}(w)\notin\{\pi^{-1}(u),\pi^{-1}(v)\}\), such that \(\pi^{-1}(u)\leq_{S^{\prime}}\pi^{-1}(w)\) and \(\pi^{-1}(w)\leq_{S^{\prime}}\pi^{-1}(v)\). Therefore, when \(S=\pi(S^{\prime})\), for every edge \((u,v)\) in \(\mathcal{H}_{S}\), the edge \((\pi^{-1}(u),\pi^{-1}(v))\) exists in \(\mathcal{H}_{S^{\prime}}\).
Similarly, as \(S^{\prime}=\pi^{-1}(S)\), for every edge \((\pi^{-1}(u),\pi^{-1}(v))\) in \(\mathcal{H}_{S^{\prime}}\), the edge
\[((\pi^{-1})^{-1}(\pi^{-1}(u)),(\pi^{-1})^{-1}(\pi^{-1}(v)))=(u,v)\]
exists in \(\mathcal{H}_{S}\). So \(\mathcal{H}_{S}=\pi(\mathcal{H}_{S^{\prime}})\). Equivalently, for all \(v\in S^{\prime}\), \(\pi(v)\) is in \(S\), and \((\{v\},\mathcal{H}_{S^{\prime}})\simeq(\{\pi(v)\},\mathcal{H}_{S})\).
Now assume that \(u,v\) are not isomorphic in \(\mathcal{H}_{S}\), but \(\textbf{L}(S,\textbf{A})_{u,u,:}=\textbf{L}(S,\textbf{A})_{v,v,:}\). Define the permutation \(\pi:V\to V\) as follows,
\[\pi(i)=\begin{cases}v&\text{if }i=u\\ u&\text{if }i=v\\ i&\text{otherwise}\end{cases}. \tag{47}\]
Then \(\pi(\textbf{L}(S,\textbf{A}))=\textbf{L}(S,\textbf{A})\Rightarrow\pi(S)=S\Rightarrow(v,\mathcal{H}_{S})\simeq(u,\mathcal{H}_{S})\), which contradicts the assumption. Equivalently, non-isomorphic nodes in the same Hasse diagram must have different labels.
### Proof of Theorem 32
The main gap between hypergraph isomorphism and the corresponding graph isomorphism is that a hypergraph permutation is composed of two permutations transforming node and edge order independently, while the corresponding graph isomorphism involves only one node permutation. We therefore first define ways to combine and split permutations.
Sorting of the corresponding graph: Let \(I_{V}(IG_{H})=\{i|(IG_{H})_{i,i,d+1}=1\}\) denote the nodes in \(IG_{H}\) corresponding to nodes in \(H\), and let \(I_{E}(IG_{H})=\{i|(IG_{H})_{i,i,d+1}=0\}\) denote the nodes representing hypergraph edges. We define a permutation \(\pi^{I_{V},I_{E}}\in\Pi_{n+m}\) such that \(\pi^{I_{V},I_{E}}(I_{V})=[n]\) and \(\pi^{I_{V},I_{E}}(I_{E})=\{n+1,n+2,...,n+m\}\).
Concatenation of permutation: Let \(\pi_{1}\in\Pi_{n},\pi_{2}\in\Pi_{m}\). Their concatenation \(\pi_{1}|\pi_{2}\in\Pi_{m+n}\)
\[\pi_{1}|\pi_{2}(i)=\begin{cases}\pi_{1}(i)&i\leq n\\ n+\pi_{2}(i-n)&\text{otherwise}\end{cases} \tag{48}\]
When \(S_{1},S_{2}\) have different sizes, or \(H_{1}\), \(H_{2}\) have different numbers of nodes or hyperedges, the two posets are non-isomorphic, so we only discuss the case where the poset and hypergraph sizes are the same. Let \(n,m\) denote the numbers of nodes and hyperedges in the hypergraph; the corresponding graph then has \(n+m\) nodes.
We first prove \(\Rightarrow\): When \((S,H)\sim(S^{\prime},H^{\prime})\), according to Definition 30, there exists \(\pi_{1}\in\Pi_{n},\pi_{2}\in\Pi_{m},(\pi_{1},\pi_{2})(H)=H^{\prime},\pi_{1}(S )=S^{\prime}\). Then, \((\pi_{1}|\pi_{2})(IG_{H})=IG_{H^{\prime}}\) and \((\pi_{1}|\pi_{2})(S)=S^{\prime}\).
Then we prove \(\Leftarrow\): Suppose \((S,IG_{H})\simeq(S^{\prime},IG_{H^{\prime}})\). We first sort the two incidence graphs: let \(\pi=\pi^{I_{V}(IG_{H}),I_{E}(IG_{H})}\) and \(\pi^{\prime}=\pi^{I_{V}(IG_{H^{\prime}}),I_{E}(IG_{H^{\prime}})}\). The two posets and graphs remain isomorphic:
\[(\pi(S),\pi(IG_{H}))\simeq(\pi^{\prime}(S^{\prime}),\pi^{\prime}(IG_{H^{ \prime}})) \tag{49}\]
Therefore, \(\exists\pi_{0}\in\Pi_{n+m},\pi(S)=\pi_{0}(\pi^{\prime}(S^{\prime})),\pi(IG_{H} )=\pi_{0}(\pi^{\prime}(IG_{H^{\prime}}))\). Let \(\textbf{A},\textbf{A}^{\prime}\in\mathbb{R}^{(n+m)\times(n+m)\times d+1}\) denote the adjacency tensor of \(\pi(IG_{H}),\pi^{\prime}(IG_{H^{\prime}})\) respectively. Therefore,
\[\textbf{A}=\pi_{0}(\textbf{A}^{\prime})\Rightarrow\textbf{A}_{\pi_{0}(i),\pi_ {0}(i),d+1}=\textbf{A}^{\prime}_{i,i,d+1},\forall i\in\{1,2,...,m+n\}. \tag{50}\]
As the nodes in \(\textbf{A},\textbf{A}^{\prime}\) are sorted, \(\textbf{A}_{i,i,d+1}=1,\textbf{A}^{\prime}_{i,i,d+1}=1\) if \(i\leq n\), and \(\textbf{A}_{i,i,d+1}=0,\textbf{A}^{\prime}_{i,i,d+1}=0\) if \(i>n\). Therefore, \(\pi_{0}\) maps \(\{1,2,...,n\}\) to \(\{1,2,...,n\}\) and \(\{n+1,n+2,...,n+m\}\) to \(\{n+1,n+2,...,n+m\}\), so we can decompose \(\pi_{0}\) into two permutations \(\pi_{1},\pi_{2}\).
\[\pi_{1}(i)=\pi_{0}(i),i\in\{1,2,...,n\} \tag{51}\]
\[\pi_{2}(i)=\pi_{0}(i+n)-n,i\in\{1,2,...,m\} \tag{52}\]
Then, \(S=\pi_{1}(S^{\prime})\) and \(H=(\pi_{1},\pi_{2})(H^{\prime})\).
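For completeness, here is a small sketch (ours, with our own representation choices) of the incidence-graph construction used in this proof: each hyperedge becomes an extra node, and a boolean flag plays the role of the \((d+1)\)-th diagonal channel that separates node indices from hyperedge indices.

```python
def incidence_graph(n, hyperedges):
    """Build the bipartite incidence graph of a hypergraph: indices
    0..n-1 represent hypergraph nodes, indices n..n+m-1 represent
    hyperedges, and a flag marks which side each index belongs to."""
    m = len(hyperedges)
    adj = {k: set() for k in range(n + m)}
    for e, members in enumerate(hyperedges):
        for v in members:
            adj[v].add(n + e)
            adj[n + e].add(v)
    is_node = [True] * n + [False] * m  # role of the (d+1)-th channel
    return adj, is_node

adj, is_node = incidence_graph(4, [{0, 1, 2}, {2, 3}])
print(adj[4])    # {0, 1, 2}: hyperedge 0 connects to its members
print(is_node)   # [True, True, True, True, False, False]
```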
## Appendix B Experimental settings
Computing infrastructure. We leverage PyTorch Geometric and PyTorch for model development. All our models run on an Nvidia 3090 GPU on a Linux server.
Model Implementation. For undirected link prediction tasks, our implementation is based on the code of SEAL (Zhang and Chen, 2018), which segregates an ego subgraph from the whole graph for each link. For the other tasks, our model runs on the whole graph. We use Optuna to perform random search; hyperparameters were selected to optimize scores on the validation sets. We will release the code later.
## Appendix C More details about the datasets
### Undirected Link Prediction
We use eight real-world datasets from SEAL (Zhang and Chen, 2018): USAir is a network of US airlines. NS is a collaboration network of researchers. PB is a network of US political blogs. Power is an electrical grid of the western US. Router is a router-level Internet topology. E.coli is a metabolic network in E. coli. C.ele is a neural network of C. elegans. Yeast is a protein-protein interaction network in yeast.
We also use OGB datasets (Hu et al., 2020): ogbl-ppa, ogbl-collab, ogbl-ddi, and ogbl-citation2. Among them, ogbl-ppa is a protein-protein association graph where the task is to predict biologically meaningful associations between proteins. ogbl-collab is an author collaboration graph, where the task is to predict future collaborations. ogbl-ddi is a drug-drug interaction network, where each edge represents an interaction between drugs which indicates the joint effect of taking the two drugs together is considerably different from their independent effects. ogbl-citation2 is a paper citation network, where the task is to predict missing citations. We present the statistics of these datasets in Table 6. More information about these datasets can be found in (Hu et al., 2020).
### Directed Link Prediction
We use the same settings and datasets as He et al. (2022). The task is to predict whether a directed link exists in a graph. Texas, Wisconsin, and Cornell consider websites as nodes
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **\#Nodes** & **\#Edges** & **Avg. node deg.** & **Split ratio** & **Metric** \\ \hline USAir & 332 & 2,126 & 12.81 & 0.85/0.05/0.10 & auroc \\ NS & 1,589 & 2,742 & 3.45 & 0.85/0.05/0.10 & auroc \\ PB & 1,222 & 16,714 & 27.36 & 0.85/0.05/0.10 & auroc \\ Yeast & 2,375 & 11,693 & 9.85 & 0.85/0.05/0.10 & auroc \\ C.ele & 297 & 2,148 & 14.46 & 0.85/0.05/0.10 & auroc \\ Power & 4,941 & 6,594 & 2.67 & 0.85/0.05/0.10 & auroc \\ Router & 5,022 & 6,258 & 2.49 & 0.85/0.05/0.10 & auroc \\ E.coli & 1,805 & 14,660 & 16.24 & 0.85/0.05/0.10 & auroc \\ ogbl-ppa & 576,289 & 30,326,273 & 105.25 & fixed & Hits@100 \\ ogbl-collab & 235,868 & 1,285,465 & 10.90 & fixed & Hits@50 \\ ogbl-ddi & 4,267 & 1,334,889 & 625.68 & fixed & Hits@20 \\ ogbl-citation2 & 2,927,963 & 30,561,187 & 20.88 & fixed & MRR \\ \hline \hline \end{tabular}
\end{table}
Table 6: Statistics and evaluation metrics of undirected link prediction datasets.
and links between websites as edges. Cora-ML and CiteSeer are citation networks. Telegram is an influence graph between Telegram channels. Their statistics are shown in Table 7.
### Hyperedge prediction datasets
We use the datasets and baselines in (Srinivasan et al., 2021). NDC-c (NDC-classes) and NDC-s (NDC-substances) are both drug networks. NDC-c takes each class label as a node and the set of labels applied to a drug as a hyperedge. NDC-s takes substances as nodes and the set of substances contained in a drug as a hyperedge. Tags-m (tags-math-sx) and tags-a (tags-ask-ubuntu) are from online Stack Exchange forums, where nodes are tags and hyperedges are sets of tags for the same questions. Email-En (email-Enron) and email-Eu are two email networks where each node is an email address and each hyperedge is the set of all addresses on an email. Congress (congress-bills) takes Congress members as nodes, and each hyperedge corresponds to the set of members in a committee or cosponsoring a bill. Their statistics are shown in Table 8.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **\#Nodes** & **\#Edges** & **Avg. node deg.** & **Split ratio** & **Metric** \\ \hline wisconsin & 251 & 515 & 4.10 & 0.80/0.05/0.15 & accuracy \\ cornell & 183 & 298 & 3.26 & 0.80/0.05/0.15 & accuracy \\ texas & 183 & 325 & 3.55 & 0.80/0.05/0.15 & accuracy \\ cora\_ml & 2,995 & 8,416 & 5.62 & 0.80/0.05/0.15 & accuracy \\ telegram & 245 & 8,912 & 72.75 & 0.80/0.05/0.15 & accuracy \\ citeseer & 3,312 & 4,715 & 2.85 & 0.80/0.05/0.15 & accuracy \\ \hline \hline \end{tabular}
\end{table}
Table 7: Statistics and evaluation metrics of directed link prediction datasets.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\#Nodes** & **\#Hyperedges** & **Split ratio** & **Metric** \\ \hline NDC-c & 6,402 & 1,048 & 5-fold & f1-score \\ NDC-s & 49,886 & 6,265 & 5-fold & f1-score \\ tags-m & 497,129 & 145,054 & 5-fold & f1-score \\ tags-a & 591,904 & 169,260 & 5-fold & f1-score \\ email-En & 4,495 & 1,458 & 5-fold & f1-score \\ email-EU & 85,109 & 24,400 & 5-fold & f1-score \\ congress & 732,300 & 83,106 & 5-fold & f1-score \\ \hline \hline \end{tabular}
\end{table}
Table 8: Statistics and evaluation metrics of hyperedge prediction datasets.
### Subgraph Prediction Tasks
Following (Wang and Zhang, 2022), we use three synthetic datasets: density, cut ratio, coreness. The task is to predict the corresponding properties of randomly selected subgraphs in random graphs. Their statistics are shown in Table 9.
|
2307.13011 | Maximal Independent Sets for Pooling in Graph Neural Networks | Convolutional Neural Networks (CNNs) have enabled major advances in image classification through convolution and pooling. In particular, image pooling transforms a connected discrete lattice into a reduced lattice with the same connectivity and allows reduction functions to consider all pixels in an image. However, there is no pooling that satisfies these properties for graphs. In fact, traditional graph pooling methods suffer from at least one of the following drawbacks: graph disconnection or overconnection, low decimation ratio, and deletion of large parts of graphs. In this paper, we present three pooling methods based on the notion of maximal independent sets that avoid these pitfalls. Our experimental results confirm the relevance of maximal independent set constraints for graph pooling. | Stevan Stanovic, Benoit Gaüzère, Luc Brun | 2023-07-24T13:47:30Z | http://arxiv.org/abs/2307.13011v1 |
# Maximal Independent Sets for Pooling in Graph Neural Networks
Convolutional Neural Networks (CNNs) have enabled major advances in image classification through convolution and pooling. In particular, image pooling transforms a connected discrete lattice into a reduced lattice with the same connectivity and allows reduction functions to consider all pixels in an image. However, there is no pooling that satisfies these properties for graphs. In fact, traditional graph pooling methods suffer from at least one of the following drawbacks: Graph disconnection or overconnection, low decimation ratio, and deletion of large parts of graphs. In this paper, we present three pooling methods based on the notion of maximal independent sets that avoid these pitfalls. Our experimental results confirm the relevance of maximal independent set constraints for graph pooling.
Keywords:Graph Neural Networks Graph Pooling Graph Classification Maximal Independent Set Edge Selection
## 1 Introduction
Convolutional Neural Networks (CNNs) achieved major advances in computer vision by learning abstract representations of images thought convolution and pooling. A convolution is a linear filter applied to each pixel of an image which combines its value with the one of its surrounding. The resulting value is usually transformed via a non linear function. The pooling step reduces the size of an image by grouping a connected set of pixels, usually a small squared window, in a single pixel whose value is computed from the ones of window's pixel. Graph Neural Networks (GNNs) take their inspiration from CNNs and aim at transferring advances performed on images to graphs. However, most of CNNs use images with a fixed structure (shape). While using GNN both the structure of a graph and its content varies from one graph to another. Convolution and pooling operations must thus be adapted for graphs.
A GNN may be defined as a sequence of simple graphs \((\mathcal{G}^{(0)},\ldots,\mathcal{G}^{(m)})\) where each \(\mathcal{G}^{(l)}=(\mathcal{V}^{(l)},\mathcal{E}^{(l)})\) is produced by layer \(l\) from \(\mathcal{G}^{(l-1)}\). Sets \(\mathcal{V}^{(l)}\) and \(\mathcal{E}^{(l)}\) denote respectively the set of vertices and the set of edges of the graph. Given \(n_{l}=|\mathcal{V}^{(l)}|\), the graph \(\mathcal{G}^{(l)}\) may be alternatively defined as \(\mathcal{G}^{(l)}=(\mathbf{A}^{(l)},\mathbf{X}^{(l)})\) where \(\mathbf{A}^{(l)}\in\mathbb{R}^{n_{l}\times n_{l}}\)
is the weighted adjacency matrix of \(\mathcal{G}^{(l)}\) while \(\mathbf{X}^{(l)}\in\mathbb{R}^{n_{l}\times f_{l}}\) encodes the nodes' attributes of \(\mathcal{G}^{(l)}\) whose dimension is denoted by \(f_{l}\). Each line \(u\) of \(\mathbf{X}^{(l)}\) encodes the feature of the vertex \(u\) and is denoted by \(x_{u}^{(l)}\).
The final graph \(G^{(m)}\) of a \(GNN\) is usually followed by a Multi-Layer Perceptron (MLP) applied on each vertex for a node prediction task or by a global pooling followed by a MLP for a global graph classification task.
Graph convolution.This operation is mainly realized by a message passing mechanism and allows to learn a new representation for each node by combining the information of the mentioned node and its neighborhood. The neighborhood information is obtained by aggregating all the adjacent nodes information. Therefore, the message passing mechanism can be expressed as follows [8]:
\[\mathbf{x}_{u}^{(l+1)}=UPDATE^{(l)}(\mathbf{x}_{u}^{(l)},AGGREGATE^{(l)}(\{ \mathbf{x}_{v}^{(l)},\forall v\in\mathcal{N}(u)\})) \tag{1}\]
where \(\mathcal{N}(u)\) is the neighborhood of \(u\) and \(UPDATE\), \(AGGREGATE\) correspond to differentiable functions.
Let us note that convolution operations should be permutation equivariant, i.e. for any permutation matrix \(P\in\{0,1\}^{n_{l}\times n_{l}}\) defined at level \(l\), if \(f\) denotes the convolution defined at this layer we must have: \(f(PX^{(l)})=Pf(X^{(l)})\). Note that this last equation, together with equation 1, hides the matrix \(\mathbf{A}^{(l)}\) which nevertheless plays a key role in the definition of the \(AGGREGATE\) function by defining the neighborhood of each node.
Global pooling.For graph level tasks, a fixed size vector needs to be sent to the MLP. However, due to the variable sizes of graphs within a dataset, global pooling must aggregate the whole graph information into a fixed size vector. This operation can be performed by basic operators like sum, mean or maximum. Let note us that more complex aggregation strategies [19] also exist. To insure that two isomorphic graphs have the same representation, global pooling must be invariant to permutations, i.e. for any permutation matrix \(P\), defined at layer \(l\) we must have \(g(PX^{(l)})=g(X^{(l)})\) where \(g\) denotes the global pooling operation.
Hierarchical pooling.Summing up a complex graph into a fixed size vector leads necessarily to an important loss of information. The basic idea to attenuate this loss consists in gradually decreasing the size of the input graph thanks to pooling steps inserted between convolution layers. The resulting smaller final graph induces a reduced loss of information in the final global pooling step. This type of method is called a hierarchical pooling [12, 18]. The hierarchical pooling step, as the convolution operation should be permutation equivariant in order to keep information localised on desired nodes. Conversely, global pooling must be permutation invariant since it computes a graph level representation. Let note that, similar to CNNs, the reduced graph leads to a reduction of parameters in the next convolution. However, this reduction is mitigated by the learned part of hierarchical pooling. Moreover, let us consider a line graph with a signal optimally sampled on its vertices. As shown by [2], most of GNN correspond to a low pass filter. Applying a GNN on this line graph, hence decreases the maximal
frequency of our signal on vertices producing an over sampling according to the Nyquist theorem. More details on optimal sampling on graphs may be found in [1, 15].
Given a graph \(\mathcal{G}^{(l)}=(\mathbf{A}^{(l)},\mathbf{X}^{(l)})\) defined at layer \(l\) and its reduced version \(\mathcal{G}^{(l+1)}=(\mathbf{A}^{(l+1)},\mathbf{X}^{(l+1)})\) defined at level \(l+1\), the connection between \(\mathcal{G}^{(l)}\) and \(\mathcal{G}^{(l+1)}\) is usually insured by the reduction matrix \(\mathbf{S}^{(l)}\in\mathbb{R}^{n_{l}\times n_{l+1}}\) where \(n_{l}\) and \(n_{l+1}\) denote respectively the sizes of \(\mathcal{G}^{(l)}\) and \(\mathcal{G}^{(l+1)}\). If \(\mathbf{S}^{(l)}\) is a binary matrix, each column of \(\mathbf{S}^{(l)}\) encodes the vertices of \(\mathcal{G}^{(l)}\) which are merged into a single vertex at layer \(l+1\). If \(\mathbf{S}^{(l)}\) is real, each line of \(\mathbf{S}^{(l)}\) encodes the distribution of each vertex of \(\mathcal{G}^{(l)}\) over the vertices of \(\mathcal{G}^{(l+1)}\). In both cases, we require \(\mathbf{S}^{(l)}\) to be line-stochastic.
Given \(\mathcal{G}^{(l)}=(\mathbf{A}^{(l)},\mathbf{X}^{(l)})\) and \(\mathbf{S}^{(l)}\), the feature matrix \(\mathbf{X}^{(l+1)}\) of \(\mathcal{G}^{(l+1)}\) is defined as follows:
\[X^{(l+1)}=S^{(l)\top}X^{(l)} \tag{2}\]
This last equation defines the attribute of each surviving vertex \(v_{i}\) as a weighted sum of the attributes of the vertices \(v_{j}\) of \(\mathcal{G}^{(l)}\) such that \(\mathbf{S}^{(l)}_{ji}\neq 0\).
The adjacency matrix of \(\mathcal{G}^{(l+1)}\) is defined by:
\[A^{(l+1)}=S^{(l)\top}A^{(l)}S^{(l)} \tag{3}\]
Let us suppose that \(\mathbf{S}^{(l)}\) is a binary matrix. Each entry \((i,j)\) of \(\mathbf{A}^{(l+1)}\) defined by equation 3 is equal to \(\sum_{r,s}^{n_{l}}\mathbf{A}^{(l)}_{r,s}\mathbf{S}^{(l)}_{r,i}\mathbf{S}^{(l)}_{s,j}\). Hence two surviving vertices \(i\) and \(j\) are adjacent in \(\mathcal{G}^{(l+1)}\) if there exist at least two adjacent non-surviving vertices \(r\) and \(s\) such that \(r\) is merged onto \(i\) (\(\mathbf{S}^{(l)}_{r,i}=1\)) and \(s\) onto \(j\) (\(\mathbf{S}^{(l)}_{s,j}=1\)).
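The two reduction equations are easy to exercise numerically. Below is a small numpy sketch (ours, not from the paper) on a 4-vertex path, with a hand-written binary reduction matrix merging vertices \(\{0,1\}\) and \(\{2,3\}\):

```python
import numpy as np

# Graph with 4 vertices on a path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)

# Binary reduction: {0, 1} merge into cluster 0, {2, 3} into cluster 1.
S = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]], dtype=float)

X_red = S.T @ X      # Equation (2): summed attributes per cluster
A_red = S.T @ A @ S  # Equation (3): weighted adjacency of clusters
print(X_red)  # [[ 2.  4.] [10. 12.]]
print(A_red)  # [[2. 1.] [1. 2.]] -> clusters adjacent, with self-loops
```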
Pooling methods. There are two main families of pooling methods. The first family, called Top-\(k\) methods [7, 12], is based on a selection of relevant vertices according to a learned criterion. The second family is based on node clustering methods, as in DiffPool [18].
Top-\(k\) methods such as gPool [7] learn a score attached to each vertex by computing the scalar product between the vertex's attributes and one or several learned vectors. Alternatively, a GNN can be used to compute a relevance vector for each vertex, as in SagPool [12]. Next, a fixed-ratio pooling is used to select the \(k\) vertices with the highest scores. Unselected vertices are dropped. In this case, two surviving vertices in the reduced graph will be adjacent only if they were adjacent before the reduction. This last point may result in the creation of disconnected reduced graphs. This disconnection may be avoided by increasing the density of the graph, using the power 2 or 3 of its adjacency matrix, or by using the Kron reduction [3] instead of equation 3. Nevertheless, let us note that simply discarding all non-surviving vertices leads to an important loss of information. We proposed in a previous contribution [14] a top-\(k\) pooling method called MIVSPool which avoids such drawbacks by using a maximal independent vertex set and graph contraction operations.
Clustering-based methods learn explicitly or implicitly the matrix \(\mathbf{S}^{(l)}\) which encodes the reduction of a set of vertices at level \(l\) into a single vertex at level \(l+1\). Methods (e.g. [18]) learning \(\mathbf{S}^{(l)}\) explicitly have to use a predetermined number of clusters. This last point forbids the use of graphs of different sizes. Additionally, these methods generally result in dense matrices \(\mathbf{S}^{(l)}\) which then induce dense adjacency matrices at
level \(l+1\) (equation 3). As a consequence, graphs produced by these pooling methods have a density close to 1 (i.e. a complete graph or an almost complete graph).
An alternative strategy consists in learning \(\mathbf{S}^{(l)}\) only implicitly. Graph poolings such as the maximal matching method used in EdgePool [4] may be associated with this strategy. A maximal matching of a graph \(\mathcal{G}^{(l)}=(\mathcal{V}^{(l)},\mathcal{E}^{(l)})\) is a subset \(M\) of \(\mathcal{E}^{(l)}\), where no two edges are incident to a same vertex, and every edge in \(\mathcal{E}^{(l)}\setminus M\) is incident to one of the two endpoints of an edge in \(M\). EdgePool is based on a maximal weighted matching technique, i.e. a maximal matching of maximal weight. The weight of each edge, called its score, is learned using the attributes of its two endpoints. The selected edges are then contracted to form a specific cluster. Note that the use of a maximal weighted matching may result in some vertices not incident to any selected edge. These vertices are left unchanged. The sequential algorithm [4] has been parallelized by Landolfi [11]. Unlike EdgePool, Landolfi [11] learns a score attached to each vertex and sorts all the vertices of the graph according to their score. The weight of each edge is then defined from a combination of the ranks of its incident nodes. The similarity between two adjacent vertices is in this case not taken into account. Moreover, both EdgePool and Landolfi [11] have a decimation ratio lower than 50%, which implies either the need for more pooling steps or a poor abstraction in the final graph of the GNN.
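A minimal sketch of the greedy maximal weighted matching underlying EdgePool-style pooling (our own illustration; edge scores are given here rather than learned): repeatedly pick the highest-scoring edge whose endpoints are still free.

```python
def maximal_weighted_matching(edges):
    """Greedy maximal matching: scan edges by decreasing score and keep
    an edge iff neither endpoint is already matched."""
    matched, matching = set(), []
    for score, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching, matched

# (score, u, v) on a path 0-1-2-3-4
edges = [(0.9, 1, 2), (0.8, 0, 1), (0.7, 2, 3), (0.4, 3, 4)]
matching, matched = maximal_weighted_matching(edges)
print(matching)                   # [(1, 2), (3, 4)]
print({0, 1, 2, 3, 4} - matched)  # {0}: left unchanged by the pooling
```

Note how vertex 0 remains unmatched: only 4 of the 5 vertices are contracted, which illustrates why such matchings yield a decimation ratio below 50%.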
In this paper, we propose a unified family of graph pooling methods which maintains a decimation ratio of approximately 50%, while simultaneously preserving both the structure of the original graph and its attribute information. We achieve this by using a Maximal Independent Set (MIS) [9] to select surviving edges that are evenly distributed throughout the graph, and by assigning non-surviving elements to those that do survive. As a result, we avoid any subsampling or oversampling issues that may arise (see Figure 2). The source code of the paper is available on the CodeGNN ANR Project Git repository: [https://scm.univ-tours.fr/projectspublics/lifat/codegnn](https://scm.univ-tours.fr/projectspublics/lifat/codegnn).
Figure 1: General architecture of our GNN. Each block is composed of a convolution layer followed by a pooling layer. Features learned after each block are sent to the next block and a Readout layer. The \(K\) vectors resulting from each Readout are concatenated to have several levels of description of the graph and, finally, the concatenation is sent to a Multi-Layer Perceptron.
## 2 Maximal Independent Sets and Graph Poolings
### Maximal Independent Set (MIS) and Meer's algorithm
Definition: Let \(\mathcal{X}\) be a finite set and \(\mathcal{N}\) a neighborhood function defined on \(\mathcal{X}\) such that the neighborhood of each element includes the element itself. A subset \(\mathcal{J}\) of \(\mathcal{X}\) is a Maximal Independent Set (MIS) if the two following equations are fulfilled:
\[\forall(x,y)\in\mathcal{J}^{2}:x\notin\mathcal{N}(y) \tag{4}\] \[\forall x\in\mathcal{X}-\mathcal{J},\exists y\in\mathcal{J}:x\in \mathcal{N}(y) \tag{5}\]
The elements of \(\mathcal{J}\) are called the surviving elements or survivors. Equations (4) and (5) respectively state that two surviving elements cannot be neighbors and that each non-surviving element has to be in the neighborhood of at least one element of \(\mathcal{J}\). These two equations can be interpreted as a subsampling operation where Equation (4) prevents oversampling (two adjacent elements cannot both be selected) while Equation (5) prevents subsampling: any non-surviving element is at distance 1 from a surviving one.
A way to compute a MIS is Meer's algorithm [13], which only involves local computations and is therefore parallelizable. This algorithm attaches a value to each element. Let us denote by \(\mathcal{J}\) the current independent set at an iteration of the algorithm, and let us consider the value \(v_{x}\) attached to an element \(x\). Then \(x\) is added to \(\mathcal{J}\) at the current iteration if \(v_{x}\) is maximal among the values of \(\mathcal{N}(x)-\mathcal{N}(\mathcal{J})\), where \(\mathcal{N}(\mathcal{J})\) denotes \(\mathcal{J}\) and its neighbors. Meer's algorithm thus provides a maximal independent set such that each of its elements is a local maximum at a given step of the algorithm. We can thus interpret the resulting set as a maximal weight independent set.
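The following sketch illustrates a Meer-style iterative MIS computation (our simplification of [13]; breaking ties by index is an assumption of ours, needed so that two adjacent elements with equal values cannot both join \(\mathcal{J}\)):

```python
def meer_mis(values, neighbors):
    """Iteratively add local maxima to the independent set J until every
    element is either in J or adjacent to J."""
    n = len(values)
    key = lambda x: (values[x], x)  # index breaks ties between neighbors
    J, covered = set(), set()
    while len(covered) < n:
        for x in [y for y in range(n) if y not in covered]:
            free_nbhd = [y for y in neighbors[x] if y not in covered]
            # x survives if its value is maximal among still-free neighbors
            if all(key(x) > key(y) for y in free_nbhd):
                J.add(x)
        for x in J:  # J and its neighborhood are now covered
            covered.add(x)
            covered |= neighbors[x]
    return J

# 5 elements on a path, with a value attached to each of them
neighbors = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(meer_mis([0.2, 0.9, 0.1, 0.3, 0.8], neighbors))  # {1, 4}
```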
Assignment of non-surviving elements. After a MIS computation, \(\mathcal{X}\) is split into two subsets: the surviving elements contained in the set \(\mathcal{J}\) and the non-surviving elements contained in \(\mathcal{X}-\mathcal{J}\). Simply considering \(\mathcal{J}\) as a digest of \(\mathcal{X}\) would correspond to an important loss of information which simply discards \(\mathcal{X}-\mathcal{J}\). In order to avoid such a loss, we allow each non-surviving element contained in \(\mathcal{X}-\mathcal{J}\) to transfer its information to a survivor.
Figure 2: General proposition of our three graph poolings. Each edge is associated with a similarity score (Section 2). Based on this similarity, a MIS on edges is computed, from which a reduction matrix \(S\) is derived. Applying \(S\) to both feature and structure leads to a reduced graph \(G^{(l+1)}\).
The possibility of such a transfer is ensured by Equation (5), which states that each non-surviving element is adjacent to at least one survivor. We can thus associate to any non-surviving element \(x_{j}\) a surviving neighbor denoted by \(\sigma(x_{j})\). At layer \(l\), the global assignment of non-surviving elements onto surviving ones is encoded by the reduction matrix \(\mathbf{S}^{(l)}\in\mathbb{R}^{n_{l}\times n_{l+1}}\) such that:
\[\mathbf{S}^{(l)}_{ii}=1\quad\forall x_{i}\in\mathcal{J}\text{ and }\mathbf{S}^{(l)}_{j \sigma(j)}=1\quad\forall x_{j}\in\mathcal{X}-\mathcal{J} \tag{6}\]
with \(\mathbf{S}^{(l)}_{ij}=0\) otherwise.
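As an illustration, the following NumPy sketch (our own; function and variable names are illustrative) builds this binary reduction matrix from a survivor list and an assignment map \(\sigma\):

```python
import numpy as np

def reduction_matrix(n, survivors, sigma):
    """survivors: ordered list of surviving element ids (the MIS J);
    sigma: dict mapping each non-survivor j to its assigned survivor."""
    col = {s: c for c, s in enumerate(survivors)}   # survivor -> column index
    S = np.zeros((n, len(survivors)))
    for i in range(n):
        # Equation (6): S_ii = 1 for survivors, S_{j sigma(j)} = 1 otherwise
        S[i, col[i] if i in col else col[sigma[i]]] = 1.0
    return S

# Toy example reusing the MIS {1, 3} on the path graph 0-1-2-3:
S = reduction_matrix(4, survivors=[1, 3], sigma={0: 1, 2: 3})
print(S)   # rows 0 and 1 map to column 0 (survivor 1); rows 2 and 3 to column 1
```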
### Maximal Independent Sets for Graph Pooling
Based on the work of [9], defined within the image partitioning framework, we introduce in the following three adaptations of these methods in order to define learnable pooling steps. In the following sections, the adjacency matrix \(\mathbf{A}^{(l+1)}\) is obtained from \(\mathbf{A}^{(l)}\) and a binary version of \(\mathbf{S}^{(l)}\) using Equation (3).
Maximal Independent Edge Set.Most pooling methods are based on a projection score for each vertex. This strategy relies on the assumption that we can learn features characterizing relevant vertices for a given classification task. However, even if this hypothesis holds, two adjacent vertices may have similar scores, and the choice of the survivor is in this case arbitrary. An alternative strategy consists in merging similar
Figure 3: Schema of our proposed methods on a toy graph. The number on each edge corresponds to its score \(s\), and the bold edges indicate the surviving ones. Each group of vertices with the same color represents a cluster. Figures 2(a) and 2(b) are common steps for MIES and MIESCut.
nodes. Given a GNN with hierarchical pooling, the graph sequence corresponds to an increasing abstraction of the initial graph. Consequently, the vertices at each layer of the GNN encode different types of information. Based on this observation, we decided to learn a similarity measure between adjacent vertices at each layer. Inspired by [16], we define the similarity at layer \(l\) between two adjacent vertices \(u\) and \(v\) as \(s_{uv}^{(l)}=\exp(-\|\mathbf{W}^{(l)}.(x_{u}-x_{v})\|)\), where \(x_{u}\) and \(x_{v}\) are the features of vertices \(u\) and \(v\), \(\mathbf{W}^{(l)}\) is a learnable matrix, and \(\|.\|\) is the \(L_{2}\) norm.
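A possible PyTorch sketch of this learnable score is given below; the module name, the projection dimension, and the edge-list encoding are our own illustrative choices, not part of the paper:

```python
import torch

class EdgeSimilarity(torch.nn.Module):
    """s_uv = exp(-||W (x_u - x_v)||_2) for every edge of the layer."""
    def __init__(self, in_dim, proj_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, proj_dim, bias=False)  # learnable W^(l)

    def forward(self, x, edge_index):
        # x: (n, d) node features; edge_index: (2, m) endpoints of each edge
        diff = self.W(x[edge_index[0]] - x[edge_index[1]])
        return torch.exp(-diff.norm(p=2, dim=-1))               # scores in (0, 1]

x = torch.randn(4, 8)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
print(EdgeSimilarity(8, 16)(x, edges))
```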
Given the maximal weighted matching \(\mathcal{J}^{(l)}\) defined at level \(l\), each vertex of \(\mathcal{G}^{(l)}\) is incident to at most one edge of \(\mathcal{J}^{(l)}\). If \(u\in\mathcal{V}^{(l)}\) is not incident to \(\mathcal{J}^{(l)}\) its features are just duplicated at the next layer. Otherwise, \(u\) is incident to one edge \(e_{uv}\in\mathcal{J}^{(l)}\) and both \(u\) and \(v\) are contracted at the next layer. Since \(u\) and \(v\) are supposed to be similar the attributes of the vertex encoding the contraction of \(u\) and \(v\) at the next layer must be symmetrical according to \(u\) and \(v\). To do so, we first define the attribute of \(e_{uv}\) as
\[x_{uv}=\frac{1}{2}(x_{u}^{(l)}+x_{v}^{(l)}) \tag{7}\]
where \(x_{u}\) and \(x_{v}\) are the features of vertices \(u\) and \(v\). The attribute of the merged vertex is then defined as \(s_{uv}x_{uv}\).
An equivalent update of the attributes of the reduced graph may be obtained by computing the matrix \(\mathbf{S}^{(l)}\) encoding the transformation from \(\mathcal{G}^{(l)}\) to \(\mathcal{G}^{(l+1)}\). This matrix can be defined as \(\mathbf{S}_{ii}^{(l)}=1\) if \(i\) is not incident to \(\mathcal{J}^{(l)}\), and by arbitrarily selecting one survivor among \(\{u,v\}\) if \(e_{uv}\in\mathcal{J}^{(l)}\). If \(u\) is selected, we set \(\mathbf{S}_{uu}^{(l)}=\mathbf{S}_{vu}^{(l)}=\frac{1}{2}s_{uv}\). All remaining entries of \(\mathbf{S}^{(l)}\) are set to \(0\). Matrix \(\mathbf{X}^{(l+1)}\) can then be obtained using Equation (2). We call this method MIESPool, and the main steps are presented in Figures 2(a) to 2(c).
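To make the full reduction step concrete, here is a minimal NumPy sketch of MIESPool. Since Equations (2) and (3) are not reproduced in this section, we assume the usual hierarchical-pooling form \(\mathbf{X}^{(l+1)}={\mathbf{S}^{(l)}}^{T}\mathbf{X}^{(l)}\) and \(\mathbf{A}^{(l+1)}=\mathbf{S}_{b}^{T}\mathbf{A}^{(l)}\mathbf{S}_{b}\), with \(\mathbf{S}_{b}\) the binary version of \(\mathbf{S}^{(l)}\); all names are illustrative:

```python
import numpy as np

def mies_pool_step(X, A, matching, scores, n_out):
    """matching: list of (u, v) pairs from the MIS on edges;
    scores: dict {(u, v): s_uv}; unmatched vertices are copied."""
    n = X.shape[0]
    S = np.zeros((n, n_out))
    matched = {w for uv in matching for w in uv}
    cols = iter(range(n_out))
    for u, v in matching:
        c = next(cols)
        S[u, c] = S[v, c] = 0.5 * scores[(u, v)]   # Eq. (7) weighting of the pair
    for w in range(n):
        if w not in matched:
            S[w, next(cols)] = 1.0                 # isolated vertex is duplicated
    Sb = (S > 0).astype(float)
    return S.T @ X, Sb.T @ A @ Sb                  # reduced features / structure

X = np.random.randn(4, 3)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
Xr, Ar = mies_pool_step(X, A, matching=[(0, 1)], scores={(0, 1): 0.8}, n_out=3)
print(Xr.shape, Ar.shape)   # (3, 3) (3, 3)
```

Note that \(\mathbf{S}^{T}\mathbf{X}\) applied to a merged pair yields \(\frac{1}{2}s_{uv}(x_{u}+x_{v})=s_{uv}x_{uv}\), consistent with the attribute update described above.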
Maximal Independent Edge Set with Cut (MIESCut)Graph reduction through maximal weighted matching has two main drawbacks within the GNN framework. First, a maximal matching may produce many vertices not adjacent to the set of selected edges. Such vertices are just copied to the next level, which induces a low decimation ratio (lower than \(50\%\)). Given that the number of layers of a GNN is usually fixed, this drawback may produce a graph with an insufficient level of abstraction at the final layer of the GNN. Second, only the scores of the selected edges are used to compute the reduced attributes. This reduces the number of scores used for the back-propagation and hence the quality of the learned similarity measures.
As in the previous section, let us denote by \(\mathcal{J}^{(l)}\) the maximal weighted matching defined at layer \(l\). By definition of a maximal weighted matching, each vertex not incident to \(\mathcal{J}^{(l)}\) is adjacent to at least one vertex which is incident to \(\mathcal{J}^{(l)}\). Following [9], we increase the decimation ratio, by attaching isolated vertices to contracted ones. This operation is performed by selecting for each isolated vertex \(u\) the edge \(e_{uv}\) such that \(s_{uv}\) is maximal and \(v\) is incident to \(\mathcal{J}^{(l)}\).
This operation provides a spanning forest of \(\mathcal{G}^{(l)}\) composed of isolated edges, trees of depth one (called stars) with one central vertex, and paths of length 3. This last type of tree corresponds to a sequence of \(4\) vertices with strong similarities between any pair of adjacent vertices along the path. However, merging all \(4\) vertices into a single one would implicitly apply a transitivity hypothesis on our similarity measure twice: the fact that the two extremities of the path are similar is not explicitly encoded by our selection of edges. In order to avoid such an assumption, we remove the central edge of such paths from the selection so as to obtain two isolated edges (see Figures 2(d) to 2(f)).
Let us denote by \({\mathcal{J^{\prime}}}^{(l)}\) the resulting set of selected edges, which forms a spanning forest of \(\mathcal{G}^{(l)}\) composed of isolated edges and stars. Concerning the definition of \(\mathbf{S}^{(l)}\), we apply the same procedure as in the previous section for isolated edges. For stars, we select the central vertex as the surviving vertex. Let us denote by \(u\) such a star's center. We then set \(\mathbf{S}^{(l)}_{uu}=\frac{1}{2}\) and \(\mathbf{S}^{(l)}_{vu}=\frac{1}{2M}s_{uv}\) for any \(v\) such that \(e_{uv}\in{\mathcal{J^{\prime}}}^{(l)}\), where \(M\) is a normalizing factor defined as \(M=\sum_{v|e_{uv}\in{\mathcal{J^{\prime}}}^{(l)}}s_{uv}\). The computation of the attributes of the reduced graph using Equation (2) and matrix \(\mathbf{S}^{(l)}\) is equivalent to computing, for each star's center \(u\), the score-weighted sum of the attributes (Equation (7)) of the edges incident to \(u\) and belonging to \({\mathcal{J^{\prime}}}^{(l)}\):
\[x^{(l+1)}_{u}=\frac{1}{\sum_{v|e_{uv}\in{\mathcal{J^{\prime}}}^{l}}s_{uv}}\sum _{v|e_{uv}\in{\mathcal{J^{\prime}}}^{l}}s_{uv}x^{(l)}_{uv} \tag{8}\]
Maximal Independent Directed Edge Set.The definition of a spanning forest composed of isolated edges and stars is obtained in three steps by MIESCut: the definition of a maximal weight matching, the attachment of isolated vertices, and the cut of all paths of length 3. Following [9], we propose to use the Maximal Independent Directed Edge Set (MIDES) reduction scheme, which obtains the same type of spanning forest in a single step. This reduction scheme is based on a decomposition of the edges \(e_{uv}\) of the undirected graph into two oriented edges \(e_{u\to v}\) and \(e_{v\to u}\). The neighborhood of an oriented edge \(\mathcal{N}(e_{u\to v})\) is defined as the union of the sets of edges leaving \(u\), arriving on \(u\), and leaving \(v\). Given \(\mathcal{G}^{(l)}\) defined at layer \(l\), we formally have:
\[\mathcal{N}^{(l)}(e_{u\to v})=\{e_{u\to v^{\prime}}\in\mathcal{E}^{(l)}\} \cup\{e_{v^{\prime}\to u}\in\mathcal{E}^{(l)}\}\cup\{e_{v\to v^{\prime}}\in \mathcal{E}^{(l)}\} \tag{9}\]
The main difference between the neighborhoods defined by equation 9 and the one of MIES is that we do not include in the neighborhood edges arriving on \(v\). This asymmetry allows the creation of stars centered on \(v\). The MIDES algorithm computes a MIS on the edge set using the neighborhood defined by (9) (see Figures 2(g) to 2(i)).
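The following few lines sketch this asymmetric neighborhood on edges encoded as (tail, head) pairs; the encoding and function name are our own illustrative choices:

```python
# Directed-edge neighborhood of Equation (9).
def mides_neighborhood(e, directed_edges):
    u, v = e
    return {f for f in directed_edges
            if f[0] == u          # edges leaving u
            or f[1] == u          # edges arriving on u
            or f[0] == v}         # edges leaving v (edges arriving on v excluded)

E = {(0, 1), (1, 0), (1, 2), (2, 1)}
print(mides_neighborhood((0, 1), E))  # (2, 1) arrives on v and is not a neighbor
```

The excluded case is precisely what allows two selected edges to share the same head \(v\), hence the stars centered on \(v\).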
At layer \(l\), applying a MIDES on \(\mathcal{G}^{(l)}\) requires defining a score function on directed edges. We propose to use \(s_{uv}=\exp(-\|W.(x_{u}-x_{v})+b\|)\), where the bias term \(b\) introduces an asymmetry so that \(s_{uv}\neq s_{vu}\) if \(x_{u}\neq x_{v}\).
Let us denote by \(\mathcal{D}^{(l)}\) the set of directed edges produced by a MIDES on \(\mathcal{G}^{(l)}\) using our scoring function. The set \(\mathcal{D}^{(l)}\) defines on \(\mathcal{G}^{(l)}\) a spanning forest composed of isolated vertices, isolated edges and stars [9].
For an isolated vertex \(u\) we duplicate this vertex at the next layer and copy its attributes. We thus set \(\mathbf{S}^{(l)}_{uu}=1\).
For an isolated directed edge \(e_{u\to v}\in\mathcal{D}^{(l)}\) we select \(v\) as a surviving vertex and set \(\mathbf{S}^{(l)}_{vv}=\frac{s_{uv}}{M}\) and \(\mathbf{S}^{(l)}_{uv}=\frac{s_{vu}}{M}\) where \(M=s_{uv}+s_{vu}\). This setting corresponds to the following update of the attributes: \(x^{(l+1)}_{v}=(s_{uv}.x^{(l)}_{v}+s_{vu}.x^{(l)}_{u})/(s_{uv}+s_{vu})\). Let us
note that since \(e_{u\to v}\in\mathcal{D}^{(l)}\), we have \(s_{uv}>s_{vu}\). The previous formula thus puts more weight on the surviving vertex \(v\). This update may be considered as a generalization of Equation (7) using the asymmetric scores \(s_{uv}\) and \(s_{vu}\).
A star within the MIDES framework is defined by a set of edges \(e_{w\to v}\) of \(\mathcal{D}^{(l)}\) arriving on the same vertex \(v\). We then set \(v\) as survivor and generalize the update of the attributes defined for isolated edges by setting \(\mathbf{S}_{vv}^{(l)}=\frac{1}{N}\sum_{u|e_{u\to v}\in\mathcal{D}^{(l)}} \frac{s_{uv}}{M_{u}}\) and \(\mathbf{S}_{uv}^{(l)}=\frac{1}{N}\frac{s_{vu}}{M_{u}}\) for all \(u\) such that \(e_{u\to v}\in\mathcal{D}^{(l)}\) where \(M_{u}=s_{uv}+s_{vu}\) and \(N\) is the number of such \(u\). Such a definition of \(\mathbf{S}^{(l)}\) is equivalent to set the updated attribute of \(v\) as the mean value of its incident selected edges:
\[x_{v}^{(l+1)}=\frac{1}{N}\sum_{u|e_{u\to v}\in\mathcal{D}^{(l)}}\frac{s_{uv}x _{v}^{(l)}+s_{vu}x_{u}^{(l)}}{s_{uv}+s_{vu}}\text{ with }N=|\{u\in\mathcal{V}^{(l)}\,|\,e_{u\to v}\in\mathcal{D}^{(l)}\}|.\]
## 3 Experiments
Datasets.We evaluate our contribution on a bio-informatics dataset and a social dataset, respectively called D&D [5] and REDDIT-BINARY [17], whose statistics are reported in Table 1. The aim of D&D is to classify proteins as either enzyme or non-enzyme. Nodes represent the amino acids, and two nodes are connected by an edge if they are less than 6 Angstrom apart. REDDIT-BINARY is composed of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them responds to the other's comment. A graph is labeled according to whether it belongs to a question/answer-based community or a discussion-based community.
Model Architecture and Training Procedure.Our model architecture is composed of \(K\) blocks, where each block consists of a GCN [10] convolution layer followed by a pooling layer. The vector resulting from each pooling operation is then sent to the next block (if it exists) and a Readout layer. A Readout layer concatenates the average and the maximum of the vertices' feature matrix \(\mathbf{X}^{(l)}\), and these \(K\) concatenations are themselves concatenated and sent to a Multi-Layer Perceptron (MLP). The MLP is composed of three fully connected layers, and dropout is applied between each of them. Finally, a Softmax layer is used to determine the binary class of graphs. Note that no batch normalization is applied (Figure 1).
To evaluate our model, we use the training procedure proposed by [6]. This procedure performs an outer 10-fold cross-validation (CV) to split the dataset into ten training and
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & **\#Graphs** & **\#Classes** & **Avg**\(|\mathcal{V}|\) & **Avg**\(|\mathcal{E}|\) \\ \hline D\&D [5] & \(1178\) & \(2\) & \(284\pm 272\) & \(715\pm 694\) \\ REDDIT-BINARY [17] & \(2000\) & \(2\) & \(430\pm 554\) & \(498\pm 623\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of datasets
test sets. For each outer fold, another 10-fold CV (inner) is applied to the training set to select the best hyperparameter configuration. Concerning hyperparameters, the learning rate is set to \(10^{-3}\), the weight decay to \(10^{-4}\), and the batch size to \(512\). Other hyperparameters are tuned using a grid search to find the best configuration. Possible values for the hidden layer sizes are \(\{64,128\}\), the dropout ratio is chosen within \(\{0.2,0.5\}\), and the number of blocks \(K\) between 1 and 5. We use the Adam optimizer, and the maximal number of epochs is set to \(1000\) with an early stopping strategy triggered if the validation loss has not improved for 100 epochs. For EdgePool, due to time constraints, we fixed the hidden layer size at 128 and the dropout ratio at 0.5.
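For readers unfamiliar with this protocol, a compact sketch of the nested cross-validation loop is given below; `train_and_eval` is a placeholder for one full training run of the GNN returning an accuracy, and the scikit-learn-based skeleton is our own illustrative rendering of the procedure of [6]:

```python
from sklearn.model_selection import KFold
import numpy as np

def nested_cv(n_graphs, configs, train_and_eval, k=10):
    """Outer CV estimates test accuracy; inner CV selects hyperparameters."""
    idx = np.arange(n_graphs)
    outer = KFold(n_splits=k, shuffle=True, random_state=0)
    test_scores = []
    for out_tr, out_te in outer.split(idx):
        inner = KFold(n_splits=k, shuffle=True, random_state=0)
        # config with the best mean inner-fold validation accuracy
        best = max(configs, key=lambda cfg: np.mean(
            [train_and_eval(cfg, out_tr[in_tr], out_tr[in_va])
             for in_tr, in_va in inner.split(out_tr)]))
        test_scores.append(train_and_eval(best, out_tr, out_te))
    return np.mean(test_scores), np.std(test_scores)
```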
We compare, in Table 2, our methods to five state-of-the-art methods: Baseline (\(K\) blocks of GCN [10]), gPool [7], SagPool [12], EdgePool [4], and MIVSPool [14], our previous MIS method. First, we note that the baseline obtains quite good results while not implementing any pooling strategy. This highlights the fact that defining a good pooling operation is not trivial. State-of-the-art methods mostly fail at this task, likely due to the significant loss of information resulting from the hard selection of surviving vertices using a top\(-k\) strategy. This hypothesis is confirmed by the better results obtained by MIVSPool. Let us note also that for D&D, based on T-tests with a significance level of 5%, the average accuracy of EdgePool is statistically lower than those of the MIS methods. Second, we can observe that the strategies combining edge selection methods and MIS (MIESPool, MIESCutPool, MIDESPool) achieve either the highest or the second highest performances. These empirical results tend to demonstrate that selection on edges may be more relevant, and that a MIS strategy improves the effectiveness of the pooling over EdgePool. Finally, the best results are obtained by different MIS strategies, indicating that the right MIS strategy may be dataset-dependent. This hypothesis has to be tested using more extensive hyperparameter selection.
## 4 Conclusion
Graph poolings based on Maximal Independent Sets (MIS) allow, unlike state-of-art methods, to maintain a fixed decimation ratio close to 50%, to preserve vertex information and to avoid subsampling and oversampling. Results obtained by our three methods
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Methods** & **D\&D[5]** & **REDDIT-BINARY [17]** \\ \hline Baseline & \(76.29\pm 2.33\) & \(87.07\pm 4.72\) \\ gPool [7] & \(75.61\pm 2.74\) & \(84.37\pm 7.82\) \\ SagPool [12] & \(76.15\pm 2.88\) & \(85.63\pm 6.26\) \\ EdgePool [4] & \(72.59\pm 3.59\) & \(87.25\pm 4.78\) \\ MIVSPool [14] & \(76.35\pm 2.09\) & \(\mathbf{88.73\pm 4.43}\) \\ MIESPool & \(77.17\pm 2.33\) & \(88.08\pm 4.55\) \\ MIESCutPool & \(\mathbf{77.74\pm 2.85}\) & \(86.47\pm 4.57\) \\ MIDESPool & \(76.52\pm 2.21\) & \(88.40\pm 4.74\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average classification accuracies obtained by different pooling methods. Highest and second highest accuracies are respectively in **bold** and blue. \(\pm\) indicates the \(95\%\) confidence interval of classification accuracy.
based on MIS confirm the interest of this approach, but further investigations on other datasets are needed to conclude on the effectiveness of our methods. The design of alternative similarity scores also constitutes a promising line of research.
Acknowledgements:The work reported in this paper was supported by French ANR grant #ANR-21-CE23-0025 CoDeGNN and was performed using HPC resources from GENCI-IDRIS (Grant 2022-AD011013595) and computing resources of CRIANN (Grant 2022001, Normandy, France).
|
2308.06299 | Defensive Perception: Estimation and Monitoring of Neural Network
Performance under Deployment | In this paper, we propose a method for addressing the issue of unnoticed
catastrophic deployment and domain shift in neural networks for semantic
segmentation in autonomous driving. Our approach is based on the idea that deep
learning-based perception for autonomous driving is uncertain and best
represented as a probability distribution. As autonomous vehicles' safety is
paramount, it is crucial for perception systems to recognize when the vehicle
is leaving its operational design domain, anticipate hazardous uncertainty, and
reduce the performance of the perception system. To address this, we propose to
encapsulate the neural network under deployment within an uncertainty
estimation envelope that is based on the epistemic uncertainty estimation
through the Monte Carlo Dropout approach. This approach does not require
modification of the deployed neural network and guarantees expected model
performance. Our defensive perception envelope has the capability to estimate a
neural network's performance, enabling monitoring and notification of entering
domains of reduced neural network performance under deployment. Furthermore,
our envelope is extended by novel methods to improve the application in
deployment settings, including reducing compute expenses and confining
estimation noise. Finally, we demonstrate the applicability of our method for
multiple different potential deployment shifts relevant to autonomous driving,
such as transitions into the night, rainy, or snowy domain. Overall, our
approach shows great potential for application in deployment settings and
enables operational design domain recognition via uncertainty, which allows for
defensive perception, safe state triggers, warning notifications, and feedback
for testing or development and adaptation of the perception stack. | Hendrik Vogt, Stefan Buehler, Mark Schutera | 2023-08-11T07:45:36Z | http://arxiv.org/abs/2308.06299v1 | # Defensive Perception: Estimation and Monitoring of Neural Network Performance under Deployment
###### Abstract
In this paper, we propose a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving. Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution. As autonomous vehicles' safety is paramount, it is crucial for perception systems to recognize when the vehicle is leaving its operational design domain, anticipate hazardous uncertainty, and reduce the performance of the perception system. To address this, we propose to encapsulate the neural network under deployment within an uncertainty estimation envelope that is based on the epistemic uncertainty estimation through the Monte Carlo Dropout approach. This approach does not require modification of the deployed neural network and guarantees expected model performance. Our _defensive perception envelope_ has the capability to estimate a neural network's performance, enabling monitoring and notification of entering domains of reduced neural network performance under deployment. Furthermore, our envelope is extended by novel methods to improve the application in deployment settings, including reducing compute expenses and confining estimation noise. Finally, we demonstrate the applicability of our method for multiple different potential deployment shifts relevant to autonomous driving, such as transitions into the night, rainy, or snowy domain. Overall, our approach shows great potential for application in deployment settings and enables operational design domain recognition via uncertainty, which allows for defensive perception, safe state triggers, warning notifications, and feedback for testing or development and adaptation of the perception stack.
39\({}^{\text{th}}\) Conference on Uncertainty in Artificial Intelligence, within the Workshop on Epistemic Uncertainty
in Artificial Intelligence (E-pi UAI 2023), Pittsburgh, USA.
## 1 Introduction
The ability to accurately perceive semantic information is critical in autonomous driving and automotive vision. Neural networks can perform this task through semantic segmentation, where a neural network assigns a class label to each pixel of an input image [22]. State-of-the-art semantic segmentation models have achieved high performance across a wide range of scenarios and domains [17, 23].
Pixel-wise semantic segmentation is the task of assigning every pixel of the input image a class prediction. Because every pixel is classified with a value, semantic segmentation is capable of drawing fine outlines, generating high-semantic information central to modern perception within autonomous driving. In this work, we utilize DeeplabV3+ [2] as a base model for pixel-wise semantic segmentation and modify it for use with Monte Carlo Dropout. By inserting dropout layers [1] after every 2D convolution layer, we demonstrate that any model can be modified to perform Monte Carlo Dropout, even after training.
Deploying neural networks as part of the perception system for an autonomous vehicle requires a clear understanding of the operational design domain (ODD) in which the vehicle will operate. Therefore, the ODD is defined explicitly, and data is gathered and used to optimize and validate the neural network for this specific domain. However, in real-world applications, deep convolutional neural networks (CNNs) may be exposed to substantially different data from the training data, leading to a phenomenon known as deployment shift [14, 15]. This deployment shift can occur due to various reasons like a change in time of day, weather, landmarks, object appearance, and traffic conditions, making it impossible to consider every possible scenario, use case, and road condition while defining the ODD
[Berman, 2019]. The limitations of ODD definitions in ensuring safety in autonomous driving are highlighted by the fact that they cannot cover every possible change in the real world, which may cause severe safety risks and fatalities [Banks et al., 2018, Hullermeier and Waegeman, 2021, National Highway Traffic Safety Administration, 2022]. Hence, there is an emerging need for autonomous systems to recognize and understand when they are in unknown and potentially unsafe situations, especially during inference once the system has been deployed. Safety-inducing strategies that detect unknown situations and transition the system into a safe state, referred to as defensive perception, are required. The perception system needs a mechanism to detect when it is leaving the ODD and to anticipate hazardous uncertainty, which reduces the performance of the perception system.
As previously stated, deployment shift, the phenomenon of a neural network being exposed to data from a different domain than its training data, can occur in real-world applications of autonomous vehicles. However, detecting deployment shift is difficult due to the lack of correlation between deployment shift and drop in prediction confidence, as highlighted by Nguyen et al. in [Nguyen et al., 2015].
Several state-of-the-art approaches have been proposed to address this issue of out-of-domain detection and uncertainty estimation in deep neural networks as discussed in [Gawlikowski et al., 2021]. For instance, Du et al. proposed Virtual Outlier Synthesis (VOS) [Du et al., 2022], a method that synthesizes outliers for additional training to generate a clear boundary between in- and out-of-domain data. Another approach [Chan et al., 2021] shows that by retraining a semantic segmentation model on unknown objects to maximize the prediction's softmax entropy, the uncertainty of specific out-of-domain object instances can be detected. Additionally, deploying auxiliary models such as an Essence Neural Network [Blazek and Lin, 2021], a Posterior Neural Network [Charpentier et al., 2020], or Meta Classifiers [Rottmann et al., 2018] enables the estimation of domain affiliation or uncertainty of a sample's prediction.
While these approaches may require additional models, architectural adaptions [Sensoy et al., 2018] or dedicated training processes [Van Amersfoort et al., 2020], another alternative is to use minimally invasive approaches such as Monte Carlo Dropout [Rottmann and Schubert, 2019] for estimating epistemic uncertainty in deep neural networks.
Conventional neural networks struggle to express prediction confidence, especially when leaving the source domain they have been trained on.
Dropout was originally introduced to prevent a neural network from overfitting and was thus only applied during training to generalize the model's predictions [Hinton et al., 2012]. Monte Carlo Dropout was later introduced as a method to measure the uncertainty of non-Bayesian neural networks by also applying dropout during inference and determining the model's predictive distribution [Gal and Ghahramani, 2016].
Monte Carlo Dropout mimics the prediction distributions \(q(\mathbf{y}|\mathbf{x})\) of multiple sub-networks by deploying dropout layers throughout the complete network and performing multiple forward passes \(T\), where \(\mathbf{W}_{i}\) denotes the network's weight matrices and \(L\) the number of layers. The deviations in the sub-networks' predictions \(\hat{\mathbf{y}}\) are then utilized to express the epistemic uncertainty of the entire model on a single frame \(\mathbf{x}\), referred to as Monte Carlo Dropout, giving the estimated approximate predictive distribution:
\[\mathbb{E}_{q(\mathbf{y}|\mathbf{x})}(\mathbf{y})\approx\frac{1}{T}\sum_{t=1}^ {T}\hat{\mathbf{y}}(\mathbf{x},\mathbf{W}_{1}^{t},\dots,\mathbf{W}_{L}^{t}). \tag{1}\]
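The following PyTorch sketch illustrates Equation (1); the toy model, the choice of \(T\), and all names are our own illustrative assumptions, and only the dropout layers are switched to their stochastic mode:

```python
import torch

def mc_dropout_predict(model, x, T=20):
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                        # keep only dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(T)])
    return samples.mean(dim=0), samples      # Eq. (1) mean and all T samples

# Toy usage on an illustrative classifier:
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                          torch.nn.Dropout(p=0.4), torch.nn.Linear(16, 10))
mean, samples = mc_dropout_predict(net, torch.randn(1, 8))
```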
In this paper, we propose a method for estimating epistemic uncertainty during the inference of semantic neural networks for autonomous driving using Monte Carlo Dropout, as the widely accepted uncertainty measurement at inference [Labach et al., 2019].
In contrast to the state-of-the-art uncertainty measurements mentioned above, which rely on techniques other than Monte Carlo Dropout, the approach presented here aims to be applicable to any neural network at inference time and in real time, without requiring changes at training or test time and without additional data.
Incorporation into our _defensive perception envelope_, which monitors uncertainty during deployment, demonstrates that epistemic uncertainty can serve as a proxy for model performance. This novel approach allows us to detect when the system is operating outside of its intended domain and provides an online cue for prediction performance. Furthermore, by imposing thresholds on the uncertainty value, we can define triggers that can be used to implement safety measures such as warning notifications for the driver or even transitions into a _safe state_ where the vehicle engages other safety systems and for example, reduces its speed. The main contributions of this paper are:
* A safety envelope that integrates Monte Carlo Dropout [Labach et al., 2019] into semantic segmentation for autonomous driving scenarios.
* A novel entropy measure that captures model performance and domain shifts during deployment and inference.
* An adaptation of the Monte Carlo Dropout method that utilizes rolling forward passes to improve computational efficiency during deployment.
## 2 Novel Methods and Metrics
### Novel Concept for Defensive Perception
In the following, we present a detailed overview of our proposed method for estimating uncertainty and monitoring the performance of neural networks during deployment. Our approach utilizes a _defensive perception envelope_, which is wrapped around a given perception algorithm. Typically, the performance of a neural network is evaluated by comparing its predictions to manually labeled data (ground truth). However, such labeled data is unavailable during online inference in autonomous driving. To address this, our _defensive perception envelope_ indirectly estimates the neural network's performance using Monte Carlo Dropout, enabling real-time performance estimation during deployment. Fig. 1 illustrates the schematic of our proposed framework.
**Training and Validation** The base of our approach is a perception neural network \(\theta_{S}\) that solves a task such as pixel-wise semantic segmentation. The neural network is therefore trained, validated, and released for deployment in the source domain \(S\) with samples \(\mathbf{x}_{S}\) from the said domain.
**Model Deployment** For deployment, the model is modified by inserting dropout layers [1] after every 2D convolution layer. The modification prepares the neural network for the Monte Carlo Dropout approach of our _defensive perception envelope_.
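A minimal PyTorch sketch of this post-training module surgery is shown below; whether element-wise `Dropout` or channel-wise `Dropout2d` is used is an implementation choice we leave open, and the toy network is an illustrative assumption:

```python
import torch

def insert_dropout_after_conv(module, p=0.2):
    """Recursively wrap every Conv2d of a trained network with a dropout layer."""
    for name, child in list(module.named_children()):
        if isinstance(child, torch.nn.Conv2d):
            setattr(module, name,
                    torch.nn.Sequential(child, torch.nn.Dropout2d(p=p)))
        else:
            insert_dropout_after_conv(child, p)
    return module

net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
                          torch.nn.Conv2d(8, 8, 3))
print(insert_dropout_after_conv(net))
```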
**Data with Deployment Shift** During deployment, due to a domain shift, the neural network is prone to encounter samples \(\mathbf{x}_{S+1}\), which are outside of the source domain. There are numerous reasons for a domain shift, including deployment shifts and other ODD shifts that have not been considered during training and validation. The emerging shifts and the resulting potential drop in performance are critical as they occur silently and result in unnoticed catastrophic deployment.
**Defensive Perception Envelope** Monte Carlo Dropout determines the uncertainty \(u_{t}\) for a sample \(\mathbf{x}_{t}\) at time \(t\). The uncertainty is calculated by multiple forward passes \(n\) of the same sample while randomly dropping different weights. The fluctuation in the predictions \(\hat{\mathbf{y}}_{t}\in\{\hat{\mathbf{y}}_{t,1},\dots,\hat{\mathbf{y}}_{t,n}\}\) is mapped to an uncertainty metric (see Subsec. 2.2). The _defensive perception envelope_ is configured to be permissive or stringent by introducing uncertainty thresholds based on the system characteristics. The uncertainty threshold can be based on the uncertainty value distribution computed on the overall data. Further, multiple thresholds enable triggers for multiple stages of defensive reactions, such as system notification, vehicle slow-down, and the safe-state transition.
### Pseudo Cross-Entropy for Uncertainty During Inference
At the core of our uncertainty metric resides the cross-entropy \(CE\) metric. As input, the cross-entropy expects a probability distribution \(\mathbf{q}\) of the prediction vector \(\hat{\mathbf{y}}\), in the form of a normalized exponential function over all predictable classes \(c\in\mathbf{C}\), such as given by a softmax layer,
\[\mathbf{q}_{c}=\frac{e^{\hat{\mathbf{y}}_{c}}}{\sum_{i}^{\mathbf{C}}e^{\hat{ \mathbf{y}}_{i}}}. \tag{2}\]
The entropy \(\mathbf{H}\) of a prediction vector \(\mathbf{q}\) is calculated by multiplication with the true distribution \(\mathbf{p}\). During deployment, a true distribution is not given; thus, the approach makes use of a pseudo ground truth approximation \(\mathbf{p}_{i}\approx\tilde{\mathbf{y}}_{i}^{\prime}\),
\[H(\mathbf{p},\mathbf{q})=-\sum_{i}^{\mathbf{C}}\tilde{\mathbf{y}}_{i}^{\prime }\;log(\mathbf{q}_{i}). \tag{3}\]
Assuming that a true prediction is linked with a single class, the pseudo ground truth is approximated as a one-hot-encoding of the prediction vector \(\mathbf{q}\), resulting in the pseudo cross-entropy \(CE^{\prime}\),
Figure 1: **System Flow Overview** - within an autonomous driving platform supported by a perception stack, the perception model \(\theta_{S}\) is trained and validated for a given source domain \(S\). Deployed in the vehicle the _defensive perception envelope_ generates five outputs \(\mathcal{Y}=\{\hat{\mathbf{y}}_{1},\dots,\hat{\mathbf{y}}_{5}\}\) with Monte Carlo Dropout. Suppose a sample with a domain shift \(\mathbf{x}_{S+1}\) is fed to the perception model \(\theta_{S}\), the uncertainty of the output vectors for this sample rises, and the _defensive perception envelope_ informs the system that it enters a domain with high uncertainty. These notifications are triggered by a pre-selected threshold \(\sigma\).
\[CE^{\prime}=-log(\frac{e^{\hat{\mathbf{y}}_{c}}}{\sum_{i}^{\mathbf{C}}e^{\hat{\mathbf{y}}_{i}}}). \tag{4}\]
In order to deploy the pseudo cross-entropy as an uncertainty measure of a neural network's prediction, further requirements need to be fulfilled:
* The pseudo cross-entropy needs to depict the entropy emerging from multiple forward passes \(T\).
* The entropy needs to be independent of the number of forward passes \(n\).
* Entropy should follow an exponential function to smooth uncertainty for small deviations while upscaling larger deviations.
* For comparability, the range of values needs to be confined to \(CE^{\prime}\in[0,1]\).
To measure the uncertainty over multiple forward passes, the one-hot encoded output vector \(\mathbf{y}_{fwp}\) from each forward pass (\(fwp\)) is taken, and the hits for each class are accumulated. The retrieved vector \(\mathbf{v}_{hC}\) shows the distribution of hits over all classes for the number of applied forward passes. As only the classes predicted in at least one of the forward passes are of interest, any class with \(0\) hits is removed from this vector, yielding \(\mathbf{v}_{hC}\setminus\{0\}\); the resulting set of predicted classes is denoted by \(\mathcal{C}_{h}\).
From this vector, the class with the maximum number of hits is assumed to be the true class; hence a pseudo ground truth \(max(\mathbf{v}_{hC})\) (corresponding to \(\mathbf{p}\) in Eq. (3)) is determined. In the case of two or more classes having the maximum number of hits, the class with the lowest index is taken as the pseudo ground truth, following the implementation of the \(argmax\) function provided by the Python library NumPy [1].
The formula fulfilling the requirements mentioned above is:
\[CE_{u}=1-\frac{\exp(\frac{max(\mathbf{v}_{hC})}{n_{fwp}})}{\sum_{i}^{\mathcal{ C}_{h}}\exp(\frac{\mathbf{v}_{hC}(i)}{n_{fwp}})}. \tag{5}\]
For intuitive readability, the uncertainty measurement is subtracted from one so as to yield a score near zero when the uncertainty of the neural network is low and a value near one when the uncertainty is critical. For semantic segmentation, the classification is done pixel-wise; hence the uncertainty is calculated on every pixel of the given input frame. In order to obtain the frame's overall uncertainty, the mean of all pixels' uncertainties is determined.
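A NumPy sketch of this pixel-wise metric and its frame-level mean follows; the hit counting over \(T\) class maps and all function names are our own illustrative rendering of Equation (5):

```python
import numpy as np

def ce_u(hits, n_fwp):
    """hits: per-class hit counts over n_fwp forward passes, zeros removed."""
    e = np.exp(hits / n_fwp)
    return 1.0 - e.max() / e.sum()   # argmax class acts as pseudo ground truth

def frame_uncertainty(preds, n_classes):
    """preds: (T, H, W) class maps from T forward passes; returns mean CE_u."""
    T = preds.shape[0]
    u = np.empty(preds.shape[1:])
    for i in np.ndindex(*preds.shape[1:]):
        counts = np.bincount(preds[(slice(None),) + i], minlength=n_classes)
        u[i] = ce_u(counts[counts > 0], T)
    return u.mean()                  # frame-level uncertainty

preds = np.random.randint(0, 3, size=(5, 4, 4))
print(frame_uncertainty(preds, n_classes=3))
```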
### Rolling Monte Carlo Dropout
The frame rate of advanced driver assistance systems (ADAS) or autonomous vehicle perception systems must be high enough to allow the system to react in time to its surrounding environment. As a result, consecutive frames in a sequence \(\mathcal{S}\) tend to be similar (see Subsec. 3.3). In order to reduce computational effort and increase the efficiency of the implemented _safety envelope_, we introduce the Rolling Monte Carlo Dropout method.
This method is based on the idea of a sliding window over the sequence \(\mathcal{S}\). Instead of applying Monte Carlo Dropout on a single image multiple times (shown in Fig. 1), the Monte Carlo Dropout is applied to a sequence of consecutive images. Sequential data allows the calculation of the modified categorical cross entropy \(u_{t}\) for the Rolling Monte Carlo Dropout by replacing the number of forward passes \(n_{fwp}\) with the number of images \(n_{img}\) within the stride of the defined sliding window.
\[\mathcal{S}_{t}=\{\mathbf{x}_{t-n}\dots,\mathbf{x}_{t}\} \tag{6}\]
\[u_{t}=\sum_{\mathbf{x}\in\mathcal{S}_{t}}CE_{u}(\mathbf{x}) \tag{7}\]
The uncertainty of our measurement increases when applying the Rolling Monte Carlo Dropout method to a sequence of consecutive images rather than a single image due to the induced aleatoric uncertainty. Hence, the Rolling Monte Carlo Dropout method cannot be applied to an arbitrary number of consecutive images. Instead, the number \(n\) of images in a sequence or window is constrained by the speed range of the ego vehicle and the sensor's sampling rate, influencing the magnitude of the aleatoric uncertainty. Overall, using the Rolling Monte Carlo Dropout improves the efficiency of the _defensive perception envelope_ by reducing the required number of forward passes while maintaining the model's accuracy. The operational capabilities and boundaries are the subject of study in the following experiments (see Sec. 3).
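The sliding-window evaluation can be sketched as below; the generator structure, the dummy predictor, and the stand-in disagreement metric (which in our envelope would be the \(CE_{u}\) of Equation (5), e.g., the `frame_uncertainty` sketch above) are illustrative assumptions:

```python
from collections import deque
import numpy as np

def rolling_uncertainty(frames, predict, uncertainty, n=3):
    """One stochastic forward pass per frame; u_t is computed over the
    sliding window of the last n predictions (Equations (6)-(7))."""
    window = deque(maxlen=n)
    for x in frames:
        window.append(predict(x))                # single dropout pass per frame
        if len(window) == n:
            yield uncertainty(np.stack(window))  # metric over the window

# Toy usage with a dummy per-frame class-map predictor:
frames = [None] * 10
predict = lambda x: np.random.randint(0, 3, size=(4, 4))
disagreement = lambda w: float((w != w[0]).any(axis=0).mean())  # stand-in metric
print(list(rolling_uncertainty(frames, predict, disagreement)))
```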
## 3 Experiments
In this study, we conduct experiments using two datasets: the MNIST dataset and the BDD10K dataset. The MNIST dataset, representing a simple machine learning task, is used to provide a general proof of concept for the proposed metric. The BDD10K dataset, represents a more complex task - semantic segmentation - and serves as a real-world example in the context of autonomous driving.
**MNIST**[1] consists of 70,000 handwritten numbers containing the digits zero through nine and accordingly labeled, suitable for a classification task. MNIST is deployed as a toy problem, being well-known, easy to interpret, and comprehensible.
**BDD10K**[Yu et al., 2020] contains 9,000 images labeled pixel-wise for semantic segmentation. The images are mostly non-sequential and suitable for single-frame prediction only. The dataset is chosen for its diversity, including images from different times of day, weather conditions, and scenarios. This allows for domain shift experiments. Further, there are unlabeled video sequences that can be used for deployment and runtime experiments.
The models for each experiment are only trained on a defined source domain: for the MNIST dataset, this is non-rotated numbers, and for the BDD10K dataset, it is images recorded during the day or labeled as clear. Any data that is not part of the source domain is framed and subsequently interpreted as a domain shift. This data is then only used at inference time for validation and test purposes.
The experiments, particularly those related to semantic segmentation, are designed to demonstrate the validity of the proposed metrics and the methods underlying the _defensive perception envelope_ (see Fig. 1). To apply the proposed approach, the neural network must be trained on explicit classes and should not contain any general unknown or misc class, as proposed in [Zhang and LeCun, 2017]. The base dataset for these experiments is the BDD10K dataset, and the task is semantic segmentation. Training is conducted on NVIDIA P100 GPUs with Intel Xeon E5-2667 v4 CPUs. For details on neural network architecture, implementation, and tooling, see our repository1, including a jupyter notebook kick-start demo.
Footnote 1: Defensive Perception Repository (ours): [https://osf.io/fjxw3/](https://osf.io/fjxw3/)
### How confident should we be about confidence?
In the first experiment, we introduce a domain shift to the MNIST dataset by anticlockwise rotating the given samples in five-degree increments up to a total of 90 degrees. The classifier, which is only trained on the source domain (non-rotated samples), is equipped with dropout layers. The uncertainty (as defined in Subsec. 2.2) of each domain is calculated using Monte Carlo Dropout on 20 forward passes for each sample. Since the labels and correct class for the out-of-domain samples are available, we compare the uncertainty and performance estimation to the prediction error and model confidence derived from the maximum one-hot encoded output vector of the model's prediction.
Both the error and uncertainty increase with the rotation (see Fig. 2). It is worth noting that even though the model's performance on the out-of-domain data drops by over 90%, its confidence merely drops by 14%.
This behavior is substantiated by Spearman's rank correlation coefficients [Spearman, 1904]. While the uncertainty and the model error have a correlation coefficient of \(0.93\), the correlation coefficient of the maximum one hot encoded values with respect to the error is nine percentage points lower.
This supports the findings of Nguyen et al. [Nguyen et al., 2015], stating that a model's confidence does not reliably detect out-of-domain data. On the contrary, our proposed uncertainty metric provides a reliable performance estimation that reflects the model's uncertainty, as evidenced by its strong correlation with the model's error. This is an important finding, as in real-world deployment scenarios, labels are typically not provided during inference, making it difficult to determine a model's performance by means of conventional offline validation.
### Uncertainty is able to depict out-of-domain performance
Our second experiment demonstrates the effectiveness of our proposed performance estimation method in the challenging task of semantic segmentation, where real domain shifts present in the BDD10K dataset are deployed. Specifically, we use the domain shifts from day to night, day to dawn, and clear to rainy or clear to snowy.
To evaluate the performance of our model, we train it solely on the respective source domains (day or clear) and add dropout layers during inference to enable the use of the Monte Carlo Dropout method for our proposed performance estimation. After training, the uncertainty is calculated for every sample of the source and the shifted domain using Monte Carlo Dropout with a dropout rate of 0.2 and five
Figure 2: This graph shows the uncertainty on the MNIST test data anticlockwise rotated up to \(90^{\circ}\), computed with 20 forward passes and a dropout rate of 0.4. The model was trained on the MNIST [LeCun et al., 2010] training data set without any applied rotation.
Figure 3: Figure (a) displays randomly selected images of the BDD10K dataset from the day and night domain. Additionally, the resulting prediction of the model, the corresponding ground truth labels, and the heat map of the uncertainty values are depicted. In (b), the correlations between the model error and the computed uncertainty for the following domains are presented: day (source) - night (out-of-domain), day (source) - dawn (out-of-domain), clear (source) - rainy (out-of-domain) and clear (source) - snowy (out-of-domain). In each experiment, the model was trained solely on the respective source domain (day or clear). The uncertainty was calculated using five forward passes and a dropout rate of \(0.2\). For improved visibility in the plots, the uncertainty values are scaled as follows, the mean error is divided by the mean uncertainty value based on the source domain data, and the ratio is applied as a factor to each uncertainty value.
forward passes per sample. The uncertainty is computed for each pixel.
It can be depicted with a heat map (see Fig. 3) as done here for two randomly selected images, one from the source domain day and the other from the shifted domain night.
On both heat maps (day and night), high uncertainty is present along the edges of the segmented objects and areas. Further, in the night domain, high uncertainty appears in areas that suffer information loss due to the domain's characteristics; this is visible in large parts of the sky and the poorly illuminated drivable space. Pixel class predictions that differ across the multiple forward passes drive the uncertainty up - this provides an example of how a _defensive perception envelope_ detects the night sample as out-of-domain (see the proposed system flow overview visualized in Fig. 1). Based on the quantification of the derived uncertainty value, the system can enact a safety countermeasure.
As the ground truth label for each sample of the source domain and the shifted domain is available, it is possible to calculate the true model error for each sample. Spearman's rank correlation coefficient [14] is calculated over all samples of the source and out-of-domain data to examine the relation between the true model error and our proposed performance estimation. The validation reveals a strong correlation between the model's error and the performance estimation. For the day domain, the correlation coefficient is \(0.68\), while for the clear domain, it is \(0.74\). These source domain correlation coefficients can further serve as a reference for the out-of-domain correlations.
For the out-of-domain data, strong correlations are likewise confirmed: correlation coefficients of \(0.77\) for the night domain, \(0.68\) for the dawn domain, \(0.71\) for the rainy domain, and \(0.66\) for the snowy domain show that the proposed performance estimation reflects the model's error.
It should be highlighted that the presented technique tends to underestimate the prediction error, as the correlation coefficient between the model's error and the uncertainty is below one across all examined domains.
This finding is significant for systems under deployment, as this shows that the novel approach can provide a proxy for model performance without the need for ground truth labels at runtime. Furthermore, our proposed uncertainty estimation method reliably approximates the model's prediction error, particularly on out-of-domain data (as evident in Fig. 3).
### Sequential Data Allows for Compute Efficient Uncertainty Estimation
As employed in the experiments above, the vanilla Monte Carlo dropout method can become computationally expensive as it requires multiple forward passes per frame to determine the model's uncertainty on a given sample (see Fig. 1). Under deployment, perception systems are heavily limited by runtime requirements, facing limited computational resources [10].
To address this issue, we propose a compute-efficient solution by taking advantage of the sequential nature of the input data - as it is known from a camera stream within autonomous vehicles. Furthermore, assuming that the frame rate of the input data is high enough, the differences between subsequent frames can be neglected. Thus, to obtain a sample's uncertainty and performance estimation, instead of multiple forward passes on each frame, we process subsequent frames by applying rolling forward passes (see Rolling Monte Carlo Dropout in Subsec. 2.3).
The model is again trained on the BDD10K training data set, and the experiments are executed on the 20 provided video sequences of the BDD10K. These video sequences are recorded at different locations with various driving velocities. At the same time, the frame rate is constant at 30 frames per second for each video sequence. In addition to verifying the assumption of the similarity of subsequent frames, it is further evaluated how the frame rate and stride of rolling forward passes affect the model's uncertainty. The stride thereby defines the number of images considered to calculate the uncertainty and corresponds to the number of forward passes in the vanilla Monte Carlo Dropout.
For 30 frames per second, the highest possible in our experiment, the uncertainty with three forward passes of the vanilla Monte Carlo Dropout is \(0.022\). In contrast, for three rolling forward passes, the uncertainty at \(30\) frames per second is \(0.033\) - a minor deviation of \(0.011\) in the uncertainty value (see Fig. 4). The slight increase in the uncertainty needs an educated trade-off against the advantage of square savings \(O(n^{2})\to O(n)\) in computing efforts. Accordingly, for the vanilla Monte Carlo dropout, three forward passes must be applied on each of the three consecutive frames, so nine forward passes in total. In contrast, for the Rolling Monte Carlo Dropout, only three forward passes are necessary to cover the three considered frames.
Furthermore, it is shown that the uncertainty is more sensitive to the stride of the Rolling Monte Carlo Dropout than to the number of forward passes of the vanilla Monte Carlo Dropout. For a larger number of forward passes, the uncertainty slightly increases from \(0.032\) (three forward passes) to \(0.035\) (nine forward passes). For an increasing stride, the
uncertainty increases from \(0.033\) (stride of three frames) to \(0.065\) (stride of nine frames) and thus almost doubles.
The frame rate only influences the Rolling Monte Carlo Dropout; it does not affect the vanilla Monte Carlo Dropout, whose uncertainty calculation is based on a single frame. The uncertainty curve for Rolling Monte Carlo Dropout is characteristically decreasing in the frame rate: the lower the frame rate, the higher the uncertainty, as the difference between consecutive frames increases. For a stride of three, the uncertainty increases from \(0.032\) up to \(0.058\), and for a stride of nine, from \(0.065\) to \(0.118\).
In conclusion, our results indicate that by applying Rolling Monte Carlo Dropout on consecutive data with a high frame rate, the computational effort is reduced from \(O(n^{2})\) to \(O(n)\). This finding is of major importance, as computational effort is a very limited resource in autonomous driving systems. Therefore, the following section investigates the effect of the dropout rate on the inference performance.
### There is no need for a pure inference pass
Dropout layers are essential for applying the (Rolling) Monte Carlo Dropout to estimate a model's uncertainty during deployment. Using the novel Rolling Monte Carlo Dropout approach, it is possible to reduce the number of necessary forward passes to only one per frame. As it stands, another forward pass without dropout is still needed to yield the semantic segmentation output of the perception stack. However, in case the dropout does not reduce the quality of the semantic segmentation during inference, the output of the forward pass with Monte Carlo Dropout can directly be used as the output of the perception stack, which would further halve the remaining computational effort.
In order to determine whether dropout affects the model's prediction performance, a model is trained on the day domain of the BDD10K training dataset. Subsequently, the model is validated on the validation data set by comparing different dropout rates against the induced error, see Tab. 1.
The results show that up to a dropout rate of 0.4, the error induced by dropout is smaller than \(1\%\). For a dropout rate of up to \(0.6\), the error is still around \(1\%\). However, the error rises to \(0.34\) for a dropout rate of 0.9. For the here presented data set, the dropout rate within a range of 0.1 to 0.6
\begin{table}
\begin{tabular}{c||c|c|c|c|c|c|c|c|c} dropout rate & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline error & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.22 & 0.22 & 0.23 & 0.25 & 0.34 \\ \end{tabular}
\end{table}
Table 1: The table presents the model’s error based on the applied dropout rate, rounded to two significant decimals. When no dropout is applied - the dropout rate is 0 - the model’s error is \(0.21\). This error serves as benchmark for comparing the impact of different dropout rates on the model’s performance.
Figure 4: This graph compares the influence of the frame rates on the uncertainty. As a baseline (dotted lines), the frame rate independent Monte Carlo Dropout with a different number of forward passes (fwp) is considered; thus, the uncertainty is calculated on only one image. The Rolling Monte Carlo Dropout (solid lines) applies rolling forward passes (r-fwp) and calculates the uncertainty on subsequent frames, which leads to a frame rate dependency. The model for this graph was trained on the BDD10k training set, and the uncertainty was calculated on the provided 20 video sequences from the BDD10k dataset. A fixed dropout rate of 0.2 is used.
can be safely used while maintaining the functionality of the semantic segmentation and, at the same time, reducing the computational effort within the _defensive perception envelope_.
## 4 Conclusion
In this paper, we proposed a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving. Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution. Furthermore, we demonstrated the applicability of our method for multiple different potential deployment shifts relevant to autonomous driving, such as entering domains unknown to the model, e.g., night, dawn, rain, or snow.
Our _defensive perception envelope_ encapsulates the neural network under deployment within an envelope based on the epistemic uncertainty estimation through the Monte Carlo Dropout approach. This approach does not require modification of the deployed neural network and has been shown to guarantee expected model performance. In addition, it estimates a neural network's performance, enabling monitoring and notification of entering domains of reduced neural network performance under deployment.
Furthermore, our envelope is extended by novel methods to improve the application in deployment settings, such as Rolling Monte Carlo Dropout, including reducing compute expenses and confining estimation noise. Finally, by enabling operational design domain recognition via uncertainty, our approach potentially allows for customized defensive perception, safe-state triggers, warning notifications, and feedback for testing or development of the perception stack.
The safety of autonomous vehicles is of paramount importance, and the ability to detect and respond to domain shifts is critical. Our approach shows great potential for application in deployment settings and has the capability to improve the overall safety and performance of autonomous driving systems. By making the source code publicly available, we hope to spark further research in this direction.
## Acknowledgements
We want to thank our fellow researchers at Karlsruhe Institute of Technology and our colleagues at ZF Friedrichshafen AG - in particular, Dr. Jochen Abhau, and apl. Prof. Dr. Markus Reischl.
|
2308.15568 | Over-Squashing in Graph Neural Networks: A Comprehensive survey | Graph Neural Networks (GNNs) revolutionize machine learning for
graph-structured data, effectively capturing complex relationships. They
disseminate information through interconnected nodes, but long-range
interactions face challenges known as "over-squashing". This survey delves into
the challenge of over-squashing in Graph Neural Networks (GNNs), where
long-range information dissemination is hindered, impacting tasks reliant on
intricate long-distance interactions. It comprehensively explores the causes,
consequences, and mitigation strategies for over-squashing. Various
methodologies are reviewed, including graph rewiring, novel normalization,
spectral analysis, and curvature-based strategies, with a focus on their
trade-offs and effectiveness. The survey also discusses the interplay between
over-squashing and other GNN limitations, such as over-smoothing, and provides
a taxonomy of models designed to address these issues in node and graph-level
tasks. Benchmark datasets for performance evaluation are also detailed, making
this survey a valuable resource for researchers and practitioners in the GNN
field. | Singh Akansha | 2023-08-29T18:46:15Z | http://arxiv.org/abs/2308.15568v6 | # Over-Squashing in Graph Neural Networks:
###### Abstract
Graph Neural Networks (GNNs) revolutionize machine learning for graph-structured data, effectively capturing complex relationships. They disseminate information through interconnected nodes, but long-range interactions face challenges known as "over-squashing". This survey delves into the challenge of over-squashing in Graph Neural Networks (GNNs), where long-range information dissemination is hindered, impacting tasks reliant on intricate long-distance interactions. It comprehensively explores the causes, consequences, and mitigation strategies for over-squashing. Various methodologies are reviewed, including graph rewiring, novel normalization, spectral analysis, and curvature-based strategies, with a focus on their trade-offs and effectiveness. The survey also discusses the interplay between over-squashing and other GNN limitations, such as over-smoothing, and provides a taxonomy of models designed to address these issues in node and graph-level tasks. Benchmark datasets for performance evaluation are also detailed, making this survey a valuable resource for researchers and practitioners in the GNN field.
Graph Neural Networks (GNNs), Over-squashing, Over-smoothing, Graph-rewiring
## I Introduction
In recent years, the explosion of data in various domains has led to an increased interest in harnessing the power of graph structures for modeling complex relationships [1, 2, 3, 4, 5]. Graphs, which consist of nodes and edges representing entities and their connections, respectively, have emerged as a fundamental data representation in fields such as social networks [2, 6, 7], recommendation systems [8, 9, 10, 11], biology [12, 13], and more. As the diversity and complexity of graph-structured data grow, so does the demand for advanced tools to analyze and understand these intricate relationships.
This surge in interest has sparked the development of a remarkable class of machine learning models known as Graph Neural Networks (GNNs) [14, 15, 16]. GNNs are a novel approach to learning representations from graph-structured data, enabling us to capture both local and global information of nodes in a unified manner [17, 18]. In essence, GNNs extend the neural network architecture to accommodate graph data, where nodes represent entities and edges denote relationships. This extension opens the door to a multitude of applications, ranging from node classification [19, 20, 21] and link prediction to graph-level tasks like community detection [6, 22] and molecular property prediction [23, 24]. GNNs leverage the underlying graph structure to enable information propagation and aggregation, enabling them to capture intricate patterns that traditional machine learning models struggle to discern.
Notwithstanding their remarkable achievements, GNNs are not immune to certain inherent limitations, including over-smoothing [25, 26], vanishing gradients [27, 28], Out-of-Distribution (OOD) data challenges [29, 30], overfitting [31], and over-squashing [32, 33, 34]. While extensive research has been dedicated to addressing the former issues, the latter--over-squashing--remains relatively underexplored.
Over-squashing is a phenomenon that manifests in tasks requiring the integration of information from distant nodes [32, 35], primarily through edges that serve as bottlenecks within graph data. To put it succinctly, over-squashing denotes the distortion-prone nature of information transfer between nodes that are widely separated [34]. This distortion emerges due to the inherent tension between the limited feature representation capacity of graph embeddings and the exponential growth in the number of neighbors as graphs expand. This interplay often hampers the faithful transmission of distant information.
This survey article aims to provide a comprehensive panorama of this specific limitation. We delve into the intricate nuances of over-squashing, shedding light on its conceptual framework and its implications. Additionally, we meticulously outline the repertoire of methods proposed thus far to grapple with this intricate issue. By presenting a systematic exploration of the landscape, we contribute to a deeper understanding of over-squashing's impact on GNNs and offer insights into the evolving strategies engineered to surmount this challenge.
To summarize, this paper makes the following key contributions:
1. _Pioneering Survey_: This paper serves as the inaugural comprehensive survey on 'over-squashing,' a pivotal limitation in message-passing graph neural networks. It addresses a burgeoning area of interest among researchers.
2. _Systematic Categorization_: We provide a systematic categorization of existing methods, offering a detailed taxonomy that simplifies the understanding of various strategies to mitigate over-squashing.
3. _Benchmark Datasets_: We extensively discuss commonly used benchmark datasets employed for evaluating models in the context of over-squashing, both at the node and graph levels.
4. _Added Value:_ Additionally, this survey explores the interplay of over-squashing with other fundamental GNN limitations, such as 'over-smoothing,' providing a more holistic perspective on the challenges faced in this domain.
These contributions collectively make this paper a valuable resource for researchers and practitioners delving into the intricate domain of over-squashing in Graph Neural Networks.
## II Background
### _Graph Neural Networks_
A GNN is a neural network architecture designed to operate on graph-structured data. The core idea of a GNN is to iteratively aggregate information from neighboring nodes and update the node features through multiple layers.
Consider a graph denoted as \(G=(V,E)\), where \(V\) represents the set of nodes and \(E\) is the set of edges (or links), with \(E\subseteq V\times V\). In the context of Graph Neural Networks (GNNs), the primary objective is to learn effective representations for nodes, links, and even entire graphs. This is achieved through a fundamental process called message-passing, as defined by Gilmer et al. (2017) and elaborated by Zhang et al. (2022). In this process, GNNs iteratively refine node representations using the following equations:
At layer \(l\):
\[h_{u}^{(l)}=COM\Big(h_{u}^{(l-1)},\,AGG\big(\{h_{v}^{(l-1)}:v\in N_{u}\}\big)\Big)\]
Where \(h_{u}^{(l-1)}\) represents the representation of node \(u\) at the \(l-1\)th layer. It is typically initialized with the node's feature at the initial layer. \(N_{u}\) signifies the set of neighbors of node \(u\). \(AGG(\cdot)\) denotes the aggregation function that gathers information from neighboring nodes. \(COM(\cdot)\) is the combination function responsible for integrating aggregated information into the node's representation. By iteratively applying this message-passing mechanism, GNNs refine node representations while considering the relationships with neighboring nodes. This step is crucial for capturing the structural and semantic information within the graph.
Once the node representations are established, GNNs extend their influence to edges (links) and the entire graph. _Link Representations_: The representations of connected nodes are leveraged to derive meaningful representations for the links connecting them. _Graph-Level Representations_: The collective information from all nodes in the graph is distilled using a readout (or pooling) operation. This operation yields a representation of the entire graph, encapsulating its characteristics.
Ultimately, the acquired representations of nodes, links, or entire graphs can be harnessed to address a variety of graph-based tasks. These tasks span different levels of complexity, ranging from node-specific tasks like node classification and link prediction to higher-order tasks involving the entire graph structure.
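To make the message-passing recursion above concrete, the following is a minimal sketch of a single layer, assuming a mean aggregator for \(AGG(\cdot)\) and a linear-plus-ReLU combination for \(COM(\cdot)\); the toy graph, feature sizes, and weight shapes are illustrative choices, not prescribed by any particular GNN.

```
import numpy as np

def mpnn_layer(H, neighbors, W_self, W_agg):
    """One message-passing layer: h_u <- ReLU(h_u W_self + mean_{v in N_u}(h_v) W_agg)."""
    n, d = H.shape
    H_new = np.zeros_like(H)
    for u in range(n):
        # AGG: mean over the neighbor representations of u
        agg = H[neighbors[u]].mean(axis=0) if neighbors[u] else np.zeros(d)
        # COM: combine u's own representation with the aggregated message
        H_new[u] = np.maximum(0.0, H[u] @ W_self + agg @ W_agg)
    return H_new

# toy path graph 0-1-2-3; stacking r such layers yields an r-hop receptive field
neighbors = [[1], [0, 2], [1, 3], [2]]
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
W_self, W_agg = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
H = mpnn_layer(H, neighbors, W_self, W_agg)
```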
### _Over-squashing_
The phenomenon of over-squashing has been described as a challenge arising within Message Passing Neural Networks (MPNNs) when messages traverse through distant nodes. This issue stems from the exponential expansion of a node's receptive field, which results in numerous messages being compressed into fixed-size vectors. Topping et al. (2022)[35] have formally substantiated this occurrence through a sensitivity analysis of the Jacobian matrix of node features. They have partly attributed over-squashing to the presence of edges exhibiting high-negative curvature.
To elaborate, let's consider a receptive field \(B_{r}=\{j\in V:d_{G}(i,j)\leq r\}\) associated with an \(r\)-layer GNN, where \(d_{G}\) signifies the shortest-path distance and \(r\) is a natural number. The Jacobian matrix \(\partial h_{i}^{(r)}/\partial x_{j}\) represents the sensitivity of a node embedding \(h_{i}^{(r)}\) to a specific input feature \(x_{j}\) in node \(j\). Over-squashing can be conceptualized as the inability of \(h_{i}^{(r)}\) to be influenced by \(x_{j}\) at a distance \(r\). Topping et al. [35] have mathematically established that
\[\left\|\frac{\partial h_{i}^{(r+1)}}{\partial x_{j}}\right\|\leq(\alpha\beta)^{r+1}A^{r+1}(i,j)\]
under certain conditions, where \(|\nabla\phi_{l}|\leq\alpha\) and \(|\nabla\psi_{l}|\leq\beta\) for \(0\leq l\leq r\), and \(\phi_{l},\psi_{l}\) are differentiable functions. This inequality highlights how the influence of input features diminishes exponentially with distance \(r\), particularly noticeable when \(|B_{r}|\) grows exponentially.
For instance, in a binary tree where \(d_{G}(i,j)=r+1\), the term \(A^{r+1}(i,j)\) equals \(2^{-1/3}\cdot 2^{-r}\), leading to an exponential decay in node dependence on input features at distance \(r\). This phenomenon is what researchers refer to as the over-squashing of information [34, 35].
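This exponential decay is easy to reproduce numerically. The sketch below (an illustration, not code from [35]) builds the symmetrically normalized adjacency of a complete binary tree and prints the \((i,j)\) entry of its powers for nodes increasingly far from the root; the tree depth and node indexing are our own choices.

```
import numpy as np

def binary_tree_adjacency(depth):
    """Adjacency matrix of a complete binary tree with 2**depth - 1 nodes."""
    n = 2 ** depth - 1
    A = np.zeros((n, n))
    for parent in range((n - 1) // 2):
        for child in (2 * parent + 1, 2 * parent + 2):
            A[parent, child] = A[child, parent] = 1.0
    return A

A = binary_tree_adjacency(depth=6)
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))        # D^{-1/2} A D^{-1/2}

i = 0                                          # the root
for r in range(1, 5):
    j = 2 ** (r + 2) - 2                       # a node at distance r+1 from the root
    P = np.linalg.matrix_power(A_hat, r + 1)
    print(f"distance {r + 1}: entry = {P[i, j]:.6f}")  # shrinks exponentially in r
```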
In [34], the authors tried to answer: (1) what is the impact of width in mitigating over-squashing? (2) can over-squashing be avoided by sufficiently deep models? (3) how does over-squashing relate to the graph spectrum and the underlying topology, beyond curvature bounds that only apply to 2-hop neighbors? The last question is important because several recent works try to combat over-squashing via methods that depend on the graph spectrum [36, 37, 38].
## III Handling over-squashing in graph neural networks(GNNs)
In scenarios where tasks necessitate spanning multiple layers within a network, the depth of the network often mirrors the range of interactions between nodes. Nevertheless, a rising number of layers corresponds to an exponential increase in the number of nodes contributing to the receptive field of each individual node. This amplification leads to the phenomenon of over-squashing [32, 35]. Essentially, over-squashing manifests as a compression of information originating from a receptive field that encompasses numerous nodes. This compression results in fixed-length node vectors, impeding the accurate propagation of messages from distant nodes. This distortion takes shape due to graph bottlenecks that emerge as
the number of \(k\)-hop neighbors undergoes exponential growth with each \(k\).
In a bid to surmount these challenges, the literature has proposed strategies such as graph rewiring [32]. Consider a graph \(G\) with \(n\) nodes, adjacency matrix \(A\), and a mapping function \(R:\mathbb{R}^{n\times n}\rightarrow\mathbb{R}^{n\times n}\). When we say that the graph \(G\) is "rewired" by \(R\), it signifies a transformation where the message exchanges among nodes occur on a graph denoted as \(R(G)\), rather than the original \(G\). In this context, \(R(G)\) is the graph characterized by its adjacency matrix \(R(A)\).
The challenge of over-squashing within Graph Neural Networks (GNNs) has spurred the development of various methodologies, each aiming to alleviate this phenomenon. Broadly, these methods can be categorized into two types of graph rewiring methods, each offering unique insights into the resolution of the over-squashing predicament.
### _Spatial Graph Rewiring Methods:_
**Curvature-Based Rewiring and Comprehensive Topological Analysis:** Topping et al. [35] and Di Giovanni et al. [34] contributed insights into the origins of over-squashing, its topological implications, and the influence of GNN design choices.
**SDRF** Topping et al. [35] proposed a novel approach, Stochastic Discrete Ricci Flow (SDRF), for mitigating the pervasive issue of over-squashing in Graph Neural Networks (GNNs) through a curvature-based graph rewiring procedure. The crux of this methodology lies in its treatment of graph edges based on their curvature properties. Specifically, edges exhibiting negative curvature, indicative of potential sources of over-squashing, become the focal point of attention. By constructing supplementary connections tailored to support these edges, the proposed rewiring process adeptly combats the adverse effects of over-squashing.
**FA** Alon and Yahav [32] introduced a graph rewiring method that adds a fully-adjacent matrix in the last GNN layer to mitigate over-squashing. The approach incorporates a fully-adjacent (FA) layer in the GNN; it is simple to implement and can be coupled to any existing GNN architecture.
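As an illustration of how lightweight this rewiring is in practice, here is a hedged sketch (not the authors' code): a mean-aggregation GCN-style model runs every layer on the original adjacency except the last, which runs on the all-ones adjacency \(R(A)\).

```
import numpy as np

def gcn_layer(A, H, W):
    """Mean-aggregation layer with self-loops: ReLU(D^{-1} (A + I) H W)."""
    A_hat = A + np.eye(len(A))
    H_agg = (A_hat @ H) / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, H_agg @ W)

def gnn_with_fa(A, H, weights):
    """All layers on the original graph except the final fully-adjacent (FA) layer."""
    for W in weights[:-1]:
        H = gcn_layer(A, H, W)                  # original topology
    A_fa = np.ones_like(A) - np.eye(len(A))     # R(A): every pair of nodes connected
    return gcn_layer(A_fa, H, weights[-1])      # FA layer sees all-pairs edges

rng = np.random.default_rng(0)
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # a path graph with a long-range pair
    A[u, v] = A[v, u] = 1.0
H = rng.normal(size=(5, 4))
out = gnn_with_fa(A, H, [rng.normal(size=(4, 4)) for _ in range(3)])
```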
**BORF** Recently, Nguyen et al. [39] introduced a novel rewiring technique known as Batch Ollivier-Ricci Flow (BORF), which harnesses the power of Ollivier-Ricci curvature to address the interrelated challenges of over-smoothing and over-squashing in Graph Neural Networks (GNNs). BORF operates in batches and begins by identifying two sets of edges in each batch: the \(h\) edges with minimal curvature and the \(k\) edges with maximal curvature. By focusing on edges with low and high curvature values, respectively, BORF aims to simultaneously mitigate over-smoothing and over-squashing. It then optimizes the graph's connectivity by adding connections to the minimally curved edges, ensuring efficient communication between distant nodes; this alleviates over-squashing. To minimize computational overhead, BORF reuses previously calculated optimal transport plans for edge addition. Additionally, BORF removes the maximally curved edges to prevent over-smoothing, as these can lead to excessive smoothing of node features. Furthermore, the algorithm's flexibility allows it to operate as a net edge addition, subtraction, or net-zero rewiring, providing adaptability to different data characteristics. BORF effectively balances these two key challenges, enhancing the performance of GNNs in graph-related tasks.
**GTR** Within the realm of addressing over-squashing in Graph Neural Networks (GNNs), Black et al. [40] have conducted a comprehensive analysis by investigating the phenomenon through the lens of commute time between node pairs. They proposed Greedy Total Resistance (GTR) rewiring, a method to minimize the total resistance. Effective resistance offers an alternative metric for evaluating bottlenecks within graph topology [41]. This measure quantifies the level of resistance between two nodes in proportion to their commute time. Commute time represents the expected number of steps required for a random walk to traverse back and forth between nodes within the graph. In essence, high resistance between two nodes indicates a greater difficulty for messages to traverse from node \(i\) to node \(j\). Black et al. [40] have established a sensitivity bound which links elevated effective resistance between pairs of nodes to a reduced sensitivity of the representations \(h_{i}^{(r+1)}\) with respect to input features \(x_{j}\). Furthermore, it is important to note that effective resistance exhibits an inverse relationship with the square of the Cheeger constant.
In a parallel vein, Di Giovanni et al. [34] have undertaken similar methodologies, ultimately converging on a shared conclusion. Their findings underline the pivotal role of effective resistance in influencing the degree of over-squashing within GNNs. Furthermore, the work by Di Giovanni et al. [34] extends beyond a singular focus on effective resistance. They delve into the impact of GNN architecture's width and depth on the occurrence of over-squashing. This comprehensive analysis probes into how various dimensions of GNN design interplay with the manifestation of over-squashing, enriching our understanding of this intricate phenomenon.
In their work, Di Giovanni et al. [42] build upon their previous findings and concentrate on two pivotal factors: the network's architecture, characterized by weight norms and depth, and the intrinsic graph structure, evaluated using commute times. In doing so, they establish upper limits on the ability of Message Passing Neural Networks (MPNNs) to efficiently integrate features. Significantly, they introduce the notion of "over-squashing," which is fundamentally linked to MPNNs' maximum node mixing capacity and operates inversely to it.
**DRew** Gutteridge et al. [43] argue that while some rewiring approaches attempt to enhance connectivity for long-range tasks, they often sacrifice the inductive bias provided by graph distance by enabling instant communication between distant nodes at every layer. To tackle these issues, a layer-dependent rewiring technique that gradually densifies the graph is proposed in [43]. A delay mechanism that facilitates skip connections based on node distance and layer is also introduced, so that the graph's inductive bias is preserved.
### _Spectral Graph Rewiring Methods:_
To explain graph rewiring in the context of the graph spectrum, we first relate the connectedness of a graph to the eigenvalues of the graph Laplacian. The connectedness of a graph \(G\) can be measured via a quantity known as the Cheeger constant, denoted as \(h_{Cheeg}\) and defined as follows:
\[h_{Cheeg}=\min_{U\subset V}\frac{|\{(u,v)\in E:u\in U,\,v\in V\setminus U\}|}{\min(vol(U),\,vol(V\setminus U))}\]
Here, \(vol(U)\) represents the volume of set \(U\) and is calculated as the sum of degrees of nodes \(u\in U\).
The Cheeger constant, \(h_{Cheeg}\), essentially quantifies the energy required to divide graph \(G\) into two separate communities. A smaller \(h_{Cheeg}\) implies that \(G\) tends to have two communities with only a few connecting edges. In such cases, over-squashing is more likely to occur when information needs to traverse from one community to another. It's important to note that while computing \(h_{Cheeg}\) is generally a complex task, the Cheeger inequality provides a useful relationship: \(h_{Cheeg}\) is approximately proportional to the smallest positive eigenvalue of the graph Laplacian.
In light of this relationship, some recent approaches have proposed selecting a rewiring strategy that depends on the spectrum of \(G\). The goal is to generate a new graph \(R(G)\) that satisfies \(h_{Cheeg}(R(G))>h_{Cheeg}(G)\). This strategy has been explored in the works [36, 37, 38]. The underlying assumption is that propagating messages over the rewired graph \(R(G)\) can mitigate over-squashing. However, it's important to note that this claim lacks formal analytical proof at this stage.
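As a concrete (purely illustrative) view of the quantity these spectral methods manipulate, the sketch below computes the smallest positive eigenvalue of the normalized Laplacian for a two-community graph with a single bridge, before and after adding one edge across the bottleneck; the edge choice here is hand-picked, not selected by any of the cited algorithms.

```
import numpy as np

def spectral_gap(A):
    """Smallest positive eigenvalue of L = I - D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(deg, deg))
    return np.sort(np.linalg.eigvalsh(L))[1]   # eigenvalue 0 comes first for a connected graph

# two triangles joined by one bridge edge: a classic bottleneck
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

print("gap before rewiring:", spectral_gap(A))
A[0, 5] = A[5, 0] = 1.0                        # add one edge across the bottleneck
print("gap after rewiring :", spectral_gap(A))
```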
**Augmenting the Spectral Gap:** The prevailing strategy in mitigating over-squashing has largely revolved around increasing the spectral gap of the graph, specifically targeting the smallest eigenvalue of the Laplacian matrix. Intuitively, the spectral gap is linked to the presence of bottlenecks within graphs, as elucidated by the Cheeger inequality [55]. Consequently, augmenting the spectral gap serves to reduce these bottlenecks, fostering a smoother flow of information. Various strategies have emerged to increase the spectral gap, encompassing methods such as edge addition [38], edge flipping [47], edge reweighting [36], or the utilization of expanders to perform specific GNN layers [37]. These approaches seek to fine-tune the graph's structural characteristics to mitigate over-squashing while recognizing the pivotal role played by the spectral gap in this intricate balance.
**DiffWire** Arnaiz et al. [36] introduced a unified approach that bridges the concepts of commute time and graph spectral gap. This approach comprises two distinct layers within a Graph Neural Network (GNN). The first layer is a differentiable, parameter-free component designed to learn the commute time, while the second layer, known as the rewiring layer, optimizes the spectral gap based on the specific characteristics of the network and the task at hand. This integrated framework empowers the GNN to adaptively learn and apply rewiring strategies, effectively alleviating the challenges associated with over-squashing while considering the nuances of the graph structure and task requirements.
**EGP Model** Deac et al. [37] introduced the Expander Graph Propagation (EGP) model for graph classification tasks. Their approach leverages expander graphs to tackle bottlenecks in global information propagation within the graph. In graph classification, it's essential to compute node features that consider both local interactions within their neighborhood and the broader global context of the graph structure. Deac et al. achieved this by adding one layer of EGP after each layer of GNN utilizing Cayley graphs to construct efficient expander graphs of a specified size. The EGP model is designed to enhance connectivity for long-range tasks, ensuring efficient communication between distant nodes.
**RLEF** The authors of [47] leverage expander graphs to address bottlenecks in the global propagation of information within a graph. Their method introduces two local graph rewiring algorithms: the Random Local Edge Flip (RLEF) and the Greedy Random Local Edge Flip (G-RLEF). These algorithms operate by adding and removing edges at specific locations while preserving the node degrees and overall connectivity of the graph. This framework provides a robust foundation for conducting a comprehensive analysis of the information decay that arises due to over-squashing in Graph Neural Networks (GNNs); the authors clarify how an information percolation bound serves as an effective means to encapsulate the core concept of over-squashing. The primary objective of employing these techniques is to enhance connectivity for long-range tasks by ensuring efficient and effective communication between nodes that are far apart within the graph structure.
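The primitive underlying such edge-flip rewiring is a degree-preserving swap of two disjoint edges. A bare-bones sketch is given below; the curvature- and expansion-aware acceptance logic of RLEF and G-RLEF is omitted, so this shows only the mechanical step.

```
import random

def local_edge_flip(edges, rng=random):
    """Degree-preserving flip: replace (u, v) and (x, y) with (u, x) and (v, y)."""
    edge_set = set(edges)
    (u, v), (x, y) = rng.sample(edges, 2)
    if len({u, v, x, y}) < 4:
        return edges                             # edges share an endpoint: skip
    for e in [(u, x), (x, u), (v, y), (y, v)]:
        if e in edge_set:
            return edges                         # flip would duplicate an edge: skip
    edge_set -= {(u, v), (x, y)}
    edge_set |= {(u, x), (v, y)}
    return list(edge_set)

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
edges = local_edge_flip(edges)                   # node degrees are unchanged
```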
**Trade-off between over-smoothing and over-squashing** While these rewiring methods aim to enhance graph connectivity, they come with certain drawbacks, particularly when excessive modifications are made to the input graph. One prominent concern is the loss of valuable topological information inherent to the original graph. When we introduce extensive changes by adding or removing edges, it can diminish the relevance of the original graph's structural characteristics for the given task. Additionally, the act of adding edges has a smoothing effect on the graph. If we introduce an excessive number of edges to the input graph, a standard Graph Convolutional Network (GCN) may encounter a common issue known as over-smoothing, as highlighted by Li et al. in 2018. In simpler terms, when we opt for this straightforward rewiring approach, we find ourselves facing a trade-off between addressing over-squashing and dealing with the problem of over-smoothing.
## IV Unifying approaches for over-squashing and over-smoothing
Certain methodologies have emerged that tackle the intertwined challenges of over-smoothing and over-squashing in
unison, establishing an interconnected relationship between these fundamental limitations within graph neural networks.
**SJLR** Giraldo et al. [33] established a profound connection between over-smoothing and over-squashing and the spectral gap of the graph Laplacian in Graph Neural Networks (GNNs). Their work revealed how these challenges are intricately linked to the spectral gap of the normalized Laplacian matrix, unveiling a noteworthy trade-off illuminated by the Cheeger inequality. In response to these challenges, Giraldo et al. introduced the Stochastic Jost and Liu Curvature Rewiring (SJLR) algorithm, a notable departure from previous curvature-based techniques [35, 38, 47]. SJLR stands out for its computational efficiency and its ability to preserve essential graph properties. One distinctive feature of the SJLR algorithm is its dynamic capability to add and remove edges during the training phase of Graph Neural Networks (GNNs) while maintaining the fundamental graph structure unaltered during the testing phase. This adaptability sets SJLR apart as a promising approach to address the intricate challenges posed by over-smoothing and over-squashing in GNNs.
**MHKG** The study described in [50] takes on the persistent challenges that have plagued the performance of Graph Neural Networks (GNNs), notably over-smoothing, over-squashing, and limited expressive capabilities. Drawing inspiration from physics, the authors employ a novel approach, reversing the direction of the graph heat equation, which substantially sharpens node features. They introduce the Multi-Scaled Heat Kernel based GNN (MHKG), which amalgamates diverse filtering functions to counter these issues. Generalizing MHKG into G-MHKG, they provide an in-depth analysis of its components' roles in controlling over-smoothing, over-squashing, and expressive power. Notably, they uncover a trade-off between over-smoothing and over-squashing, wherein enhancing node feature sharpness may lead to heightened over-squashing, and vice versa. G-MHKG effectively handles these challenges in the graph spectral domain through controlled manipulation of time.
**FoSR** Karhadkar et al. [38] proposed empirical solutions to mitigate both over-smoothing and over-squashing. While acknowledging the trade-off between these issues, their method primarily involves edge addition. The authors introduce a novel rewiring method called FoSR (First-order Spectral Rewiring) with the objective of optimizing the spectral gap of the graph input to the GNN. This algorithm meticulously computes the first-order change in the spectral gap resulting from the addition of each edge and subsequently selects the edge that maximizes this change. Within this framework, the authors propose a comprehensive approach, which not only introduces this innovative rewiring method but also incorporates a relational Graph Neural Network (GNN) to leverage these rewired edges effectively. This GNN operates on the transformed graph, where the relationships within the
network indicate whether each edge was originally part of the input graph or added during the rewiring process. This integrated strategy ensures the preservation of the input graph's underlying topology while utilizing newly added edges to enhance its overall connectivity.
**CurvDrop** Liu et al. [44] addressed both problems by focusing on edge removal based on curvature metrics. They devised a curvature-based, topology-aware dropout-sampling technique, CurvDrop, which integrates discrete Ricci curvature into GNNs for more expressive graph models. Drawing inspiration from the geometric analogy of Ricci curvature, Liu et al. established a compelling relationship between the Ricci curvature of an edge and the spectral gap. They harnessed this insight to address the challenges of over-smoothing and over-squashing by introducing a sampling layer driven by Ricci curvature. This sampling layer selectively drops a portion of edges with low Ricci curvature at each GNN layer, effectively mitigating the issues associated with over-smoothing and over-squashing.
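For intuition, the sketch below ranks edges by the simple unweighted Forman-Ricci curvature \(F(u,v)=4-\deg(u)-\deg(v)\) (a cruder, triangle-free stand-in for the discrete Ricci curvature used by CurvDrop) and deterministically drops the most negatively curved fraction, whereas CurvDrop itself samples edges layer-wise; the quota and toy graph are our assumptions.

```
def forman_curvature(edges, deg):
    """Unweighted, triangle-free Forman-Ricci curvature: F(u, v) = 4 - deg(u) - deg(v)."""
    return {(u, v): 4 - deg[u] - deg[v] for (u, v) in edges}

def curvature_drop(edges, deg, drop_frac=0.2):
    """Drop the drop_frac most negatively curved edges (deterministic variant)."""
    curv = forman_curvature(edges, deg)
    ranked = sorted(edges, key=lambda e: curv[e])   # most negative curvature first
    k = int(drop_frac * len(edges))
    return ranked[k:]

edges = [(0, 1), (1, 2), (1, 3), (3, 4)]
deg = {0: 1, 1: 3, 2: 1, 3: 2, 4: 1}
print(curvature_drop(edges, deg, drop_frac=0.25))   # removes the lowest-curvature edge (1, 3)
```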
**CurvPool** CurvPool is a novel graph pooling technique designed by Sanders et al. in [52] to tackle over-smoothing and over-squashing issues in Graph Neural Networks (GNNs) during graph classification tasks. It relies on the Balanced Forman curvature (BFC) to identify critical structures in the graph that contribute to these problems. This method calculates curvature values for each edge and employs a criterion to group nodes into clusters, ensuring that nodes with similar curvature profiles are pooled together. The resulting node clusters are transformed into new nodes in the pooled graph, and node representations within each cluster are aggregated using operators like mean, sum, or maximum. To retain the original graph structure, CurvPool remaps old edges to the new node clusters. By leveraging graph curvature to guide the pooling process, CurvPool effectively balances over-smoothing and over-squashing, ultimately improving the performance of GNNs in graph classification tasks. It offers adaptability to various data characteristics while maintaining computational efficiency and effectiveness.
**CBED** Inspired by the geometric analogy of Ricci curvature, a curvature-based edge dropping algorithm known as Curvature-Based Edge Dropping (CBED) is introduced in the work by Dai Shi et al. [48]. This innovative approach strategically removes edges with the highest positive curvature. By doing so, it aims to enhance the model's adaptability to graphs characterized by heterophily and, in the process, alleviate the issue of over-smoothing.
**PowerEmbed** Huang et al. have introduced a pioneering normalization technique called PowerEmbed in their work [45] aimed at mitigating the challenges of over-smoothing and over-squashing in graph neural networks. PowerEmbed employs a layer-wise normalization approach that empowers message-passing neural networks (MPNNs) to effectively express the top-k eigenvectors of a graph while capturing crucial global spectral information. Remarkably, this technique exhibits adaptability by remaining agnostic to the specific topology of the graph, rendering it suitable for graphs characterized by both homophily and heterophily. Moreover, the authors seamlessly integrated PowerEmbed with an inception network. This synergistic combination is engineered to facilitate the learning of comprehensive representations, allowing for a seamless transition from local message-passing features to the incorporation of essential global spectral information. Notably, this strategic amalgamation is endowed with a provable capability to preemptively mitigate the challenges associated with over-smoothing and over-squashing.
**DGN** Beaini et al. introduced Directional Graph Networks (DGN) to combat over-squashing in Graph Neural Networks (GNNs). Over-squashing hinders GNNs, causing issues like over-smoothing and reduced discriminative power. They contend that over-squashing occurs in GNNs due to their incapacity to capture directional information within graphs, which constrains their grasp of graph structures and feature transformations. To resolve this, Beaini et al. presented globally consistent anisotropic kernels for GNNs, enabling them to incorporate directional flows based on graph topology. Their approach employs vector fields within the graph, utilizing low-frequency eigenvectors to define directional flows at each node. Many GNNs are insensitive to the order of neighbor features, causing multiple layers to focus on simple changes rather than learning higher-level features, contributing to over-squashing. In summary, Beaini et al.'s DGN model, through globally consistent anisotropic kernels and directional information, effectively addresses over-squashing. This empowers GNNs to comprehend local graph structures, perform meaningful feature transformations, and mitigate over-squashing's adverse effects.
## V Graph Transformers and other GNN Architectures
Graph transformers have gained substantial attention as an alternative approach to combating over-smoothing and over-squashing in the context of graph and computer vision domains [56, 57, 58]. This approach leverages the inherent strengths of transformer architectures:
**Over-smoothing Resilience:** Ying et al. [59] observed that transformers are less susceptible to over-smoothing compared to traditional Graph Neural Networks (GNNs). Their ability to model graph data efficiently contributes to mitigating the over-smoothing problem.
**Over-squashing Resilience:** Kreuzer et al. [60] highlighted the resilience of transformers to over-squashing. Transformers establish direct paths connecting distant nodes, which alleviates the over-squashing challenge.
However, it's worth noting that transformers have limitations, including significant computational and memory requirements due to the need for every node to attend to all others. This can make them less suitable for large-scale graph applications and may result in improper training leading to a blend of local and non-local interactions.
**Graph ViT/MLP-Mixer:** Xiaoxin et al. [61] introduce a novel approach as an alternative to global attention mechanisms. This approach draws inspiration from the ViT and MLP-Mixer architectures initially introduced in computer vision. The resulting "graph ViT/MLP-Mixer" GNNs excel in capturing long-range dependencies while effectively mitigating over-squashing issues. They offer improved computational efficiency, speed, and memory advantages compared to existing models.
Gabrielsson et al. [62] employ Transformer-inspired positional encoding techniques within a modified graph framework to effectively extend the receptive field of each node in Graph Neural Networks (GNNs) to encompass \(r\)-hop neighborhoods. The approach expands the receptive fields by introducing modifications to the graph structure and incorporating positional encodings as both edge and node features. This method differs from conventional graph transformers, which often replace the original graph topology with a complete graph and blend local and global information. Instead, Gabrielsson et al.'s approach facilitates the gradual expansion of the receptive field, allowing nodes to capture inductive biases by spanning from the 1-hop neighborhood to \(r\)-hop neighborhoods. This strategic extension of the receptive field is designed to mitigate the challenges associated with over-squashing in GNNs.
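A minimal sketch of the graph-side step -- collecting each node's \(r\)-hop neighborhood by breadth-first search and tagging every reachable pair with its hop distance, which can then feed a relative positional encoding -- is shown below; this illustrates the idea only and is not the authors' implementation.

```
from collections import deque

def r_hop_rewire(neighbors, r):
    """Return {(u, v): d}: all nodes v within d <= r hops of u, keyed by hop distance d."""
    out = {}
    for s in range(len(neighbors)):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue
            for v in neighbors[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    out[(s, v)] = dist[v]       # hop distance as a relative positional tag
                    queue.append(v)
    return out

neighbors = [[1], [0, 2], [1, 3], [2]]          # path 0-1-2-3
print(r_hop_rewire(neighbors, r=2))             # 1-hop edges plus new 2-hop pairs like (0, 2)
```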
**PASTEL** In their recent paper [53], Qingyun et al. tackle the issue of over-squashing in Graph Neural Networks (GNNs) by highlighting its association with topology imbalance. To combat this problem, they introduce PASTEL (Position-Aware STructurE Learning). They redefine topology imbalance in terms of under-reaching and over-squashing and establish two quantitative metrics to evaluate these issues. PASTEL aims to enhance the intra-class connectivity of nodes in GNNs by optimizing information propagation paths. To achieve this, they employ an anchor-based position encoding mechanism to capture the relative positions of unlabeled nodes concerning labeled nodes. Additionally, a class-wise conflict measure, utilizing Group PageRank, quantifies the influence of labeled nodes from different classes, guiding edge weight adjustments to boost intra-class connectivity. PASTEL's contributions include a novel perspective on topology imbalance, improved modeling of node relationships through position encodings, and demonstrated effectiveness across diverse data annotation scenarios.
**A-DGN** The Anti-Symmetric Deep Graph Network (A-DGN) is introduced by Gravina et al. in [51] as an innovative framework tailored to address the challenge of long-term information propagation in Deep Graph Networks (DGNs). This approach is devised by leveraging principles from ordinary differential equations (ODEs) and their connection to deep neural architectures. Gravina et al. establish theoretical conditions under which a stable and non-dissipative ODE system can be realized on graph structures, utilizing anti-symmetric weight matrices. Within the A-DGN framework, the A-DGN layer is formulated through the forward Euler discretization of the obtained graph ODE. This process enforces specific properties on the ODE system, resulting in the preservation of long-term dependencies between nodes within the graph and alleviating the problem of over-squashing in GNNs. Additionally, it mitigates issues related to gradient explosion or vanishing during the training process.
**RFGNN** In their paper [54], Rongqin et al. address the challenge of over-squashing in Graph Neural Networks (GNNs) by identifying its connection to message redundancy during the aggregation process. They observed that conventional GNNs often struggle to efficiently propagate long-length path information, which limits their capacity to learn graph similarities and support long-range interactions. To tackle this issue, they propose the Redundancy-Free Graph Neural Network (RFGNN). RFGNN utilizes a path-search-tree concept, constructed through breadth-first search, to eliminate
redundancy in message propagation. This ensures efficient information transmission without over-squashing. They also introduce the notion of extended paths (epaths) to capture complex graph structures and implement truncated ePaths trees (TPTs) for message-passing. RFGNN's de-redundancy technique balances epath influence, effectively mitigating over-squashing and enhancing GNNs' ability to capture structural information in original graphs. In summary, RFGNN improves structural information propagation by efficiently aggregating information through path-search trees, avoiding redundancy. The reduction of redundancy plays a crucial role in addressing the over-squashing challenge.
**GESN** The Graph Echo State Network (GESN), introduced by Tortorella and Mechelli in their work [46], represents a reservoir computing model designed to address specific challenges in node classification tasks, particularly within heterophilic graphs where nodes from the same class or category are typically positioned farther apart from each other, resulting in a lower density of intra-class edges. One distinctive feature of GESN is its training-free nature. This aspect is intriguing considering that GESN is a reservoir computing model. Reservoir computing models typically involve an untrained reservoir, which acts as a fixed, random structure to process input data. This training-free characteristic makes GESN an efficient and effective solution for node classification tasks, offering a promising approach to mitigate the issues of long-range message passing and over-squashing in heterophilic graphs.
Each of these approaches offers a unique perspective and set of techniques to address the challenges of over-squashing in graph-based machine learning models.
## VI Datasets
The common datasets employed for node and graph classification tasks in the models listed in Table I are presented in Table II, along with detailed dataset statistics. It's important to note that this list is not exhaustive, as there are numerous other datasets, including synthetic and large-scale real-world ones, utilized for various research purposes. Table II displays the statistics of the datasets used in this study, where \(H(G)\) represents the graph's homophily, as defined in [63], calculated as
\[H(G)=\frac{1}{|V|}\sum_{v\in V}\frac{\#v\text{'s neighbors with the same label as }v}{\#v\text{'s neighbors}}\]
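A direct implementation of this homophily measure over an adjacency list might read as follows; this is an illustrative sketch, and the convention that isolated nodes contribute zero is our assumption.

```
def homophily(neighbors, labels):
    """H(G): average over nodes of the fraction of neighbors sharing the node's label."""
    total = 0.0
    for v, nbrs in enumerate(neighbors):
        if nbrs:  # isolated nodes contribute 0 under this convention (assumption)
            total += sum(labels[u] == labels[v] for u in nbrs) / len(nbrs)
    return total / len(neighbors)

# 4-cycle with labels [0, 0, 1, 1]: each node has one same-label neighbor out of two
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(homophily(neighbors, [0, 0, 1, 1]))  # 0.5
```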
For node classification tasks, we employ a diverse set of 12 datasets, encompassing graphs of varying sizes and characteristics.
Cora, CiteSeer, PubMed are paper citation networks. Node features are represented as bag-of-words from paper content, and the task is to classify the research topics. These datasets are characterized by high homophily. Film is constructed based on the co-occurrences of actors on the same Wikipedia page, categorized into five groups. It serves as a node classification task with a low homophily nature. TwitchDE constitutes a social network comprising German gamer accounts from Twitch, categorized as suitable-for-work or adult profiles. The task involves classifying the profiles. Tolokers is a collaboration network originating from the crowdsourcing platform Toloka. The objective here is to determine whether a user is active or not. Due to class imbalance, the evaluation metric is the area under the ROC curve. Cornell, Texas, Wisconsin are additional node classification tasks from the WebKB collection of university web pages, where nodes are pages, edges are hyperlinks, and the task is to classify the pages into categories. Node features and specific targets for these datasets can vary. Chameleon, Squirrel, Actor are further node classification benchmarks. Chameleon and Squirrel are Wikipedia page-page networks on their respective topics, with edges representing mutual links between pages. The Actor dataset is an actor co-occurrence network, with edges linking actors that appear on the same Wikipedia page. Each of these datasets presents unique characteristics and classification tasks.
For graph classification tasks, we utilize the following datasets: NCI-1 and NCI-109 datasets involve classifying molecules as cancerous or non-cancerous. Node input features are one-hot encodings of atom types, and edges represent chemical bonds. Reddit-B, Reddit-5K, Reddit-12K capture interactions between users in Reddit discussion threads. The task is to determine the type of subreddit a discussion belongs to. Collab comprises ego-networks from three distinct scientific collaboration fields. Unlike the previous datasets, the Reddit tasks and Collab do not have node input features. Enzymes is a bioinformatics dataset for graph classification. It involves classifying enzymes based on their structures and functions. The BZR dataset is a small molecule dataset used for graph classification tasks. It is commonly employed for evaluating graph-based machine learning algorithms. MUTAG is another bioinformatics dataset for graph classification, primarily used for evaluating chemical informatics algorithms. The task is to predict mutagenicity. PTC is a bioinformatics dataset for graph classification, focusing on carcinogenicity prediction. The graphs represent chemical compounds. COX2 is a small molecule dataset, often used to assess graph-based machine learning models in chemistry-related tasks. The classification task is centered around predicting the inhibition of the COX-2 enzyme. Proteins is a bioinformatics dataset used for graph classification. The task is to classify proteins based on their functions.
In all these tasks, we intentionally avoid introducing structural input features such as node degrees or positional encodings. A summary of relevant dataset statistics is provided in Table II for reference.
## VII Conclusion
This survey has delved into the depths of over-squashing, unearthing its origins in information compression across distant nodes. We've journeyed through a diverse array of strategies aimed at mitigating its impact - from innovative graph rewiring methods and curvature-based approaches to spectral techniques and the promise of graph transformers. As we tread this path, a nuanced interplay between over-smoothing and over-squashing has come into focus, demanding a balanced resolution. This exploration stands as a testament to the ongoing dialogue among researchers, driven by the pursuit of more refined and capable Graph Neural Networks. In closing, the quest to unravel over-squashing continues to be a beacon guiding our pursuit of more effective models, driven by the dynamic nature of graph data.
## Acknowledgment
I extend my heartfelt appreciation to Dr. Karmvir Singh Phogat for providing invaluable insights and essential feedback on the research problem explored in this article. His thoughtful comments significantly enriched the quality and lucidity of this study.
|
2301.04608 | Padding Module: Learning the Padding in Deep Neural Networks | During the last decades, many studies have been dedicated to improving the
performance of neural networks, for example, the network architectures,
initialization, and activation. However, investigating the importance and
effects of learnable padding methods in deep learning remains relatively open.
To mitigate the gap, this paper proposes a novel trainable Padding Module that
can be placed in a deep learning model. The Padding Module can optimize itself
without requiring or influencing the model's entire loss function. To train
itself, the Padding Module constructs a ground truth and a predictor from the
inputs by leveraging the underlying structure in the input data for
supervision. As a result, the Padding Module can learn automatically to pad
pixels to the border of its input images or feature maps. The padding contents
are realistic extensions to its input data and simultaneously facilitate the
deep learning model's downstream task. Experiments have shown that the proposed
Padding Module outperforms the state-of-the-art competitors and the baseline
methods. For example, the Padding Module has 1.23% and 0.44% more
classification accuracy than the zero padding when tested on the VGG16 and
ResNet50. | Fahad Alrasheedi, Xin Zhong, Pei-Chi Huang | 2023-01-11T18:03:57Z | http://arxiv.org/abs/2301.04608v1 | # _Padding Module_: Learning the Padding in Deep Neural Networks
###### Abstract
During the last decades, many studies have been dedicated to improving the performance of neural networks, for example, the network architectures, initialization, and activation. However, investigating the importance and effects of learnable padding methods in deep learning remains relatively open. To mitigate the gap, this paper proposes a novel trainable _Padding Module_ that can be placed in a deep learning model. The _Padding Module_ can optimize itself without requiring or influencing the model's entire loss function. To train itself, the _Padding Module_ constructs a ground truth and a predictor from the inputs by leveraging the underlying structure in the input data for supervision. As a result, the _Padding Module_ can learn automatically to pad pixels to the border of its input images or feature maps. The padding contents are realistic extensions to its input data and simultaneously facilitate the deep learning model's downstream task. Experiments have shown that the proposed _Padding Module_ outperforms the state-of-the-art competitors and the baseline methods. For example, the _Padding Module_ has 1.23% and 0.44% more classification accuracy than the zero padding when tested on the VGG16 and ResNet50.
Padding Module, Deep Learning, Neural Networks, Trainable Padding
## I Introduction
Deep Neural Networks (DNNs) have significantly improved the performance of a wide range of computer vision tasks, in many domains reaching or exceeding human-level performance [1], such as image classification [2], object recognition [3], and image segmentation [4]. DNNs for computer vision have been iteratively improving in different aspects such as network architecture [5, 6, 7, 8], network initialization [9, 10], optimization [11, 12], and activation [13, 14]. While it is intuitive that the salient foreground of an input image can control the results of a deep learning model [15, 16], researchers have also recently discovered that the input's borders and corners can dominate the model's performance [17, 18, 19]. The study of the importance and effects of image borders remains relatively open, and this paper focuses on a trainable padding method that processes image borders for deep learning models.
Padding refers to the technique of adding extra data to the input's borders so that the input's width, height, or depth can be manipulated. Padding is widely used in Convolutional Neural Networks (CNNs) to alter the output size of a convolutional layer. Without padding, convolutional filters will not process the input's borders and the output size will be reduced. The input size can be maintained with padding; we add an extra border before the convolution so that the original border can be processed [20].
Traditional padding techniques include zero padding, replication padding, and reflection padding. The reflection padding reflects the valid data over the borders; the replication padding uses the borders themselves as padding values; the zero padding specifies the use of zeroes as padding values. The replication and reflection padding methods extend the input with duplicate contents that may not be realistic; hence, they may destroy the original distribution [19]. The zero padding may outweigh the replication and reflection padding methods in terms of speed due to its computational simplicity. The major drawback of the traditional methods is that they are not dynamic: the padding values are always static, are not optimized during model training, and thus cannot be predicted in a way that optimally relates them to the input's borders.
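These three static baselines are one-liners with standard array tooling. The snippet below is illustrative, using numpy rather than any specific deep learning framework, and shows one-pixel versions of each scheme:

```
import numpy as np

x = np.arange(9.0).reshape(3, 3)

zero      = np.pad(x, 1, mode="constant")  # zero padding: zeroes as padding values
replicate = np.pad(x, 1, mode="edge")      # replication: borders reused as padding values
reflect   = np.pad(x, 1, mode="reflect")   # reflection: valid data mirrored over the borders
```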
More recently, padding methods have been studied aiming at a more related and realistic extension of the original input [19, 21]. For example, Liu _et al._[21] proposed a padding method using partial convolution. Nguyen _et al._[19] used a local mean of a sliding window over the input's borders so the local distributions at the borders before and after the padding are consistent. These state-of-the-art padding methods outperformed the traditional padding in several tasks such as image classification, image segmentation, and image style transfer. However, the major disadvantage of the state-of-the-art padding methods is that they are not trainable: the padding contents are still not optimized.
Fig. 1: Five-pixel padding applied to a CIFAR-10 sampled image using three different padding methods: A) the zero padding, B) the local mean interpolation, and C) the proposed _Padding Module_.
In this paper, we propose a trainable _Padding Module_ that can be inserted into selected positions of a deep learning model to learn how to pad its inputs. The _Padding Module_ can be trained simultaneously with a deep learning model, but it is a self-learner: it does not require or influence the model's entire loss function. During the training, the _Padding Module_ internally constructs a ground truth from the input's actual borders and trains a predictor considering the neighboring areas. The trained _Padding Module_ can produce plausible results, as shown in Figure 1. The advantages of our work can be summarized as three-fold:
* The proposed _Padding Module_ introduces a trainable method that automatically pads its inputs.
* The _Padding Module_ extends its input with realistic new data that are related to the original data.
* The _Padding Module_ improves the performance of a deep learning model's downstream task, _e.g._, classification, and outperforms the state-of-the-art competitors.
The remainder of this paper is organized as follows. In Section II, we review the related work that addressed the padding effects on neural network performance and discuss how the current study fills the gap in the related work. Section III discusses our approach for the _Padding Module_, followed by evaluation results in Section IV. Finally, Section V concludes with a discussion of the evaluation and highlights some directions for future work in this area.
## II Literature Review and Related Work
Many studies have tried to improve the performance of CNN models via network architecture [22, 23, 24, 25], different variants of optimization [26, 27, 28], activations [29, 30, 31, 32], regularization methods [33, 34], and so on. However, little attention has been paid to investigating the padding schemes used during the convolution operation. To assist the kernel, _i.e.,_ the feature extractor, in extracting important features during image processing in CNNs, padding layers can be added so that pixels near the corners and borders of the image are visited more times, which can increase accuracy. The previous padding methods are presented as follows: Section II-A presents the performance improvement of neural networks; Section II-B introduces the improvement of space design; and Section II-C describes our contributions.
### _Performance Improvement of Neural Networks_
Several studies have proposed padding methods to improve the performance of the neural networks [17, 19, 21].
Innamorati _et al._[17] addressed the importance of the data at the borders of the input by proposing a convolution layer that dealt with corners and borders separately from the middle part of the image. They specifically designed filters for each corner and border to handle the data at the boundaries, including upper, lower, left, and right borders. The boundary filters used in the convolution were jointly learned with the filter used for the middle part of the image. However, the main issue of this study is that the number of filters used to deal with the boundaries increases linearly with the size of the receptive field.
Also, Nguyen _et al._[19] proposed a padding method that could keep the local spatial distribution of the padded area consistent with the original input. The proposed method used the local means of the input at the borders to produce the padding values; they proposed two different variants of the padding method: mean-interpolation and mean-reflection. Both variants used filters with static values, based on the receptive field, in the convolution operation that is supposed to yield the padding values maintaining the same distributions as the original borders. However, the main issue with this method is that it is not learnable.
Liu _et al._[21] proposed a padding layer that uses a partial convolution that mainly re-weighted the convolution operation based on the ratio of the number of parameters in the filter to the number of valid data in the sliding window. In other words, they dealt with the padded area as hole areas that need to be in-painted, while the data coming from the original image were seen as non-hole areas. The main issue of this study is that the padding process is not learnable.
### _Improvement of Spaces Design_
Also, some studies addressed the importance of the padding and data at the boundaries in the semantic representation learning and converting 360-degree space to 2-dimensional (2-D) space respectively [35, 36, 37].
Cheng _et al._[37] showed the importance of the padding method when they converted the 360-degree video to 2-dimensional space. They converted the video to six faces. Then, they used the reflection padding to connect them to form the 2-D space. The reflection padding naturally connected the faces compared to the zero-padding, which caused discontinuity.
Interesting works were provided by Islam _et al._[35, 36] in which they showed the importance of zero padding along with the data at the borders in encoding the absolute position information in the semantic representation learning. They showed that the zero padding and the boundaries drove the CNN models to encode the spatial information that helped the filters where to look for a specific feature; the spatial information was eventually propagated over the whole image.
### _Our Contributions_
The padding methods and their effects on a CNN model's performance are still open areas for researchers to investigate; hence, it is worth proposing new padding methods that could improve the performance of the CNN models. We propose a novel padding method, _Padding Module_, that could realistically extend the input with related data. It learns how to pad the input by using the input's borders as a ground truth and the neighboring areas of the borders as a predictor. Then, it uses a local loss function such as Mean Squared Error (MSE) and updates the filters using the local differentiation of the loss function with respect to the _Padding Module_'s filters. The following section explains the implementation of the _Padding Module_.
## III The proposed _Padding Module_
This paper presents the _Padding Module_, a learnable padding method that can pad the input with related and realistic padding, as shown in Figure 1. The _Padding Module_ can be used as a substitute for other padding methods in the convolution layer, such as the zero padding, the replication padding, and the reflection padding. This section shows how the padding procedure (Section III-A) and the backpropagation (Section III-B) of the _Padding Module_ work.
### _Padding Procedure_
Algorithms 1 and 2, respectively, give an overview of the forward pass and the back-propagation of the _Padding Module_. The _Padding Module_ first constructs a ground truth and a predictor from the input (shown in steps \(1\) to \(3\) in Algorithm 1 and explained in Sections III-A1 and III-A2). Then, the _Padding Module_ uses the filters being learned to produce the actual padding values using the input's borders as a predictor (shown in steps \(4\) to \(13\) in Algorithm 1 and explained in Section III-A4). Finally, the _Padding Module_ uses the MSE as a loss function to compute the loss value and updates the filters during the model's back-propagation (shown in steps \(1\) to \(2\) in Algorithm 2 and explained in Sections III-A3 and III-B).
The _Padding Module_ can pad the original input with any padding size (\(e.g.,\) one pixel, two pixels, etc.). Indeed, the padding process in the _Padding Module_ is iterative (shown in steps \(4\) to \(13\) in Algorithm 1). Assume the required padding size is three pixels; the padding process will iterate three times as follows: (1) padding the original input with one pixel along all four borders; (2) padding the output of the \(1^{st}\) iteration with one pixel along all four borders; and (3) padding the output of the \(2^{nd}\) iteration with one pixel along all four borders. Here, to explain our method simply, we present a basic case of the padding process, \(e.g.,\) one-pixel padding. Also, the _Padding Module_ is assigned as many filters as the number of channels in the input, as explained in Section III-A3. We therefore explain the padding process considering a single channel; in the case of multiple channels, the same procedure is applied separately to each channel.
#### Iii-A1 Ground Truth \(T\)
The _Padding Module_ structures the ground truth \(T\) by extracting the input's borders and stacking them upon each other vertically to form a four-row matrix. However, to stack the left and right borders vertically in \(T\), they are transposed from column vectors to row vectors. Formally, let \(M_{c}^{r}\) be an original input with \(r\) and \(c\) the number of rows and columns, respectively; henceforth, superscripts and subscripts represent the indexes in the row-wise traversal and the column-wise traversal of the input, respectively. The extracting function \(target\) of \(T\) from the input \(M_{c}^{r}\) is:

\[T=target(M_{c}^{r})=\begin{bmatrix}M_{[:]}^{0}\\ M_{[:]}^{r-1}\\ (M_{0}^{[:]})^{T}\\ (M_{c-1}^{[:]})^{T}\end{bmatrix}. \tag{1}\]
```
Require: \(M_{c}^{r}\), \(size\), where \(r\) and \(c\) are the dimensions of a matrix, and \(size\) is the padding size.
Ensure: \(M_{c^{\prime}}^{r^{\prime}}\), where \(r^{\prime}=r+2\times size\) and \(c^{\prime}=c+2\times size\).
1: \(T\gets target(M_{c}^{r})\) /* as in Eq. 1 */
2: \(N\gets neighbors(M_{c}^{r})\) /* as in Eq. 2 */
3: \(P\gets pad_{z}(pad_{r}(N))\) /* as in Eq. 3 */
4: \(M_{c^{\prime}}^{r^{\prime}}\gets M_{c}^{r}\) /* initial state for \(M_{c^{\prime}}^{r^{\prime}}\) */
5: while \(size\neq 0\) do
6:   \(Nout\gets borders(M_{c^{\prime}}^{r^{\prime}})\) /* as in Eq. 6 */
7:   \(Pout\gets pad_{z}(pad_{r}(Nout))\) /* as in Eq. 3 */
8:   \(O\gets f_{\theta}(Pout)\) /* as in Eq. 7 */
9:   \(M_{c^{\prime}+2}^{r^{\prime}+2}\gets O^{0}\mathbin{//}pad_{z}(M_{c^{\prime}}^{r^{\prime}})\mathbin{//}O^{1}\)
10:  \(M_{c^{\prime}+2}^{r^{\prime}+2}\gets sides((O^{2})^{T},M_{c^{\prime}+2}^{r^{\prime}+2},(O^{3})^{T})\) /* as in Eq. 8 */
11:  \(M_{c^{\prime}}^{r^{\prime}}\gets corners(M_{c^{\prime}+2}^{r^{\prime}+2})\) /* as in Eq. 9 */
12:  \(size\gets size-1\)
13: end while
14: return \(M_{c^{\prime}}^{r^{\prime}}\)
```
**Algorithm 1** Forward Pass
#### Iii-A2 Predictor (\(P\))
To structure the predictor from the original input \(M_{c}^{r}\), the _Padding Module_ extracts the row vectors that neighbor the upper and lower borders of \(M_{c}^{r}\), and the transpose of the column vectors that neighbor the left and right borders of \(M_{c}^{r}\). Then, the _Padding Module_ stacks all the extracted neighbors vertically to form a four-row matrix. Formally, the extracting function of the predictor (denoted \(P\)) from \(M_{c}^{r}\) can be expressed in the following way:
First, the neighbors in \(M_{c}^{r}\) are selected and denoted as \(N\) as follows:
\[N=neighbors(M_{c}^{r})=\begin{bmatrix}M_{[1:c-1]}^{1}\\ M_{[1:c-1]}^{r-2}\\ (M_{1}^{[1:r-1]})^{T}\\ (M_{c-2}^{[1:r-1]})^{T}\end{bmatrix}. \tag{2}\]
The slice \([1:c-1]\) excludes the entries of the row vectors that lie on the left and right borders, since these overlap with \(T\); likewise, the slice \([1:r-1]\) excludes the entries of the column vectors that lie on the upper and lower borders, which also overlap with \(T\).
Then, the _Padding Module_ pads the structure as follows:
\[P=pad_{z}(pad_{r}(N)). \tag{3}\]
First, the \(pad_{r}(.)\) function pads the structure horizontally (on the left and right sides) with one pixel of reflection padding; then, the \(pad_{z}(.)\) function pads it horizontally with one pixel of zero padding, yielding the final structure of \(P\).
Each row in \(P\) will be used to predict the corresponding row in \(T\). For example, the first row in \(P\) will be used to predict the first row in \(T\), representing the upper border of the input \(M_{c}^{r}\). Figure 2 (B) visually illustrates how the _Padding Module_ constructs the stack of neighbors (as a predictor), where the left and right sides of the stack are padded with reflection padding (denoted \(p_{r}\)) and zero padding (denoted \(p_{z}\)).
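To make the constructions of Sections III-A1 and III-A2 concrete, the following is a minimal NumPy sketch of the ground truth and the predictor for a square, single-channel input; the helper names mirror the paper's notation, but the code is our illustration, not the authors' implementation.

```python
import numpy as np

def target(M):
    # Eq. (1): stack the four borders (top row, bottom row,
    # left and right columns transposed to row vectors).
    return np.stack([M[0, :], M[-1, :], M[:, 0], M[:, -1]])

def neighbors(M):
    # Eq. (2): rows/columns adjacent to the borders; the [1:-1]
    # slices drop the entries that already belong to T.
    return np.stack([M[1, 1:-1], M[-2, 1:-1], M[1:-1, 1], M[1:-1, -2]])

def pad_r(N):
    # One pixel of reflection padding on the left and right sides.
    return np.pad(N, ((0, 0), (1, 1)), mode="reflect")

def pad_z(N):
    # One pixel of zero padding on the left and right sides.
    return np.pad(N, ((0, 0), (1, 1)), mode="constant")

M = np.arange(36, dtype=float).reshape(6, 6)  # a toy 6x6 single-channel input
T = target(M)                                 # shape (4, 6), as in Eq. (1)
P = pad_z(pad_r(neighbors(M)))                # shape (4, 8), as in Eqs. (2)-(3)
```

With a \((1,3)\) filter and stride \((1,1)\), each length-8 row of \(P\) yields exactly the 6 predictions needed to match the corresponding row of \(T\).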
#### Iii-A3 Filters and the Loss Function
The _Padding Module_ uses as many filters as there are channels in the input (_i.e._, one filter per channel). Each filter is a row vector of size \((1,3)\) applied with a stride of \((1,1)\), since each row in \(P\) serves as a predictor for the corresponding row in \(T\). To predict \(T\), the _Padding Module_ convolves the filters over \(P\) and uses its own loss function to optimize the prediction through the local differentiation of the loss with respect to the filters.
The loss function used by the _Padding Module_ is the MSE, which computes the squared difference between the ground truth and the predicted value. The following equation is the MSE's mathematical expression for a single data point:
\[MSE(f_{\theta}(P),T)=\sum_{a=1}^{4}\sum_{j=1}^{n}(\theta^{T}\cdot P_{j}^{a}-T _{j}^{a})^{2}, \tag{4}\]
where \(f\) is the convolutional operation parameterized by \(\theta\), \(P\) and \(T\) are the predictor and the ground truth extracted from the original input \(M_{c}^{r}\), \(a\) represents the indexes for rows in the four-row matrices \(P\) and \(T\), and \(j\) represents the indexes for both the slide windows and columns in \(P\) and \(T\) respectively. Hence, \(P_{j}^{a}\) is the \(j\)th slide window in the row indexed at \(a\) in \(P\), and \(T_{j}^{a}\) is the corresponding value in \(T\) indexed at the \(a\)th row and \(j\)th column.
The local differentiation of the _Padding Module_'s loss function and the filters' updates are carried out during the model's back-propagation; these local gradients are not propagated to the previous layer. Besides that, the _Padding Module_ facilitates the back-propagation of the model's loss function through it to the previous layer, as explained in Section III-B. The following is the mathematical expression for the local gradients (the gradients of the _Padding Module_'s loss function with respect to a single filter, for a single data point):
\[\frac{\partial}{\partial\theta_{m}}MSE(f_{\theta}(P),T)=2\sum_{a=1}^{4}\sum_{j=1}^{n}(\theta^{T}\cdot P_{j}^{a}-T_{j}^{a})x_{m}, \tag{5}\]
where \(x_{m}\) is the single feature in the slide window \(P_{j}^{a}\) that is multiplied by the corresponding weight \(\theta_{m}\) of \(\theta\) during the convolution.

Fig. 2: An example to illustrate the steps 1-3 in Algorithm 1. On the left: the input \(M_{c}^{r}\) of size \((6,6)\) pixels; the superscripts are the indexes in the row-wise traversal while the subscripts are the indexes in the column-wise traversal of the input. On the right: (A) the ground truth \(T\): the result of applying step \(1\) in Algorithm 1, a stack of the borders where the first, second, third, and last rows are the upper, lower, left, and right borders of the input, respectively; and (B) the predictor \(P\): the result of applying steps \(2\) and \(3\) in Algorithm 1, a stack of the neighbors where the first, second, third, and last rows are the neighbors of the upper, lower, left, and right borders of the input, respectively, and the stack is padded at the left and right sides with reflection padding (\(p_{r}\)) and zero padding (\(p_{z}\)).
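The local objective and its gradient, Eqs. (4)-(5), can be sketched as follows, reusing \(P\) and \(T\) from the previous sketch; this is our hedged illustration of the update, not the authors' implementation.

```python
import numpy as np

def sliding_windows(P, k=3):
    # All length-k windows of each row of P; result shape (4, n, k),
    # one window per prediction position.
    return np.stack([P[:, j:j + k] for j in range(P.shape[1] - k + 1)], axis=1)

def local_mse_and_grad(theta, P, T):
    W = sliding_windows(P)               # slide windows P_j^a, shape (4, n, 3)
    pred = W @ theta                     # theta^T . P_j^a for every a, j
    err = pred - T                       # residuals against the ground truth
    loss = np.sum(err ** 2)              # Eq. (4): summed squared error
    grad = 2.0 * np.einsum("aj,ajm->m", err, W)  # Eq. (5), one entry per theta_m
    return loss, grad

theta = 0.1 * np.random.randn(3)
loss, grad = local_mse_and_grad(theta, P, T)
theta -= 0.01 * grad                     # local update; not propagated upstream
```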
#### Iii-A4 Padding Process
The procedures in Sections III-A1, III-A2, and III-A3 guide the _Padding Module_ in learning how to predict the borders of the _original_ input \(M^{r}_{c}\) from the areas neighboring the borders, so that the _Padding Module_ can optimize its filters.
The padding process itself is shown in steps \(4\) to \(13\) in Algorithm 1; it uses the borders of the input \(M^{r^{\prime}}_{c^{\prime}}\) as the predictor. In detail, the padding process iterates until the original input is padded with the required padding size. Hence, the original input \(M^{r}_{c}\) is assigned to \(M^{r^{\prime}}_{c^{\prime}}\) as an initial state in step \(4\), before the padding loop starts. Each iteration then pads the input \(M^{r^{\prime}}_{c^{\prime}}\) with one pixel and outputs a new \(M^{r^{\prime}}_{c^{\prime}}\), which is used for the next iteration, and so forth. The dimensions of an iteration's output (\(M^{r^{\prime}}_{c^{\prime}}\) in step \(11\)) are two pixels larger than the dimensions of that iteration's input (\(M^{r^{\prime}}_{c^{\prime}}\) in step \(6\)).
Specifically, the predictor in the padding process is constructed similarly to \(P\) in Section III-A2, with small modifications. To distinguish them from the notions \(neighbors\), \(N\), and \(P\) of Section III-A2, we write \(borders\), \(Nout\), and \(Pout\) for the extracting function, the function's output, and the predictor, respectively. The extracting function \(borders\) is expressed mathematically as:
\[Nout=borders(M^{r^{\prime}}_{c^{\prime}})=\begin{bmatrix}M^{0}_{[:]}\\ M^{r^{\prime}-1}_{[:]}\\ (M^{[:]}_{0})^{T}\\ (M^{[:]}_{c^{\prime}-1})^{T}\end{bmatrix}, \tag{6}\]
where \(M^{0}_{[:]}\) and \(M^{r^{\prime}-1}_{[:]}\) denote extracting the entire upper and lower borders, respectively, while \((M^{[:]}_{0})^{T}\) and \((M^{[:]}_{c^{\prime}-1})^{T}\) denote extracting the transpose of the entire left and right borders, respectively. Then, the _Padding Module_ pads the output \(Nout\) using Equation 3 to get the final structure of \(Pout\).
Consequently, convoluting the filters over the \(Pout\) will produce the padding values for the iteration's input. The output can be expressed as follows:
\[O=f_{\theta}(Pout), \tag{7}\]
where \(f\) is the convolutional operation parameterized by \(\theta\), \(Pout\) is the predictor, and \(O\) is the output, a matrix of four rows. Each row holds the padding values for the corresponding area of the iteration's input \(M^{r^{\prime}}_{c^{\prime}}\): the first row (\(O^{0}\)), the second row (\(O^{1}\)), the third row (\(O^{2}\)), and the fourth row (\(O^{3}\)) represent the padding values for the upper, lower, left, and right areas of the input, respectively.
Steps \(9\) to \(11\) describe how the produced padding values are attached around the input \(M^{r^{\prime}}_{c^{\prime}}\). First, in step \(9\), the vertical concatenation operator \(//\) concatenates the first row (\(O^{0}\)) with \(M^{r^{\prime}}_{c^{\prime}}\), and then concatenates the resulting matrix with the second row (\(O^{1}\)). However, the rows of \(O\) are two pixels wider than the rows of \(M^{r^{\prime}}_{c^{\prime}}\); therefore, to match the dimensions of these operands, the _Padding Module_ uses \(pad_{z}(.)\) to pad \(M^{r^{\prime}}_{c^{\prime}}\) horizontally with one pixel of zero padding before the concatenation. Hence, the dimensions of the output of step \(9\), denoted \(M^{r^{\prime}+2}_{c^{\prime}+2}\), are two pixels larger than those of the input \(M^{r^{\prime}}_{c^{\prime}}\). Finally, the algorithm uses the \(sides\) function, which can be formally expressed as:
\[sides((O^{2})^{T},M^{r^{\prime}+2}_{c^{\prime}+2},(O^{3})^{T}). \tag{8}\]
This function does not change the dimensions; it adds the transpose of the third row (\(O^{2}\)) and of the fourth row (\(O^{3}\)) to the left and right columns of \(M^{r^{\prime}+2}_{c^{\prime}+2}\), respectively. In the concatenated matrix, these columns contain zero values, except at the corners, which were already assigned values by the concatenation. To resolve this double-counting at the corners, the _Padding Module_ averages the corner values by dividing each corner by \(2\); this averaging is the \(corners\) function in step \(11\) of Algorithm 1:
\[M^{r^{\prime}}_{c^{\prime}}=corners(M^{r^{\prime}+2}_{c^{\prime}+2}). \tag{9}\]
Lastly, as mentioned earlier in this section, the dimensions of an iteration's output are two pixels larger than those of the iteration's input. Hence, the output \(M^{r^{\prime}}_{c^{\prime}}\) in Equation 9 has dimensions \(r^{\prime}\) and \(c^{\prime}\) updated to those of \(M^{r^{\prime}+2}_{c^{\prime}+2}\), namely \(r^{\prime}+2\) and \(c^{\prime}+2\), respectively.
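One iteration of the padding loop (steps 4-13 of Algorithm 1) can be sketched as below, continuing the previous sketches (\(pad_r\), \(pad_z\), \(sliding\_windows\), \(M\), and \(theta\) as defined above); the corner averaging resolves the double count of Eq. (9). Again, this is a hedged illustration for a square single-channel input, not the reference code.

```python
import numpy as np

def borders(M):
    # Eq. (6): the four borders of the current input, stacked.
    return np.stack([M[0, :], M[-1, :], M[:, 0], M[:, -1]])

def pad_one_pixel(M, theta):
    Pout = pad_z(pad_r(borders(M)))     # predictor from the borders, Eq. (3)
    O = sliding_windows(Pout) @ theta   # Eq. (7): rows O^0..O^3, two pixels wider than M
    mid = np.pad(M, ((0, 0), (1, 1)))   # zero-pad M horizontally to match widths
    out = np.vstack([O[0], mid, O[1]])  # step 9: O^0 // pad_z(M) // O^1
    out[:, 0] += O[2]                   # step 10, Eq. (8): (O^2)^T on the left column
    out[:, -1] += O[3]                  #                  (O^3)^T on the right column
    for i, j in ((0, 0), (0, -1), (-1, 0), (-1, -1)):
        out[i, j] /= 2.0                # step 11, Eq. (9): average double-counted corners
    return out

padded = pad_one_pixel(M, theta)        # (6, 6) -> (8, 8); iterate for larger sizes
```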
### _Back-propagation_
As seen in Section III-A3, the _Padding Module_ is not optimized with respect to the model's main loss function; the model therefore does not compute the gradients of its loss with respect to the _Padding Module_'s filters. During the model's back-propagation, however, the _Padding Module_ accomplishes two key tasks:
1. As shown in step \(1\) in Algorithm 2, the _Padding Module_ optimizes its filters by computing the local gradients of its loss function with respect to the filters, as explained in Section III-A3.
2. The _Padding Module_ also receives \(G^{r^{\prime}}_{c^{\prime}}\), the gradients of the model's loss function with respect to the _Padding Module_'s output (the original input \(M^{r}_{c}\) after being padded). The _Padding Module_ therefore strips out from \(G^{r^{\prime}}_{c^{\prime}}\) the gradients that correspond to the padded areas of its output; this stripping-out process is step \(3\) in Algorithm 2 (see also the sketch after this list) and is formally expressed as follows: \[G^{r}_{c}=strip(G^{r^{\prime}}_{c^{\prime}}).\] (10) Then, the _Padding Module_ back-propagates \(G^{r}_{c}\) to the previous layer, representing the gradients for the previous layer's output. Figure 3 visually illustrates how the back-propagation process in the _Padding Module_ is carried out.
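The stripping step of Eq. (10) amounts to cropping the padded margin from the incoming gradient; a one-line sketch for a padding of width \(s\) (our illustration):

```python
import numpy as np

def strip(G, s=1):
    # Eq. (10): drop the gradients for the padded border of width s and
    # back-propagate only the interior to the previous layer.
    return G[s:-s, s:-s]

G = np.ones((8, 8))      # stand-in for the received gradients G^{r'}_{c'}
print(strip(G).shape)    # (6, 6): gradients for the previous layer's output
```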
## IV Experimental Results and Analysis
This section presents the design of the training and testing experiments for our _Padding Module_ applied to a downstream task, _i.e._, image classification. The experimental setup is presented in Section IV-A. The quantitative and qualitative results are described in Sections IV-B and IV-C.
### _Experiment Setup_
The study used the premium service of Google Colaboratory, with a Tesla T4 GPU. The experiments and comparisons were conducted on the CIFAR-10 dataset for a classification task [38]. The CIFAR-10 dataset includes a training set of 50,000 images and a test set of 10,000 images. The images have shape \((32,32,3)\) and are distributed equally over ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The _Padding Module_ was applied to two different networks, VGG16 [8] and ResNet50V2 [39]; to let the deeper layers of these networks carry out a valid convolution, the images were resized to \((64,64,3)\) and \((224,224,3)\) for VGG16 and ResNet50V2, respectively.
VGG16 is a vanilla-style architecture whose layers are wide at the beginning of the network and narrow down with depth. The pre-trained VGG16 was obtained from keras1 without the top layers (the last three dense layers, including the original softmax layer). We then added two fully-connected layers, each with 512 neurons and followed by a dropout layer. ResNet50V2, on the other hand, is made up of blocks: each block passes its input through the block itself and also uses a skip connection to add the block's input directly to the output of the flow through the block. This identity function can help deep layers improve the model's accuracy. ResNet50V2 is a modified version of ResNet50 [13]; the modification is mainly in the arrangement of the block layers, with batch normalization [11] and ReLU activation [40] applied to the data flow before the convolutional layer in each block. These changes enable ResNet50V2 to outperform ResNet50 on the image classification task. The ResNet50V2 was downloaded from keras2 without the top layer (the last dense layer, which is the original softmax layer). Then, two fully-connected layers with \(1024\) and \(512\) neurons were added.
Fig. 3: An example to illustrate the back propagation in Algorithm 2. On the top: the input to the _Padding Module_ is of size \((6,6)\), the _Padding Module_ uses one-pixel padding and produces an output of size \((8,8)\) where the borders \(p\) is the computed padding values. On the bottom: the back-propagation of the received gradients which is of size \((8,8)\) where the borders are gradients for the padding values \(g_{p}\); the _Padding Module_ strips out \(g_{p}\) from the received gradients, and sends the remaining to the previous layer. The \(g\) stands for gradient; for example \(g_{M_{0}^{0}}\) is the gradient for the pixel at index \([0,0]\) in the input of the Forward Pass.
Moreover, we added a softmax layer with ten outputs to both VGG16 and ResNet50V2, and used the Adam optimizer [12] for the back-propagation of the gradients. Finally, the _Padding Module_ was used before every convolutional layer in VGG16, whereas in ResNet50V2 we replaced every zero padding layer with the _Padding Module_.
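As an illustration of the setup just described, the VGG16 classification head can be sketched in Keras as follows. The dropout rate, the ReLU activations, and the ImageNet weights are our assumptions (the text does not specify them), and the insertion of the _Padding Module_ before each convolutional layer is omitted, since that is the custom layer of Section III.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(64, 64, 3))
x = tf.keras.layers.Flatten()(base.output)
for _ in range(2):                                   # two 512-neuron dense layers,
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)              # each followed by dropout
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # ten CIFAR-10 classes

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```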
### _Quantitative Results_
Section IV-B1 compares the proposed _Padding Module_ and state-of-art padding solutions by performing the image classification task, and then Section IV-B2 discusses an ablation study based on our solution.
#### Iv-B1 Image Classification Task
We considered the zero padding method as the baseline against which to compare the _Padding Module_. Moreover, we used the mean interpolation padding method [19] as the state of the art, since it outperformed the partial convolution padding method [19, 21] in image classification. The main goal of this study, in line with the literature, is to investigate the effect of padding on the accuracy of DNN models. Therefore, accuracy is used as the comparison metric between the _Padding Module_ and the benchmarks; it is the percentage of correctly classified images over the total number of images in the dataset.
Each model was trained for 100 epochs on the training dataset and tested at each epoch on the test dataset. As Figure 4 shows, the _Padding Module_ outperforms both the baseline and the mean interpolation padding method on VGG16, with the baseline comparable to the mean interpolation method. For ResNet50V2, the _Padding Module_ also outperforms the other two paddings, as shown in Figure 5, and the baseline is again comparable to the mean interpolation method. Table I summarizes the averages of the last five epochs for the three padding methods and the margin between the highest and second-highest accuracies for the two models.
As Table II shows, the running time per epoch roughly doubles when applying the _Padding Module_ compared to zero padding (no _Padding Module_). One remedy to lessen the running-time problem may be to stop training the _Padding Module_ once it has significantly decreased the MSE after the first two epochs. Improving the current _Padding Module_, including its time complexity, is a direction for further research.
#### Iv-B2 Ablation Study
The experiments in this section form an ablation study in which the _Padding Module_ was placed at different positions in the VGG16 model, as shown in Figure 7: at the beginning of the model, in the middle, at the end, and at the combination of all three places together. We compared these four scenarios with two others: (1) the _Padding Module_ placed at all positions (before each convolutional layer); and (2) no _Padding Module_, with zero padding used instead. We ran each scenario for 100 epochs, using the training dataset for training and the test dataset for evaluation, and averaged the test accuracies of the last five epochs of each scenario; Table III summarizes the comparison. We noticed that a single _Padding Module_ in the shallow layers outperformed one in the deep layers, and that the combination scenario outperformed any single _Padding Module_. The best performance, however, was obtained when the _Padding Module_ was applied at all positions. Finally, all scenarios using the _Padding Module_ outperformed the model without it.
### _Qualitative Results_
Different padding sizes, such as one, three, and five pixels, were used to illustrate how the _Padding Module_ extends the input with related and realistic content. We also compared these padding sizes with the other two methods, namely zero padding and mean interpolation padding. As shown in Figure 8, the _Padding Module_ learns to pad the input with related data and a natural extension; this finding becomes more evident as the padding size increases.
## V Future Research Directions and Conclusion
This paper proposed a novel padding method, the _Padding Module_, that learns how to pad an input from the input's borders, so that the input is realistically extended with related data. The _Padding Module_ learns its weights by itself: it constructs a ground truth and a predictor from the inputs, leveraging the underlying structure of the input data for supervision. The _Padding Module_ convolves its filters over the predictor to produce a predicted value, which is compared with the ground truth. A local loss function, independent of the model's main loss function, minimizes the difference between the predicted value and the ground truth; the _Padding Module_ therefore updates its convolutional filters locally during the model's back-propagation. Besides that, the _Padding Module_ back-propagates to the previous layer the model's gradients with respect to the _Padding Module_'s output, after stripping out the gradients of the padded areas.
The experimental results showed that the _Padding Module_ outperformed zero padding and the state-of-the-art padding in the image classification task. In the ablation study, we also observed that a single _Padding Module_ in the shallow layers of VGG16 performed slightly better than one in the deep layers. Using three _Padding Modules_ placed at different positions (beginning, middle, and end) of the VGG16 outperformed any single _Padding Module_, and placing the _Padding Module_ at all positions (before every convolutional layer) outperformed all other scenarios, as shown in Table III.
Our experiments applied the _Padding Module_ to two well-known networks, VGG16 and ResNet50V2, for the image classification task. These networks were chosen to represent small and large networks, respectively, and because they were used in the literature, allowing a direct comparison of the _Padding Module_ with previous work. Although only two networks and a single task were used here, the _Padding Module_ can be extended to improve such networks in different tasks, including object detection, style transfer, and image inpainting. We leave the investigation of the _Padding Module_ in a wider range of tasks for future research.
Also, the _Padding Module_ learns how to pad the input independently of the model's loss function. It is, however, possible to optimize the _Padding Module_'s filters based on the model's main loss function; this approach would be entirely different. Hence, one research direction may be to implement a padding method that optimizes its padding filters based on the model's main loss function.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Zero Padding**} & \multicolumn{2}{c|}{**Padding Module**} & \multirow{2}{*}{**Margin**} \\ \cline{2-2} \cline{4-5} & Accuracy & Time & Accuracy & Time & \\ \hline VGG16 & 91.43 & 2 & 92.92 & 4 & 1.49 \\ \hline ResNet50 & 94.64 & 5 & 95.08 & 9 & 0.44 \\ \hline \end{tabular}
\end{table} TABLE II: On average, the running time doubles for one epoch when applying the _Padding Module_ to the VGG16 and ResNet50V2 compared to the case of the zero padding (no _Padding Module_ applied). Times are shown in a minute-scale. The margin is the accuracy difference between the case of applying the _Padding Module_ and the zero padding.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**No** & **Different places in the VGG16** & **Accuracy** \\ \hline
1 & At the beginning & 91.98 \\
2 & At the middle & 91.8 \\
3 & At the end & 91.97 \\
4 & Combination of 1, 2, and 3 together & 92.18 \\
**5** & **All positions** & **92.8** \\
6 & VGG16 with no _Padding Module_ & 91.4 \\ \hline \end{tabular}
\end{table} TABLE III: Placing the _Padding Module_ at different positions in the VGG16: 1) at the beginning 2) at the middle 3) at the end 4) combination of beginning, middle, end 5) before every convolutional layer 6) VGG16 with no _Padding Module_ (zero padding instead). |
2310.03632 | The exact evaluation of hexagonal spin-networks and topological quantum
neural networks | The physical scalar product between spin-networks has been shown to be a
fundamental tool in the theory of topological quantum neural networks (TQNN),
which are quantum neural networks previously introduced by the authors in the
context of quantum machine learning. However, the effective evaluation of the
scalar product remains a bottleneck for the applicability of the theory. We
introduce an algorithm for the evaluation of the physical scalar product
defined by Noui and Perez between spin-network with hexagonal shape. By means
of recoupling theory and the properties of the Haar integration we obtain an
efficient algorithm, and provide several proofs regarding the main steps. We
investigate the behavior of the TQNN evaluations on certain classes of
spin-networks with the classical and quantum recoupling. All results can be
independently reproduced through the "idea.deploy"
framework~\href{https://github.com/lullimat/idea.deploy}{\nolinkurl{https://github.com/lullimat/idea.deploy}} | Matteo Lulli, Antonino Marciano, Emanuele Zappala | 2023-10-05T16:06:21Z | http://arxiv.org/abs/2310.03632v2 | # Exact Evaluation of Hexagonal Spin-networks and Topological Quantum Neural Networks
###### Abstract
The physical scalar product between spin-networks has been shown to be a fundamental tool in the theory of topological quantum neural networks (TQNN), which are quantum neural networks previously introduced by the authors in the context of quantum machine learning. However, the effective evaluation of the scalar product remains a bottleneck for the applicability of the theory. We introduce an algorithm for the evaluation of the physical scalar product defined by Noui and Perez between spin-network with hexagonal shape. By means of recoupling theory and the properties of the Haar integration we obtain an efficient algorithm, and provide several proofs regarding the main steps. We investigate the behavior of the TQNN evaluations on certain classes of spin-networks with the classical and quantum recoupling. All results can be independently reproduced through the "idea.deploy" framework [https://github.com/lulliat/idea.deploy](https://github.com/lulliat/idea.deploy)
## I Introduction
The high computational demand from every sector of contemporary science, including particle physics and condensed matter, has propelled the investment in new approaches. These have arguably become the holy grail of scientific computation, e.g. quantum computing. In turn, quantum computational approaches leave the unanswered question of how to process the data in quantum machines such as quantum computers.
Important recent developments in deriving novel and efficient algorithms in quantum machine learning have been rooted in the theoretical foundations of either quantum mechanics [1; 2; 3] or its extension to continuous systems, quantum field theory [4; 5; 6; 7; 8]. These attempts answer the need for quantum algorithms for quantum computing, and are the reason to propose quantum neural networks (QNN) -- see e.g. [3] -- and their extensions in the continuum [4; 7; 8].
A prototype for Universal Quantum Computation is provided by the Reshetikhin-Turaev model [9], as proved by Freedman-Kitaev-Wang [10; 11]. More recently, topological quantum neural networks (TQNN), based on the TQFTs such as the Turaev-Viro model [12] and its physically motivated generalizations, have been proposed as a candidate to provide quantum algorithms in quantum computing. The advantage of TQNNs lies in the fact that they share a common ground with material science, and in particular with the string-net models of Levin-Wen [13; 14].
This thread of thought motivates our belief that a successful translation of the Freedman-Kitaev-Wang approach into our TQFT methods (known to be possible at the mathematical level [15; 16; 17; 18; 19], and at the base of the TQNNs introduced in [4; 7; 8]) will result in a Universal Quantum Computing that is implementable in practice in material science.
This is achieved through the equivalent language of string-nets [13; 14], providing an alternative to topological quantum computing with anyons. The tight mathematical connection relating the Reshetikhin-Turaev and Turaev-Viro models [15; 16; 17; 18; 19] (one is known to be the "square root" of the other) allows us to use our methods, based on the latter, to recast the former in terms of string-nets, for a concrete material-science implementation through the equivalence between spin-nets and the Turaev-Viro model [20; 21; 22; 23], rather than the traditional anyonic-based language.
TQNNs are represented as spin-network states supported on graphs [4; 7; 8]. These are one-complexes defined as the dual simplicial complexes to the boundaries of a manifold. Spin-networks then represent boundary states (input/output data). The intrinsic quantumness of TQNNs stands in the fact that the dynamical evolution of these boundary states is attained through the sum over an infinite amount of intermediate virtual states (filters/hidden layers). This is the key element in the derivation of novel (quantum) algorithms. The latter are in principle characterized by higher accuracy and shorter computational time than traditional deep neural network (DNN) algorithms, and are thus better adapted to machine implementations.
Within this framework, it becomes urgent to obtain the exact evaluation of spin-networks. This is a problem that, in principle, requires exponential time. In fact, the recoupling theory defined by
Kauffman and Lins [24] defines a partition function from spin-networks by summing over all possible combinations of admissible colorings, and is based on the (factorial) unraveling of the Jones-Wenzl projector [24; 25].
Recoupling theory was originally introduced to define topological invariants of 3-manifolds. In fact, one could show that the aforementioned partition function defined on spin-networks dual to the cells of a (regular enough) simplicial decomposition of a 3-manifold is invariant under Matveev-Piergallini moves [26; 27], ensuring that the numerical value of the partition function is unchanged when considering homeomorphic topological spaces.
The theory has become widely applied in quantum gravity, where it has played a central role in the formulation by Perez and Noui [28] of the physical inner product for Euclidean quantum gravity in 3 dimensions, achieved via the regularization of the projector that imposes the curvature constraint of \(SU(2)\) symmetric \(BF\) theory at the quantum level. More recently, the implementation of a projector similar to the one studied by Perez and Noui, applied to a still topological extended \(BF\) theory provided with a cosmological constant, has been derived in [29]. There, it has been shown that the imposition of the curvature constraint with cosmological constant naturally causes the recoupling theory of a quantum group to emerge from the initial \(SU(2)\) symmetry structure. This has finally allowed the introduction of the recoupling theory of quantum groups in 3-dimensional quantum gravity in a constructive way, explaining the emergence of the recoupling theory of \(SU_{q}(2)\) from that of \(SU(2)\).
The recoupling theories of \(SU(2)\) and \(SU_{q}(2)\) are crucial for the applications into quantum machine learning that were explored in [4; 7; 8]. As we anticipated, the notion of TQNNs is formulated by means of a TQFT, and is in practice evaluated via recoupling. Although in [4; 7; 8] concrete examples were provided only accounting for the recoupling theory of \(SU(2)\), a natural extension to quantum groups, and in particular to the recoupling theory of \(SU_{q}(2)\), can be envisaged following the constructive arguments deployed in [29].
Nonetheless, the main bottleneck for the concrete applicability of the results in [4; 7; 8] remains the ability to evaluate the Perez-Noui projector efficiently. As a subcase, this also includes the problem of evaluating spin-networks in general form, a notoriously complicated problem previously considered in the seminal articles [30; 31], where theoretical and computational results regarding certain specific cases were treated in detail. In this article we focus on the evaluation of spin-networks of hexagonal shape and arbitrary size, and relate these objects to the pixel space of images in order to apply TQNNs. We use these results to obtain an algorithm for the evaluation of the Perez-Noui projector on \(SU(2)\)[28], and its generalization to \(SU_{q}(2)\)[29].
The plan of the paper is the following. In Sec. II we delve into the correspondence between the pixel space of images and hexagonal spin-networks. In Sec. III we consider spin-networks obtained by juxtaposition of hexagonal cells. In Sec. IV we provide the algorithm for the evaluation of the spin-network. In Sec. V we compute the transition amplitudes between two different hexagonal spin-networks. In Sec. VI we show some numerical results for the transition probability between two different hexagonal spin-networks. In Sec. VII we comment on the relation with the Ising model. Finally, in Sec. VIII we provide outlooks for future investigations and preliminary conclusions.
## II From pixel space to hexagonal spin-networks
Our starting point is a correspondence between the pixel space of images and hexagonal spin-networks. This also motivates our interest in evaluating hexagonal spin-networks, as they are seen to correspond to images, therefore constituting our key to translating data sets into inputs of TQNNs.
We start our discussion by first considering a very natural approach that rapidly incurs an unwanted computational overhead. We consider an \(n\times n\) grid where each square indicates a pixel. Each pixel is endowed with a label between 0 and \(m\) indicating the intensity of the black color. It is clear that in this way we can represent a black-and-white image of \(n\times n\) resolution. To such an image, we can associate a spin-network proceeding as follows. Let \(P_{k}\) denote the \(k^{\text{th}}\) pixel of the grid in the lexicographical order. We introduce the barycenter coordinate of each pixel (square in the grid), and consider the von Neumann neighborhood \(\mathcal{N}_{k}\) of \(P_{k}\), which is given by \(\mathcal{N}_{k}=\{P_{k-1},P_{k+1},P_{k-n},P_{k+n}\}\), with the assumption that one or two of the pixels in \(\mathcal{N}_{k}\) are omitted for pixels \(P_{k}\) along the edges or at the corners, respectively. We observe that we do not use periodic boundaries here, so that our resulting spin-networks do not lie on the torus, but in the plane. The centers of \(P_{k}\), which we denote by \(C_{k}\), will be the vertices of the spin-networks, and each \(C_{k}\) is connected to all the vertices corresponding to pixels belonging to its von Neumann neighborhood. The colors of the spin-networks are attributed by labeling the edges between the vertices based on the difference of the pixel values at the vertices \(C_{k}\) and \(C_{l}\) that they connect. This approach was followed for instance in [4].
However, while working in the semi-classical limit does not incur any problems (see e.g. [4]), when we try to evaluate the spin-networks obtained through this procedure we find that each vertex needs to be desingularized as shown in Figure 1, in order to obtain two trivalent vertices from each 4-valent vertex. Each desingularization introduces a summation over the admissible colors, and this negatively affects the computational cost of a TQNN algorithm based on spin-networks with such grid supports. A sketch of the grid construction is given below.
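The following minimal sketch (hypothetical helper, not part of the idea.deploy code) builds the grid-supported network just described: vertices are pixel centers, edges connect von Neumann neighbors, and each edge carries the difference of the pixel values it joins (taking the absolute difference as one natural choice).

```python
import numpy as np

def grid_spin_network(img):
    """Vertices are the pixel centers C_k; edges join von Neumann neighbors.

    img: (n, n) integer array of pixel intensities in {0, ..., m}.
    Returns {((r, c), (r', c')): color} with color = |pixel difference|.
    """
    n = img.shape[0]
    edges = {}
    for r in range(n):
        for c in range(n):
            # Right and down neighbors enumerate each neighboring pair once;
            # no periodic boundary, so the network lies in the plane.
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < n and cc < n:
                    edges[((r, c), (rr, cc))] = abs(int(img[r, c]) - int(img[rr, cc]))
    return edges

print(grid_spin_network(np.array([[0, 2], [1, 3]])))
```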
Instead, we proceed by considering a honeycomb lattice structure as in Figure 2. It is clear that one can find a
one-to-one correspondence between hexagons in the lattice in the figure and a \(2\times 2\) (pixel) image. For the \(n\times n\) pixel space one proceeds analogously. This process allows us to associate to a figure with \(n\times n\) pixel resolution a hexagonal lattice, which we will also call \(n\times n\). Using a scheme similar to the one described above, we associate to each pixel, in black and white or RGB colors, a numerical value between \(0\) and some upper bound \(N\) depending on the coloring scale. The perimeter of each hexagon is then given the "color" \(r\in[0,N]\) determined by the pixel color. On edges shared among hexagons, the colors are summed: if the edge \(e\) is shared between hexagons \(h_{i}\) and \(h_{j}\) with respective colors \(r_{i}\) and \(r_{j}\), then \(e\) takes the color \(r_{i}+r_{j}\). To each edge we now associate two projectors (which compose to a single projector, since projectors are idempotent), with the implicit assumption that each edge is labeled by the number of strands derived by summing pixel colors. Using the definition of spin-network as in [24], we can rewrite the whole hexagonal lattice as a spin-network, as in Figure 3, where the \(2\times 2\) case is depicted.
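The shared-edge rule can likewise be sketched in a few lines; the hexagon identifiers and the adjacency list below are hypothetical, standing in for the \(2\times 2\) block of Figure 3.

```python
def shared_edge_colors(pixel_colors, shared_pairs):
    # pixel_colors: {hexagon: r}, the perimeter color of each hexagon;
    # a shared edge between h_i and h_j takes the color r_i + r_j.
    return {frozenset(pair): pixel_colors[pair[0]] + pixel_colors[pair[1]]
            for pair in shared_pairs}

colors = {"a": 1, "b": 2, "c": 0, "d": 3}                     # toy pixel colors
adjacency = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]  # assumed 2x2 layout
print(shared_edge_colors(colors, adjacency))
```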
## III Honeycomb spin-networks and their evaluation
We consider spin-networks that are obtained by juxtaposition of hexagonal cells, where each vertex is trivalent, as depicted in Figure 2, where a four-cell honeycomb is shown. In other words, we consider a honeycomb lattice whose vertices are intertwiners, and whose edges are bundles (i.e. tensor products) of \(\mathfrak{su}_{2}(\mathbb{C})\) fundamental representations symmetrized by the _Jones-Wenzl idempotent_, which we will also call _symmetrizer_. We denote by the symbol \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\) the square honeycomb lattice whose side is of size \(n\) and whose edges are labelled by spin-colors \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\), following a precise scheme that will be described later in the article. Here \(\bar{a}\) etc. indicate vectors of spin colors associated to the edges of the spin-networks. When the spin colors do not play a role in the discussion, or if there is no risk of confusion, we will omit the labels and content ourselves with simply writing \(\mathcal{H}_{n}\). In Figure 2, for example, a square honeycomb lattice of side \(n=2\) is represented.
Figure 1: Example of spin-network with grid support where desingularization is performed at each 4-valent vertex. Here, the spin colors \(j\) shown in the zoomed in part of the figure, run over all the compatible colors with respect to the incoming edges.
Figure 3: Honeycomb lattice with size two side corresponding to a \(2\times 2\) pixel figure: the values \(a,b,c,d\) at the center of the hexagons represent the colors corresponding to the pixel color, and the projectors are labeled by the number of strands obtained by summing the pixel colors
Figure 2: Honeycomb lattice with size two side, where corners and vertices are intertwiners.
The labels are not assumed to constitute admissible triples a priori, and we set to zero the evaluation of a honeycomb spin-network whose labels contain a non-admissible triple at some vertex. In this article we allow, albeit rather improperly, spin-networks with open ends, i.e. supported on graphs that have edges with one endpoint not connected to any vertex. Considering these types of spin-networks simplifies certain inductive procedures in the constructions, as we shall see in the next results. They will be referred to as _open-end_ or _open-edge_ spin-networks in the rest of this article.
Along with the spin-networks \(\mathcal{H}_{n}\), we also define the open-end spin-networks \(\mathcal{O}_{n}\) as follows. For each \(n\), \(\mathcal{O}_{n}\) is defined as a single hexagonal cell, where we attach three open spin-network edges, symmetric with respect to the hexagonal cell. The central edge is a single edge, while the two lateral edges are assumed to consist of \(2n-1\) connected edges according to the geometry depicted in Figure 4, where there are \(n-1\) vertical edges and \(n\) horizontal ones.
The open-end spin-network \(\mathcal{O}_{n}\) is depicted in Figure 5.
Let \(\mathcal{N}\) denote a spin-network, and let \(\mathcal{L}\) denote an open-end spin-network, with legs labeled \(a_{1},\cdots,a_{r}\), for some \(r\in\mathbb{N}\). Let \(\bar{v}=(v_{1},\ldots,v_{r})\) denote a list of vertices of \(\mathcal{N}\). Then, we can define the composition, written \(\mathcal{N}\circ_{\bar{v}}\mathcal{L}\), where each edge \(a_{i}\) of \(\mathcal{L}\) is joined with the vertex \(v_{i}\) of \(\mathcal{N}\). If the edges are colored by spin colors, then we set to zero the composition of networks where the colors are not admissible, while we denote the admissible composition by the same symbol as above. Then we have the following result. It holds that
\[\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}, \tag{1}\]
for every \(n\in\mathbb{N}\), and for some choice of vertices \(\bar{v}\) in \(\mathcal{H}_{n}\) (see Lemma A.1).
For a spin-network composition as above (and in the statement of Lemma A.1), we say that the spin-network components \(\mathcal{H}_{n}\) and \(\mathcal{O}_{n}\)_inherit_ the labels from the larger spin-network \(\mathcal{H}_{n+1}\) if the spin colors of the components coincide with the respective ones in \(\mathcal{H}_{n+1}\). When the vertices used for the composition are clearly understood, and there is no need to remark how the composition is being performed, we simply write the symbol \(\circ\) without indicating the vector \(\bar{v}\) of vertex indices.
We now define the following type of spin-networks, denoted by \(\mathcal{BO}_{n}\), and obtained from the graph supporting \(\mathcal{O}_{n}\) by replacing each lateral vertex by a _bubble graph_ depicted in Figure 6, as well as deleting the lower half of the hexagonal edge, and connecting the first two lateral vertical edges. The graph \(\mathcal{BO}_{n}\) is represented in Figure 7. Lastly, let \(\mathcal{HH}_{n}\) denote the spin-network obtained from \(\mathcal{H}_{n}\) by deleting the hexagons along the upper perimeter. For \(\mathcal{H}_{2}\), for example, this means that one deletes the top hexagon, while for \(\mathcal{H}_{3}\) one deletes the top 3 hexagons and so on. For \(n=1\) we set \(\mathcal{HH}_{1}\) to consist of a single edge corresponding to the lower perimeter of the hexagon \(\mathcal{H}_{1}\).
We now set a useful convention on the spin colors labeling edges of the spin-networks \(\mathcal{H}_{n}\), proceeding inductively on \(n\). We start by setting the labels of the hexagon \(\mathcal{H}_{1}\) as in Figure 8.
Then, in the decomposition \(\mathcal{H}_{n}=\mathcal{H}_{n-1}\circ_{\bar{v}}\mathcal{O}_{n-1}\), where \(n\geq 2\), we number the edges of \(\mathcal{H}_{n}\) identified with the vertical open edges of \(\mathcal{O}_{n-1}\) as follows. The central edge is numbered \(0\); the left branch of \(\mathcal{O}_{n-1}\) is numbered in increasing order from center to left with odd numbers, while the right branch is numbered in the same way, but with even numbers. At each configuration as in Figure 9, we indicate the five spin colors involved as \(a^{\bullet}_{k},b^{\bullet}_{k},c^{\bullet}_{k},d^{\bullet}_{k},e^{\bullet}_{k}\), and denote the corresponding spin-network by \(S^{\bullet}_{k}\), where \(\bullet\) is a placeholder for an arbitrary index. Here, the subscript indicates the level at which the spin-network portion appears: level \(k\) indicates that it is part of the spin-network \(\mathcal{H}_{k+1}\), but does not lie in the copy of \(\mathcal{H}_{k}\) inside \(\mathcal{H}_{k+1}\) according to Lemma A.1. We will also use another index, appearing as a superscript, to indicate the position of the spin-network portion within a level. The convention is the following. For levels where an odd number of \(e_{k}\)'s appears, we denote the central \(e_{k}\) as \(e^{0}_{k}\), while those \(e_{k}\)'s that lie on the left are labeled \(e^{-i}_{k}\), and those on the right \(e^{i}_{k}\), in a symmetric fashion, with increasing value of \(i\) as the \(e_{k}\)'s lie farther from the center. For levels with an even number of \(e_{k}\)'s, we omit the central \(e^{0}_{k}\) and follow the same scheme. Observe that for each \(k\) some of the edges of spin-networks \(S^{\bullet}_{k}\) of different levels are connected, and the corresponding labels are therefore identified. In this case, we follow the convention that if \(S^{\bullet}_{k}\) and \(S^{\bullet}_{k-1}\) meet, the connecting edge takes the label of \(S^{\bullet}_{k-1}\), while if \(S^{\bullet}_{k}\) meets another \(S^{\bullet}_{k}\), the labels reported are those of lower order with respect to the natural lexicographical order \(a<b<c<d<e\). We observe that, following the previous conventions, the labels \(a\) and \(b\) do not appear in the spin-network \(\mathcal{H}_{n}\) except in the bottom arc, where they are labeled with subscript \(-1\). Along the edges of \(\mathcal{H}_{n}\) there appear arcs connecting at binary vertices. These edges merge, according to the rules of spin-networks at binary vertices. The labels that we report in these cases are dictated by the following ordering: for positive superscripts (i.e. on the right side of the perimeter) we have the order \(d<c<e\), while for negative superscripts we have \(c<d<e\). On the meeting edges, we then relabel the merged edges according to the smallest element. On the central cells at the top and bottom of the spin-networks, we follow the convention that the largest spin-color label is preserved. The orderings in these cases are the natural ones. Note that the
Figure 4: Lateral open-end spin-networks of \(\mathcal{O}_{n}\)
only spin-colors that appear on the (lateral) perimeter are given by the letters \(c,d,e\), while the central perimeter cells are just two (bottom and top), so that the rules given above exhaust all the cases.
Now, let us define the following quantities. For spin colors \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) following the convention above, we define
\[\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})=\begin{cases}d_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor+1}&e_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor}&i_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor}\\ c_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor-1}&d_{\lfloor\frac{n}{2}\rfloor-1}^{-\lfloor\frac{n}{2}\rfloor+1}&c_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor}\end{cases}\times\prod_{\lfloor\frac{n+2}{2}\rfloor-1<k\leq 2\lfloor\frac{n}{2}\rfloor}\begin{cases}c_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&e_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&i_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}\\ c_{k+1}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&d_{k}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+1}{2}\rfloor}&e_{k+1}^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{k+2}{2}\rfloor}\end{cases}.\]
Moreover, we define the "formal involution" \(\iota\), which is applied to a symbol as given above and acts as follows: \(\iota\) exchanges the colors \(d\) and \(c\), reverses the signs of the superscripts, and leaves the subscripts unchanged. Applying recoupling ([24]), one obtains an equality that holds for all choices of compatible spin colors \(a,b,c,d,e,f\); it is stated and proved in Lemma A.2 below, and will be referred to as the "bubble move", for simplicity. Now, we want to show how to decompose the \(\mathcal{H}_{n+1}\) spin-network in terms of lower-degree spin-networks of type \(\mathcal{HH}_{n}\) and \(\mathcal{BO}_{n}\). For this purpose, we decompose \(\mathcal{H}_{n+1}\) into a linear combination of \(\mathcal{HH}_{n}\) and \(\mathcal{BO}_{n}\) as
\[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\Delta_{c_{n+1}^{0}}\theta(d_{n+1}^{0},c_{n+1}^{0},c_{n+1}^{0})\frac{\theta(i_{n+1}^{0},c_{n+1}^{-1},c_{n}^{-1})}{\Delta_{c_{n+1}^{0}}}\frac{\theta(i_{n+1}^{0},d_{n+1}^{1},d_{n}^{1})}{\Delta_{i_{n+1}^{0}}}\]
\[\times\frac{\theta(i_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor},c_{\lfloor\frac{n}{2}\rfloor+1}^{-\lfloor\frac{n}{2}\rfloor-1},c_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor+1})}{\Delta_{i_{\lfloor\frac{n}{2}\rfloor}^{-\lfloor\frac{n}{2}\rfloor}}}\frac{\theta(i_{\lfloor\frac{n}{2}\rfloor}^{\lfloor\frac{n}{2}\rfloor},d_{\lfloor\frac{n}{2}\rfloor+1}^{\lfloor\frac{n}{2}\rfloor+1},d_{\lfloor\frac{n}{2}\rfloor}^{\lfloor\frac{n}{2}\rfloor-1})}{\Delta_{i_{\lfloor\frac{n}{2}\rfloor}^{\lfloor\frac{n}{2}\rfloor}}}\begin{cases}d_{n+1}^{0}&c_{n+1}^{-1}&e_{n+1}^{0}\\ d_{n+1}^{0}&c_{n+1}^{0}&e_{n+2}^{0}\end{cases} \tag{2}\]
\[\times\mathcal{HH}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\circ_{\bar{v}}\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\sum_{\bar{i}}\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})\,\iota(\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})),\]
where \(\Psi\) and \(\iota\) have been defined above. This result is stated and proved in Lemma A.3.
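The scalar building blocks entering these coefficients are the quantum integers and the loop values (quantum dimensions) \(\Delta_{n}\) of the Kauffman-Lins calculus [24]. A minimal sketch follows; the formulas are recalled from [24] to the best of our understanding, and the deformation parameter \(A\) and the classical-limit guard are our own illustrative choices.

```python
import cmath

def qint(n, A):
    # Quantum integer [n] = (A^(2n) - A^(-2n)) / (A^2 - A^(-2));
    # for A -> +/-1 this reduces to the classical integer n (SU(2) case).
    den = A**2 - A**(-2)
    if abs(den) < 1e-12:
        return complex(n)
    return (A**(2 * n) - A**(-2 * n)) / den

def qdim(n, A):
    # Loop value of a strand of color n: Delta_n = (-1)^n [n + 1].
    return (-1)**n * qint(n + 1, A)

A = cmath.exp(1j * cmath.pi / 10)           # a root of unity, hypothetical choice
print([qdim(n, A).real for n in range(4)])  # Delta_0 = 1, Delta_1 = -[2], ...
```

The \(\theta\) and \(6j\) symbols appearing throughout are built from the same quantum integers via the quantum-factorial formulas of [24].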
The coefficients \(\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{i})\) will also be written as \(\Psi_{\bar{i}}\) for simplicity, when it is clear what spin colors are being considered. Let \(\mathcal{BO}_{n}\) denote an \(n\)-bubble spin-network as in Figure 7. Here we assume that the spin colors of
Figure 6: Bubble graph
\(\mathcal{BO}_{n}\) are those inherited from Equation 2. We can now apply Lemma A.2 to each of the bubbles of \(\mathcal{BO}_{n}\). This gives us \(\mathcal{BO}_{n}\) as a sum over admissible colors of the spin-networks \(\mathcal{O}_{n}\). The evaluation of \(\mathcal{BO}_{n}\) is obtained through the formula
\[\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f})=\prod_{n-1\leq k\leq 2n-5}\begin{cases}c_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1}&p_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1}&d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}\\ p_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n}\end{cases} \tag{3}\]
\[\times\frac{\theta(c_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1},c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1},d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2})}{\Delta_{d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}}\times\begin{cases}d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&p_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&c_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2}\\ p_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1}&e_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1}&d_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n}\end{cases}\]
\[\times\frac{\theta(d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1},e_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1},d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2})}{\Delta_{d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2}}}\times\mathcal{O}_{n-1}.\]
The proof of this fact can be found in Lemma A.4. Observe that the formula holds for \(n\geq 4\), since this step does not appear in the cases \(n=2,3\), as a direct inspection reveals. Observe also that, properly speaking, the coefficients \(p_{k+1}\) corresponding to \(k=2n-5\) in the product above are identified with other \(p\) coefficients through Schur's Lemma (i.e. a Kronecker delta) applied when obtaining Equation (2).
For simplicity of notation, we set \(\Phi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f})\) to be the coefficient appearing in the RHS of Lemma A.4. If summation is to be taken over some of the indices, which we denote by \(\bar{i}\), then we indicate these indices explicitly as \(\Phi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f}\mid\bar{i})\). For short, in this situation, we also write \(\Phi_{\bar{i}}\) when the labels are understood. We have
\[\mathcal{H}\mathcal{H}_{n+1}\circ_{\bar{v}}\mathcal{O}_{n}=\mathcal{H}_{n+1}, \tag{4}\]
where \(\bar{v}\) is the set of vertices as in Lemma A.3.
To obtain the general evaluation of the spin network \(\mathcal{H}_{n}\) for arbitrary \(n\), we now proceed inductively by decomposing \(\mathcal{H}_{n}\) into the composition of \(\mathcal{H}_{n-1}\) and a term \(\mathcal{O}_{n}\) whose evaluation can be obtained applying recoupling theory. Throughout, the labels \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) indicating the colorings assigned to the spin-network will follow the scheme described above.
We are now in the position to relate the evaluation of the honeycomb spin-network \(\mathcal{H}_{n+1}\) to the evaluation of \(\mathcal{H}_{n}\), for any given configuration of the spin colors. First, we absorb all the coefficients \(\Psi_{\bar{i}}\) and the extra factors coming from Lemma A.3 and Lemma A.4 to get the new coefficients \(\hat{\Psi}_{\bar{i}}\) and \(\iota\hat{\Psi}_{\bar{i}}\). Observe, in fact, that apart from some pre-factors appearing in Lemma A.3, all the coefficients are symmetric with respect to the involution \(\iota\). We therefore use the symmetry to define the terms \(\hat{\Psi}_{\bar{i}}\), assigning a square-root factor to the terms that are fixed by \(\iota\). This preserves the symmetry between \(\hat{\Psi}_{\bar{i}}\) and \(\iota\hat{\Psi}_{\bar{i}}\). We have
\[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})= \sum_{\bar{i}}\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\ |\ \bar{i})\iota\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\ |\bar{i})\] \[\times\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}), \tag{5}\]
where \(\mathcal{H}_{n}\) inherits the spin colors of \(\mathcal{H}_{n+1}\). This important result is stated and proved in Theorem A.6.
A fundamental computational/algorithmic issue that arises in the evaluation of \(\mathcal{H}_{n}\) following Theorem A.6 regards the inductive determination of the new labels \(\bar{a}^{\prime},\bar{b}^{\prime},\bar{c}^{\prime},\bar{d}^{\prime},\bar{e}^ {\prime}\). In fact, observe that while the labels in
Figure 8: Labeling of the edges of \(\mathcal{H}_{1}\)
the bulk of the spin-network \(\mathcal{H}_{n-1}\) obtained from the "higher degree" \(\mathcal{H}_{n}\) remain the same in the inductive process outlined in the proof of Theorem A.6, the same does not hold true for all the labels in the upper perimeter. In fact, as a consequence of the proof, there are \(2n-3\) labels that we are going to sum over after applying recoupling an appropriate number of times. For instance, in the evaluation of \(\mathcal{H}_{2}\) we sum on a single \(i\), while in \(\mathcal{H}_{3}\) we sum over 3, and so on. The colorings we sum upon are then taken into account in the colorings of \(\mathcal{H}_{n-1}\); to concretely evaluate \(\mathcal{H}_{n}\) (see appendix) one needs to iteratively take these colorings into account and devise a scheme for the substitution. As the edges where we sum the spin colors all lie in the upper semi-perimeter of \(\mathcal{H}_{n-1}\) (along \(\mathcal{O}_{n}\) in the decomposition of \(\mathcal{H}_{n}\)) following the proof of Theorem A.6, this is not difficult to perform iteratively.
We find that the number of summation operations needed to evaluate \(\mathcal{H}_{n}\) grows quadratically with \(n\). More specifically, if \(a_{n}\) denotes the number of summations at \(n\), we have \(a_{n}=a_{n-1}+2n-5\). This is a consequence of Equation 5 (i.e. Theorem A.6) and it is proved in Corollary A.7 below.
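The recursion is immediate to check numerically; a small sketch (taking the base value \(a_{2}=1\), our assumption suggested by the single summation appearing in the evaluation of \(\mathcal{H}_{2}\)) confirms the quadratic growth:

```python
def num_summations(n, a2=1):
    # a_n = a_{n-1} + 2n - 5 for n >= 3; a2 is the assumed base value.
    a = a2
    for k in range(3, n + 1):
        a += 2 * k - 5
    return a

# The increments 1, 3, 5, ... are consecutive odd numbers, so the
# recursion closes to a_n = a2 + (n - 2)^2, quadratic in n.
assert all(num_summations(n) == 1 + (n - 2) ** 2 for n in range(2, 100))
```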
Another consequence of Equation 5 is that the evaluation of \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\), with \(n\geq 2\), is given by the formula
\[\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\sum_{\bar{i}}\prod_{k=2}^{n}\Psi_{\bar{i}_{k}}\Phi_{\bar{i}^{\prime}_{k}}\,\theta(c_{1}^{2},e_{0}^{2},b_{0}^{2}),\]
where \(\Psi_{\bar{i}_{k}}\) and \(\Phi_{\bar{i}^{\prime}_{k}}\) have been provided above and the index \(k\) refers to the superscript of the indices of the spin colors \(a,b,c,d,e\). This is shown in Corollary A.8.
The simplicity of the formula given in Corollary A.8 is not fully representative of its intrinsic complexity. In fact, the main issue in computing the evaluation of the honeycomb network for an arbitrary \(n\) is that the indices appearing in the summation symbol, which refer to the quantum \(6j\) symbols in the \(\Psi\) and \(\Phi\) coefficients, are not explicitly given and need to be handled carefully. At each step, the spin colors at level \(n-1\) contain summation indices from the previous step.
## IV Evaluation of \(\mathcal{H}_{n}\)
In this section we give the algorithm for the evaluation of the spin-network \(\mathcal{H}_{n}\) using the steps described in the previous sections. Before giving the general procedure, we consider an example in detail: we compute the evaluation of \(\mathcal{H}_{3}\) for arbitrary colors \(a_{k}^{i},b_{k}^{i},c_{k}^{i},d_{k}^{i},e_{k}^{i}\). The honeycomb \(\mathcal{H}_{3}\) is the first of the \(\mathcal{H}_{n}\) where the various steps of the algorithm are nontrivial; it therefore illustrates the procedure, with a complexity small enough to still be performed by hand. The spin-network \(\mathcal{H}_{3}\) with the labeling described above is shown in Figure 10. Observe that some of the labels are merged into a single spin color. This is due to the fact that at a binary vertex, different colors would imply that the spin-network is trivial, and therefore it is meaningful to consider only the case where all the perimeter labels are grouped so that at binary vertices the incoming edges have the same spin color. Also, the composition of projectors at incident binary vertices squares to the identity, and several concatenated projectors result in a single projector. In other words, we can consider these edges as a single "smoothed" edge. We specify also that the procedure given to pass from pixel space to spin-networks automatically implies that the spin colors are the same at these edges, so that the spin-network does not become trivial due to mismatches at the binary vertices.
First, we apply Lemma A.2 to the top of the spin-network to obtain a factor of \(\begin{cases}d_{3}^{0}&c_{3}^{-1}&e_{3}^{0}\\ d_{3}^{1}&c_{3}^{0}&e_{4}^{1}\end{cases}\cdot\Delta_{e_{3}^{0}}^{-1}\theta(d_ {3}^{0},c_{3}^{0},e_{3}^{0})\) multiplying the spin-network of Figure 11.
Next, we apply the recoupling Theorem centered on the edges that have a perpendicular red marker. These recouplings can be applied in parallel, in the sense that they do not depend on each other, and the procedure can be performed simultaneously. Each recoupling now implies that a summation on compatible colors appears, along with a \(6j\)-symbol. The indices used for summation
Figure 10: The spin-network \(\mathcal{H}_{3}\) with the labeling scheme adopted in this article. Some of the edges’ labels are merged due to the fact that each edge is symmetrized through the Jones-Wenzl projector which, being a projector, is the identity when squared.
will be denoted by \(p\), and we obtain a global coefficient
\[\sum_{p_{3}^{0}}\left\{\begin{matrix}c_{2}^{-1}&c_{3}^{-1}&p_{3}^{0} \\ d_{3}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\}\sum_{p_{2}^{-1}}\left\{ \begin{matrix}e_{2}^{-1}&c_{2}^{-2}&p_{2}^{-1}\\ c_{3}^{-1}&c_{2}^{-1}&d_{2}^{-1}\end{matrix}\right\}\sum_{p_{2}^{1}}\left\{ \begin{matrix}d_{3}^{1}&d_{3}^{1}&p_{2}^{1}\\ d_{2}^{1}&e_{2}^{1}&c_{2}^{1}\end{matrix}\right\}\] \[\times\sum_{p_{1}^{-1}}\left\{\begin{matrix}d_{0}^{-1}&c_{2}^{-2}&p_ {1}^{-1}\\ e_{2}^{-1}&d_{1}^{0}&c_{1}^{-1}\end{matrix}\right\}\sum_{p_{1}^{1}}\left\{ \begin{matrix}c_{0}^{0}&e_{2}^{1}&p_{1}^{1}\\ d_{2}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\} \tag{6}\]
with the resulting spin-network given in Figure 12.
Now we can apply the diagrammatic Schur's Lemma (Lemma 7 in [24]) to all the bubbles appearing in Figure 12 and burst them all. This procedure introduces some \(\theta\)'s and quantum dimensions in the coefficients, but more importantly introduces Kronecker's deltas among the indices \(p\)'s. The coefficient multiplying every summand now becomes
\[\sum_{p_{3}^{0}}\left\{\begin{matrix}c_{2}^{-1}&c_{3}^{-1}&p_{3}^ {0}\\ d_{3}^{1}&d_{2}^{1}&e_{3}^{0}\end{matrix}\right\}\left\{\begin{matrix}e_{2}^{-1 }&c_{2}^{-2}&p_{3}^{0}\\ c_{3}^{-1}&c_{2}^{-1}&d_{2}^{-1}\end{matrix}\right\}\left\{\begin{matrix}d_{2} ^{1}&d_{3}^{1}&p_{3}^{0}\\ d_{2}^{2}&e_{2}^{1}&e_{2}^{1}\end{matrix}\right\}\] \[\times\frac{\theta(c_{2}^{-2},e_{2}^{-1},p_{3}^{0})}{\Delta_{p_{3} ^{0}}}\frac{\theta(c_{3}^{-1},c_{2}^{-1},p_{3}^{0})}{\Delta_{p_{3}^{0}}}\frac{ \theta(d_{2}^{1},d_{3}^{1},p_{3}^{0})}{\Delta_{p_{3}^{0}}}\frac{\theta(d_{2}^ {2},e_{2}^{1},p_{3}^{0})}{\Delta_{p_{3}^{0}}} \tag{7}\]
and the spin-network we obtain (for each given configuration of spin-colors) is given by Figure 13.
One extra application of Lemma A.2 now allows us to obtain a sum (over compatible spin-colors) of terms that are proportional to tetrahedra, where the previous coefficients now get an extra factor of \(\left\{\begin{matrix}d_{1}^{0}&d_{0}^{-1}&e_{1}^{0}\\ c_{1}^{0}&c_{1}^{0}&p_{3}^{0}\end{matrix}\right\}\cdot\Delta_{e_{1}^{0}}^{-1} \theta(d_{1}^{0},c_{1}^{0},e_{1}^{0}).\) Since the evaluation of the tetrahedron is known (see Section 8.5 in [24]), the algorithm stops, and we can evaluate the original \(\mathcal{H}_{3}\) through a sum over the compatible spin-colors, evaluations of tetrahedra, and evaluations of \(6j\)-symbols and \(\theta\)-nets.
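For reference, the elementary quantities entering these final evaluations can be computed directly. The sketch below implements quantum integers and quantum dimensions in Kauffman-Lins-style conventions; the sign conventions used here are our assumption and should be checked against [24], from which the \(\theta\)-nets and \(6j\)-symbols are then built.

```python
# Elementary building blocks for the final evaluations: quantum integers
# [n]_q and quantum dimensions Delta_n. Conventions (in particular signs)
# are an assumption and should be checked against [24].
import numpy as np

def q_int(n, q):
    """Quantum integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    if np.isclose(abs(q), 1.0):
        return n * q ** (n - 1)  # limit value at q = +1 or q = -1
    return (q ** n - q ** (-n)) / (q - 1.0 / q)

def quantum_dim(n, q):
    """Quantum dimension Delta_n = (-1)^n [n+1]_q of an edge of color n."""
    return (-1) ** n * q_int(n + 1, q)

# In the classical case q = -1 used in Section VI, Delta_n = n + 1:
print([quantum_dim(n, -1.0) for n in range(5)])  # 1, 2, 3, 4, 5 (as floats)
```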
The procedure just described for \(\mathcal{H}_{3}\), exemplifies the
Figure 11: First step of the algorithm applied to \(\mathcal{H}_{3}\).
Figure 12: Second step of the algorithm applied to \(\mathcal{H}_{3}\), where we have applied recoupling to all red marked edges of Figure 11.
Figure 13: Spin-network obtained from Figure 12 after bursting the bubbles through the diagrammatic Schur’s Lemma.
whole theory in Section III for the evaluation of \(\mathcal{H}_{n}\), and gives a concrete realization of the results of Theorem A.6 to pass from \(\mathcal{H}_{3}\) to \(\mathcal{H}_{2}\) (which is a tetrahedron).
```
0: Require: \(\mathcal{H}_{n}\) with given spin-colors \(\triangleright\) Initialization
0: Ensure: \(\langle\mathcal{H}_{n}\rangle\) \(\triangleright\) Evaluation of \(\mathcal{H}_{n}\)
1: while \(n\geq 3\) do
2:   Apply Lemma A.2 to the top of \(\mathcal{H}_{n}\)
3:   Apply the recoupling theorem on all edges that connect the crown to the bulk
4:   Remove bubbles through Schur's Lemma
5:   Apply Lemma A.2 to the edges connecting \(\mathcal{HH}_{n-1}\) to \(\mathcal{BO}_{n-1}\)
6:   Apply Lemma A.4 to write \(\mathcal{BO}_{n-1}\) in terms of \(\mathcal{O}_{n-2}\)
7:   Apply Lemma A.5 to obtain \(\mathcal{H}_{n-1}\)
8: end while
9: Perform the sum over all compatible colors from the while loop
10: Evaluate the tetrahedra
```
**Algorithm 1** General algorithm for the evaluation of \(\mathcal{H}_{n}\).
## V Computation of transition amplitudes
To compute the transition amplitudes between two different hexagonal spin-networks \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), we compute the physical inner product defined by Noui and Perez by means of a projector \(\mathcal{P}\) [28]. The definition of [28] was extended in [29] to the case of a projector where quantum recoupling theory at non-classical \(q\) is used. Physically, this corresponds to the case where the cosmological constant is nontrivial. We will refer to the projector and the physical product in the classical and quantum cases interchangeably.
A direct verification using the definitions found in [29] shows that the Haar integration (depicted as black boxes in [28]) satisfies the gauge fixing and summation identities found in the appendix of [28] in the quantum case as well. Using these two properties of the integration and the definition of the projector, we can reduce the computation of the transition amplitudes to evaluations as in Section III.
Let \(\mathcal{P}\) denote the projector of [28], as well as its modified version for the quantum case [29]. Then, the transition amplitude between two spin-networks \(\mathcal{H}_{n}\) and \(\mathcal{H}^{\prime}_{n}\) of the same size, i.e. the physical inner product, is defined by the formula
\[\langle\mathcal{H}_{n}|\mathcal{H}^{\prime}_{n}\rangle_{\text{Phys}}:=\langle \mathcal{H}_{n}|\mathcal{P}|\mathcal{H}^{\prime}_{n}\rangle,\]
where \(\langle\bullet|\bullet\rangle\) indicates the inner product defined via the Ashtekar-Lewandowski measure.
It can be shown that Equation 5 suffices to evaluate transition amplitudes as follows: the physical inner product between \(\mathcal{H}_{n}\) and \(\mathcal{H}^{\prime}_{n}\) is given by
\[\langle\mathcal{H}_{n}|\mathcal{H}^{\prime}_{n}\rangle_{\text{Phys}}=\overline {\langle\mathcal{H}_{n}\rangle}\langle\mathcal{H}^{\prime}_{n}\rangle, \tag{8}\]
where \(\langle\mathcal{H}_{j}\rangle\) indicates the evaluation computed in Section III and the overbar denotes complex conjugation. This is proved in Lemma A.9, and the main step is to use Figure 14 to decouple the evaluation of the two spin-networks (see proof of Lemma A.9 below).
## VI Phase-space properties
In this Section we explore the _phase space_ of values of the Perez-Noui projector for two different sizes of the hexagonal grid, i.e. \(N=2,3\). Furthermore, we set the value of \(q\) to be the _classical_ one, \(q=-1\). In order to deal with a finite number of coloring configurations for the hexagonal lattices, we need to set bounds on the possible compatible choices for each edge, i.e. we need to impose a minimum \(c_{m}\) and a maximum \(c_{M}\) color value, and enumerate all the possible coloring configurations in that range, with some constraints coming from the coloring procedure. This is a rather complex combinatorial problem: a first straightforward approach would be to randomly draw colors for each edge and impose the compatibility conditions at the vertices, with the drawback of searching among \((c_{M}-c_{m})^{N_{e}}\) combinations, with \(N_{e}\) the total number of edges, among which only a very small fraction actually yields compatible colorings.
As it appears, the main problem is to find a procedure that automatically yields compatible color configurations. The solution we put forward is to color the graph using its cycles as the fundamental units: assuming one finds all possible graph cycles \(\{\gamma_{i}^{(N)}\}\) (i.e. sequences of
Figure 14: Elimination of Haar integration from the bulk. The blue dashed line shows the elimination of diagonal Haar boxes, while the dotted red line shows the elimination of the horizontal Haar box.
edges that form closed loops) for the \(N\times N\) hexagonal lattice, then one can build compatible coloring configurations in the range \([c_{m},c_{M}]\) by increasing by one the color of each edge belonging to a given cycle \(\gamma_{i}^{(N)}\), with the possibility of increasing the colors of the edges belonging to any given cycle multiple times. This is a non-local construction of the colorings that automatically ensures the compatibility of each configuration.
Hence, after enumerating all the cycles, one can build all the possible configurations of maximum cycle color \(c_{M}=1\) simply by coloring one cycle per configuration; then one can build all configurations of maximum cycle color \(c_{M}=2\) by coloring all possible combinations of pairs of cycles, including choosing the same cycle twice, and so on.
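A minimal sketch of this cycle-based construction is given below. It uses networkx's cycle basis as a stand-in for the full cycle set \(\{\gamma_{i}^{(N)}\}\) (whose enumeration we describe in a separate work), so it illustrates the mechanism rather than reproducing our actual enumeration.

```python
# Illustrative sketch of the cycle-based coloring construction. We use
# networkx's cycle_basis as a stand-in for the full cycle set {gamma_i^(N)};
# stacking up to c_max cycles raises edge colors along each chosen cycle.
import itertools
import networkx as nx

def cycle_colorings(graph, c_max):
    """Edge colorings obtained by overlaying up to c_max (repeatable) cycles."""
    cycles = nx.cycle_basis(graph)  # each cycle is a list of vertices
    cycle_edges = []
    for cyc in cycles:
        cycle_edges.append({tuple(sorted((cyc[i], cyc[(i + 1) % len(cyc)])))
                            for i in range(len(cyc))})
    colorings = []
    for k in range(1, c_max + 1):
        for combo in itertools.combinations_with_replacement(
                range(len(cycle_edges)), k):
            coloring = {tuple(sorted(e)): 0 for e in graph.edges()}
            for idx in combo:
                for e in cycle_edges[idx]:
                    coloring[e] += 1  # raise the color along this cycle
            colorings.append(coloring)
    return colorings

# A single hexagonal cell has one basis cycle, so c_max = 2 gives 2 colorings.
print(len(cycle_colorings(nx.cycle_graph(6), c_max=2)))
```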
In this way, we introduce a possible parametrization of the phase space of all the (infinite) compatible colorings that is based on coloring cycles in order to assure the compatibility of each configuration. As a final remark, it is important to consider that finding all possible cycles of the hexagonal graph is again a non-trivial combinatorial problem for which we have developed our own strategy, which will be described in a separate work.
Let us now discuss the results for the projector values among any couple of configurations, in relation to a given range of cycle-colorings, for \(N=2,3\). As it turns out, it is not possible to store in memory the results for \(N=4\): the number of cycles in this case is \(N_{c}^{(4)}=18370\), which would all yield the same evaluation, while the number of configurations for all pairs of cycles is \(N_{c_{M}=2}^{(4)}=168737635\), which does not allow computing all the possible transition values and storing them in RAM at 32-bit precision. Hence, we choose to consider \(N=2\) with \(c_{m}=0\) and \(c_{M}=6\), and \(N=3\) with \(c_{m}=0\) and \(c_{M}=2\), yielding total numbers of transition values of \(N_{c_{m}=0,c_{M}=6}^{(2)}=1502260081\) and \(N_{c_{m}=0,c_{M}=2}^{(3)}=1569744400\).
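The pair count quoted above for \(N=4\) is simply the number of multisets of two cycles drawn from \(N_{c}^{(4)}\) and can be checked directly:

```python
# The N = 4 pair count is the number of multisets of size two drawn from
# N_c^(4) = 18370 cycles, i.e. binomial(N_c + 1, 2).
from math import comb

N_c = 18370
print(comb(N_c + 1, 2))  # 168737635, matching the value quoted in the text
```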
We start by associating an integer index \(i\) to each coloring configuration \(|\mathcal{H}_{n}^{(i)}\rangle\), so that the transition matrix \(\mathcal{A}\) reads
\[\mathcal{A}_{ij}=|\langle\mathcal{H}_{n}^{(i)}|\mathcal{H}_{n}^{(j)}\rangle|_ {\text{Norm}}^{2}=\frac{|\overline{\langle\mathcal{H}_{n}^{(i)}\rangle} \langle\mathcal{H}_{n}^{(j)}\rangle|^{2}}{\max\left\{|\langle\mathcal{H}_{n}^ {(i)}\rangle|^{4},|\langle\mathcal{H}_{n}^{(j)}\rangle|^{4}\right\}}\,, \tag{9}\]
which is such that the diagonal part is normalized to unity. Any random labeling of the coloring states \(|\mathcal{H}_{n}\rangle\) would not yield any apparent structure in the transition matrix; hence we decided to rank each state by means of the sum of all the transition probability values between the given state and all the others, i.e.
\[\mathcal{S}_{i}=\sum_{j}\mathcal{A}_{ij}. \tag{10}\]
As shown in Figs. 15 and 16, if one reorders the labeling according to increasing values of \(\mathcal{S}_{i}\), one can use the new ranked indices \(\{i_{\text{R}}\}\) to represent the ranked transition matrix, denoted \(\mathcal{A}_{i_{\text{R}}j_{\text{R}}}\). One then sees that the values are automatically structured in a block-diagonal form, where different states cluster in what we refer to as _classes_: within one class each state is equivalent to the others, in the sense that the transition probability is unity.
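The ranking procedure of Equations (9)-(10) amounts to a few lines of linear algebra. A minimal numpy sketch (illustrative, not our production code) is:

```python
# Minimal numpy sketch of Equations (9)-(10): build the normalized transition
# matrix from the spin-network evaluations, rank states by their row sums,
# and reorder so that the block-diagonal class structure becomes visible.
import numpy as np

def rank_transition_matrix(evals):
    evals = np.asarray(evals, dtype=complex)  # one evaluation <H^(i)> per state
    num = np.abs(np.outer(np.conj(evals), evals)) ** 2
    den = np.maximum.outer(np.abs(evals) ** 4, np.abs(evals) ** 4)
    A = num / den                      # Eq. (9): diagonal normalized to unity
    S = A.sum(axis=1)                  # Eq. (10): total transition probability
    order = np.argsort(S)              # ranked indices i_R
    return A[np.ix_(order, order)], S[order]

# States with equal |<H>| fall in the same class (transition probability 1).
A_ranked, S_ranked = rank_transition_matrix([1.0, 2.0, 1.0, -2.0, 5.0])
print(np.round(A_ranked, 3))
```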
Another remarkable property is that all the elements of a class also share the same value of the total sum \(\mathcal{S}_{i}\). This feature provides an additional property of these classes: each element belonging to a class has the same global scalar product with all the other elements within the configuration space. This structure shows how the Perez-Noui projector can be used to distinguish one class of elements from another, without any prior information, i.e. training: the structure spontaneously emerges when considering a simple ranking of the states. In other words, this result provides direct evidence for the ideas discussed in [8], according to which one expects DNNs to emerge as a semi-classical limit of TQNNs. The block-diagonal part of the transition matrix \(\mathcal{A}\) is a way of representing the saddle point that would be found by training a classifier DNN on the portion of the configuration space we study here. These results seem very promising for using TQNNs as image classifiers.
Figure 15: Transition matrix for \(N=2\) honeycomb lattice with \(q=-1\), maximum cycle-coloring value \(c_{M}=6\).
## VII Relation with the Ising Model
DNNs present many affinities with statistical models. Specifically, DNN architectures can be addressed from the perspective of statistical physics and Gibbs distributions. An area of research that was very active in the 80's hinged on the implementation of spin-glass models to unveil the way neural networks operate. A flourishing statistical approach is also represented by the so-called Boltzmann machines, networks of symmetrically connected neuron-like units in which neurons can be switched on or off according to a stochastic dynamics. Their learning algorithm [32] allows one to achieve complex pattern-recognition tasks by adopting a supervised approach. Boltzmann machines emerge as stochastic recurrent neural networks, which have been cast in statistical physics as the disordered versions of the Ising model [33], i.e. the Sherrington-Kirkpatrick model [34].
In particular, the generalisation ability was one of the battlefields of these investigations inspired by the statistical analysis of phase transitions. Quantum fluctuations can be rephrased as statistical fluctuations by means of a standard Wick rotation. The latter transforms the partition function of any quantum theory into the equivalent partition function in statistical mechanics equipped with the Gibbs-ensemble measure, namely the negative exponential of the Hamiltonian of the system. On the other hand, the connectivity naturally enters the definition of the semi-classical limit of the QNN/TQNN states through the concept of coarse-graining. Borrowing an intuition proper to statistical mechanics, we may think that blocking and coarse-graining procedures, directly applied at the quantum level on the TQNN states, single out a class of effective TQNN states that are supported on graphs characterised by a lower topological connectivity, and thus by a lower capacity -- we call these states statistical TQNN (STQNN). More concretely, from an operative point of view, the blocking and coarse-graining procedures are defined in terms of the ability to carry out measurements.
## VIII Conclusions and Outlooks
The enhancement of computational methods is an omnipresent driving factor of today's scientific panorama. The advancement of technological instrumentation has allowed researchers in every field to gather increasingly more data about virtually any aspect of natural science. Nonetheless, advancements in computational ability, with eventual breakthroughs, are still required, and are probably even more needed than in the past.
Quantum computing may represent a milestone along this trajectory. It may pave the way to a shift of perspective in computational methods, with outputs that are qualitatively different from and not comparable with classical computing. Quantum computing may furthermore enable the processing of data in quantum machines, including quantum computers, exploiting the quantum structure of matter.
In this article we have delved into the evaluation of spin-networks of hexagonal shape and arbitrary size. We have hence related these objects to the pixel space of images, in order to apply the new tools provided by topological quantum neural networks (TQNNs). We have then constructed an algorithm for the evaluation of the Perez-Noui projector on \(SU(2)\) [28], and extended this result to \(SU_{q}(2)\) [29].
Some aspects of our construction will deserve more detailed investigations in the future. The link between "local" features and "global" ones is among these, and appears of particular interest.
The squared norm of the normalized physical scalar product between two different states, \(\mathcal{A}_{nn^{\prime}}=|\langle\hat{\mathcal{H}}_{n}|\hat{\mathcal{H}}^{\prime}{}_{n}\rangle|^{2}\), can be used to rank the states as follows: fix the state \(|\hat{\mathcal{H}}^{\prime}{}_{n}\rangle\) and compute the partial sum \(\mathcal{S}_{n^{\prime}}=\sum_{n}\mathcal{A}_{nn^{\prime}}\); the value of \(\mathcal{S}_{n^{\prime}}\) can then be used to rank each state \(|\hat{\mathcal{H}}^{\prime}{}_{n}\rangle\). At the end of the ranking procedure one finds that the ranked matrix \(\bar{\mathcal{A}}_{nn^{\prime}}\) has a block-diagonal
Figure 16: Transition matrix for \(N=3\) honeycomb lattice with \(q=-1\), maximum cycle-coloring value \(c_{M}=2\).
structure where the blocks are all related to transitions \(|\langle\hat{\mathcal{H}}_{n}|\hat{\mathcal{H}}^{\prime}{}_{n^{\prime}}\rangle|^{2}=1\). It also happens that each block is associated to a unique value of the partial sum \(\mathcal{S}_{n^{\prime}}\).
Hence, the states belonging to the blocks display two fundamental properties: a "local" property, i.e. the fact that each state has a scalar product equal to one with any other state belonging to the same block; and a "global" property, i.e. that all the states belonging to a block yield the same value for the partial sum \(\mathcal{S}_{n^{\prime}}\). This is a remarkable property that links a local feature to a global one. It is possible to associate each of the diagonal blocks to a "class" that, upon visual inspection, seems to yield reasonably distinguishable spin-networks in terms of the coloring.
The origin of the classification mechanism also deserves more detailed analyses. If one assumes that the overall set of all possible transitions, computed using the Perez-Noui projector, allows one to compute the Turaev-Viro invariant, then it might be possible that the partial sum \(\mathcal{S}_{n^{\prime}}\) is related to the Reshetikhin-Turaev invariant. If this is the case, then each diagonal block might be related to a different value of the Reshetikhin-Turaev invariant, thus providing a mathematical foundation for the mechanism that yields the classification we observe in the ranked transition matrix \(\bar{\mathcal{A}}_{nn^{\prime}}\).
Assuming that the Turaev-Viro invariant can be computed from the transition matrix \(\mathcal{A}_{nn^{\prime}}\), the diagonal blocks in \(\bar{\mathcal{A}}_{nn^{\prime}}\) might represent the saddle point of the Turaev-Viro evaluation, if one considers the latter as composed of a sum of exponentials of the values of \(\bar{\mathcal{A}}_{nn^{\prime}}\).
In conclusion, the intrinsic quantumness of the TQNN framework [4; 7], in which the dynamical evolution of the boundary states (input/output data) is attained through the sum over an infinite amount of intermediate virtual states (filters/hidden layers), has been realised here by applying the physical projectors to the spin-network states. The quantumness that is intrinsic in this proposed new framework allows us to consider a sum over infinite (virtual) hidden layers, being conjectured at the same time to avoid the issues of redundancy and overfitting [8]. This instantiates novel (quantum) algorithms, the effectiveness and accuracy of which we will have to continue testing, investigating the amount of computational time TQNNs spend in comparison with classical counterparts, such as deep neural networks (DNNs), and delving into the material implementations that exploit topological condensed matter structures described in terms of string-nets [13; 14]. All results can be independently reproduced through the "idea.deploy" framework, https://github.com/lullimat/idea.deploy
## Appendix A Proofs of the results
In this appendix we collect the main results (and their proofs) used in the article to obtain the algorithm.
**Lemma A.1**.: _It holds that \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\) for every \(n\in\mathbb{N}\), and for some choice of vertices \(\bar{v}\) in \(\mathcal{H}_{n}\)._
Proof.: The proof is by induction on \(n\), and it does not depend on the colorings of the spin-networks, so that we can omit keeping track of the spin colors and just consider the underlying graphs. The base of induction holds true, since for \(n=1\) the graph \(\mathcal{H}_{n}\) is just a single hexagon cell, and \(\mathcal{H}_{2}\) is obtained by attaching \(\mathcal{O}_{1}\) on the three top vertices of the hexagon cell. Suppose now that the result has been proved for some \(k>1\), and let us consider \(\mathcal{H}_{k+1}\). In the graph of \(\mathcal{H}_{k+1}\) we can isolate a top layer, where we imagine cutting the edges that connect the outer perimeter to the inner vertices of \(\mathcal{H}_{k+1}\). This leaves a graph \(\mathcal{H}_{k}\) and detaches an open-edge graph that is readily identified with a copy of the graph \(\mathcal{O}_{k}\). We observe that in this step it might be necessary to eliminate extra vertices inside the edges of the detached graph. This is indeed possible since a binary vertex can be eliminated, and the symmetrizers that label the two edges are compacted into one, using idempotency of the Jones-Wenzl symmetrizer.
**Lemma A.2**.: _The following equality holds for all choices of compatible spin colors \(a,b,c,d,e,f\):_
Proof.: Applying recoupling to the edge \(e\) we obtain the equality
Now, applying Lemma 7 of [24] (i.e. the diagrammatic Schur's Lemma) we find that the only term in the sum that is not trivial is the one corresponding to \(i=f\), and moreover the previous equation becomes
where \(\theta(a,d,f)\) denotes the value of the \(\theta\)-net
The evaluation of the latter \(\theta\)-net cancels out with that of the renormalizations (see Appendix A of [35]) \(\sqrt{\theta(a,d,f)}\) of the two 3-vertices \((a,d,f)\) and \((a,d,i)\), completing the proof.
**Lemma A.3**.: _Let \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\) be a honeycomb spin-network with the labeling scheme described above. Then we have_
\[\begin{split}\mathcal{H}_{n+1}&(\bar{a},\bar{b}, \bar{c},\bar{d},\bar{e})=\Delta_{c^{0}_{n+1}}\theta(d^{0}_{n+1},c^{0}_{n+1},e^ {0}_{n+1})\\ &\quad\times\frac{\theta(i^{0}_{n+1},c^{-1}_{n+1},c^{-1}_{n})}{ \Delta_{i^{0}_{n+1}}}\frac{\theta(i^{0}_{n+1},d^{1}_{n+1},d^{1}_{n})}{\Delta_ {i^{0}_{n+1}}}\\ &\frac{\theta(i^{-\lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{ 2}\rfloor},c^{-\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1},c^ {-\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1})}{\Delta_{i^{- \lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{2}\rfloor}}}\frac{\theta(i^{ \lfloor\frac{n}{2}\rfloor}_{\lfloor\frac{n}{2}\rfloor},c^{\lfloor\frac{n}{2 }\rfloor-1}_{\lfloor\frac{n}{2}\rfloor+1},c^{\lfloor\frac{n}{2}\rfloor-1}_{ \lfloor\frac{n}{2}\rfloor+1})}{\Delta_{i^{\lfloor\frac{n}{2}\rfloor}_{\lfloor \frac{n}{2}\rfloor}}}\\ &\quad\times\begin{cases}d^{0}_{n+1}\ \ c^{-1}_{n+1}\ \ e^{0}_{n+1}\end{cases}\\ d^{1}_{n+1}\ \ c^{0}_{n+1}\ \ e^{0}_{n+2}\end{cases}\\ &\quad\times\mathcal{HH}_{n}(\bar{a},\bar{b},\bar{c},\bar{d}, \bar{e})\circ_{\bar{v}}\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{ e})\\ &\quad\times\sum_{\bar{i}}\Psi(\bar{a},\bar{b},\bar{c},\bar{d}, \bar{e}\mid\bar{i})\,\iota(\Psi(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid\bar{ i})),\end{split} \tag{10}\]
_where \(\Psi\) and \(\iota\) were defined above, and the fractions appear only when \(n>2\)._
Proof.: We proceed by using Lemma A.1 and recoupling theory. First, let us consider the simpler case \(n=2\), which is verified as follows. We write \(\mathcal{H}_{2}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\mathcal{H}_{1}(\bar{ a},\bar{b},\bar{c},\bar{d},\bar{e})\circ_{\bar{v}}\mathcal{O}_{2}(\bar{a},\bar{b}, \bar{c},\bar{d},\bar{e})\). Let us omit the labels \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) for simplicity. Then, we apply Lemma A.2 on the central edge right above the hexagonal cell of \(\mathcal{H}_{1}\), as given in the decomposition of \(\mathcal{H}_{2}\) above. The resulting spin-network is, with a complex \(6j\) factor multiplying it, \(\mathcal{HH}_{1}\circ_{\bar{v}}\mathcal{BO}_{1}\), where \(\bar{v}\) consists of the two vertices on the sides of the hexagonal cell \(\mathcal{H}_{1}\). The complex factor appearing in the sum is the \(6j\)-symbol determined by Lemma A.2. In this case there is a single \(6j\), which is seen directly to coincide with the first factor in the formula in the statement of the lemma. The terms containing \(\Psi\) and the fractions containing \(\theta\) and \(\Delta\) are not present in this case. The case for arbitrary \(n\) is similar, and it only requires more applications of the recoupling theorem. More specifically, we apply Lemma A.2 to the top of the spin-network. This produces the factor
\[\Delta_{c^{0}_{n+1}}\theta(d^{0}_{n+1},c^{0}_{n+1},e^{0}_{n+1})\begin{cases}d^{ 0}_{n+1}\ \ c^{-1}_{n+1}\ \ e^{0}_{n+1}\\ d^{1}_{n+1}\ \ c^{0}_{n+1}\ \ e^{0}_{n+2}\end{cases}\]
which is the prefactor appearing in the statement. Then, we apply recoupling to the edges that are used to connect \(\mathcal{O}_{n}\) to \(\mathcal{H}_{n}\) in the decomposition \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\), along with the bottom edges of the most lateral hexagons. For each coloring, we now have to consider the coefficients appearing at each application of the recoupling theorem. Now, proceeding along the left side of the graph supporting \(\mathcal{O}_{n}\), we encounter the recoupling of edges \(d^{-\lfloor\frac{n+2}{2}\rfloor+\lfloor\frac{n+2}{2}\rfloor}_{\lfloor\frac{n}{ 2}\rfloor}\), while going in the opposite direction gives the recoupling on \(c^{\lfloor\frac{n+2}{2}\rfloor-\lfloor\frac{k+1}{2}\rfloor}_{\lfloor\frac{k+1} {2}\rfloor}\). This gives rise to the \(6j\)-symbols that constitute the terms indexed by \(k\) appearing in the product that defines \(\Psi\) and \(\iota\Psi\), where one needs to sum over all the compatible \(i\), with respect to the other entries of the \(6j\)-symbol. Finally, on the bottom edges of the equatorial belt of hexagons in the copy of \(\mathcal{H}_{n}\) found inside of \(\mathcal{H}_{n+1}\) we get recoupling on \(c^{-\lfloor\frac{n}{2}\rfloor-1}_{\lfloor\frac{n}{2}\rfloor+1}\) and \(d^{\lfloor\frac{n}{2}\rfloor+1}_{\lfloor\frac{n}{2}\rfloor+1}\), which gives rise to the last two factors in the definition of \(\Psi\) and \(\iota\Psi\). At this point we have a decomposition of the geometric support of the spin-network as \(\mathcal{HH}_{n}\circ\mathcal{BO}_{n}\) with four extra bubbles. Using Lemma 7 in [24] to burst the bubbles, we obtain \(\mathcal{HH}_{n}\circ\mathcal{BO}_{n}\) and the remaining factors that consist of the fractions in the statement of the lemma. This completes the proof.
**Lemma A.4**.: _We have, for any \(n\geq 4\), the equality_
\[\mathcal{BO}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e},\bar{f})\] \[= \prod_{n-1\leq k\leq 2n-5}\begin{Bmatrix}c_{k}^{\lfloor\frac{k+ 2}{2}\rfloor-n-1}&p_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1}&d_{k}^{\lfloor\frac{ k+1}{2}\rfloor-n-2}\\ p_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2} \rfloor-n-1}&c_{k+1}^{\lfloor\frac{k+2}{2}\rfloor-n}\end{Bmatrix}\] \[\times\frac{\theta(c_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-1},e_{k} ^{\lfloor\frac{k+2}{2}\rfloor-n-1},d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}{ \Delta_{d_{k}^{\lfloor\frac{k+1}{2}\rfloor-n-2}}}\] \[\times\begin{Bmatrix}d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&p_ {k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1}&c_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n +2}\\ p_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1}&e_{k+1}^{-\lfloor\frac{k+2}{2} \rfloor+n+1}&d_{k+1}^{-\lfloor\frac{k+2}{2}\rfloor+n}\end{Bmatrix}\] \[\times\frac{\theta(d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+1},e_{k+ 1}^{-\lfloor\frac{k+2}{2}\rfloor+n+1},d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n +2}}{\Delta_{d_{k}^{-\lfloor\frac{k+1}{2}\rfloor+n+2}}}\] \[\times\mathcal{O}_{n-1}.\]
Proof.: This is an application of the bubble move of Lemma A.2 to each bubble of \(\mathcal{BO}_{n}\).
**Lemma A.5**.: _We have_
\[\mathcal{HH}_{n+1}\circ_{\bar{v}}\mathcal{O}_{n}=\mathcal{H}_{n+1},\]
_where \(\bar{v}\) is the set of vertices as in Lemma A.3._
Proof.: This result follows from a direct inspection of the graph support of the spin-networks \(\mathcal{HH}_{n+1}\) and \(\mathcal{O}_{n}\). In fact, \(\mathcal{HH}_{n+1}\) is obtained from \(\mathcal{H}_{n+1}\) by discarding the upper hexagonal cells. But then, attaching \(\mathcal{O}_{n}\) re-constructs the missing hexagonal cells.
**Theorem A.6**.: _Let \(\mathcal{H}_{n+1}\) denote a honeycomb of size \(n+1\), and let \(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\) denote compatible spin colors according to the scheme described above. Then_
\[\mathcal{H}_{n+1}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})= \sum_{\bar{i}}\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e} \mid\bar{i})\,\iota\hat{\Psi}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}\mid \bar{i})\] \[\times\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e}),\]
_where \(\mathcal{H}_{n}\) inherits the spin colors of \(\mathcal{H}_{n+1}\)._
Proof.: We apply the lemmas previously proved to obtain the result. To simplify notation we omit writing the labels of the spin-networks, but we will assume throughout to follow the conventions outlined above. Observe that using Lemma A.1 we can write \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ_{\bar{v}}\mathcal{O}_{n}\). Then, following the convention for the spin colors established in the paragraph preceding Lemma A.3, the edges connecting \(\mathcal{O}_{n}\) to \(\mathcal{H}_{n}\) are labeled by \(e_{k}\), with \(k=0,\dots,2n-2\). So, we apply Lemma A.3 to these edges to obtain \(\mathcal{H}_{n+1}=\sum_{\bar{i}}\Psi_{\bar{i}}\,\iota\Psi_{\bar{i}}\,\mathcal{HH}_{n}\circ_{\bar{v}}\mathcal{BO}_{n}\), where spin colors are intended as in the lemma. From Lemma A.4 we have \(\mathcal{BO}_{n}=\hat{\Psi}_{\bar{i}}\,\iota\hat{\Psi}_{\bar{i}}\,\mathcal{O}_{n-1}\). Therefore, we have found that \(\mathcal{H}_{n+1}=\sum_{\bar{i}}\hat{\Psi}_{\bar{i}}\,\iota\hat{\Psi}_{\bar{i}}\,\mathcal{HH}_{n}\circ_{\bar{v}}\mathcal{O}_{n-1}\). Lastly, we apply Lemma A.5 to rewrite \(\mathcal{HH}_{n}\circ_{\bar{v}}\mathcal{O}_{n-1}=\mathcal{H}_{n}\). This completes the proof.
**Corollary A.7**.: _The number of summation operations needed to evaluate \(\mathcal{H}_{n}\) grows quadratically with \(n\). More specifically, if \(a_{n}\) denotes the number of summations at \(n\), we have \(a_{n}=a_{n-1}+2n-5\)._
Proof.: This is an immediate consequence of Theorem A.6 using induction. In fact, at each step, i.e. for a fixed \(n\), we have a sum on \(2n-5\) indices. To see this, observe that from the proof of Theorem A.6 we have to apply recoupling \(2n-1\) times, twice. The second round of re-couplings does not introduce new labels in the summations, since in Lemma A.4 there is no sum. In order to apply Lemma A.5, we need to apply Lemma 7 from [24] on the top of the spin-network, where three of the indices upon which we sum are present. This allows us to reduce the sum to one single index, and factor a summation of quantum dimensions coming from \(i_{0}\) in the final result. Moreover, we notice that the base of \(\mathcal{O}_{n}\) has a merging of \(4\) labels, and therefore two more sums are suppressed. This gives the total number of \(2n-5\) summation indices. Now, we have reduced our evaluation to \(\mathcal{H}_{n-1}\), which inductively carries a summation over \(a_{n-1}\) indices. This completes the proof.
**Corollary A.8**.: _The evaluation of \(\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})\), with \(n\geq 2\), is given by the formula_
\[\mathcal{H}_{n}(\bar{a},\bar{b},\bar{c},\bar{d},\bar{e})=\sum_{k=2}^{n}\hat{ \Psi}_{\bar{i}_{k}}\iota\hat{\Psi}_{\bar{i}_{k}}\theta(c_{1}^{2},e_{0}^{2},b_{0 }^{2}),\]
_where the coefficients \(\hat{\Psi}_{\bar{i}_{k}}\) were given above and the index \(k\) refers to the iteration of the application of Theorem A.6._
Proof.: We proceed by induction over \(n\). For \(n=2\) we evaluate the spin-network \(\mathcal{H}_{2}\) directly. Apply Lemma A.2 to the top of the spin-network, where we indicate the top spin color by \(t\). The other colors that take part in the application of the lemma are, following the previously described conventions, \(c_{1}^{2},b_{0}^{2},c_{2}^{2},b_{2}^{2}\) and \(e_{0}^{2}\), which take the places of \(a,d,c,b\) and \(f\), respectively, in the lemma. Then we obtain
Applying Lemma A.2 a second time, now with \(g\) playing the role of \(e\) in the diagram of the lemma, we find that
\[\mathcal{H}_{2}=\Delta_{e_{0}^{2}}\begin{Bmatrix}c_{1}^{2}&b_{1}^{2}&e_{0}^{2} \\ c_{0}^{2}&b_{0}^{2}&t\end{Bmatrix}\begin{Bmatrix}b_{1}^{2}&c_{1}^{2}&e_{0}^{2} \\ d_{0}^{2}&a_{0}^{2}&g\end{Bmatrix}\theta(c_{1}^{2},e_{0}^{2},b_{0}^{2}),\]
which concludes the proof of the base of induction. To derive the general formula, now we apply Theorem A.6
to reduce the case of dimension \(n+1\) to \(n\), where \(n=2\) reduces to a \(\theta\)-net as just shown above. With the stratified labelings introduced above, to pass from \(\mathcal{H}_{n+1}\) to \(\mathcal{H}_{n}\) we need to sum over all the \(\bar{i}\). Once we have reduced the size of \(\mathcal{H}_{n}\) by one degree, we apply again Theorem A.6 until we reach the \(n=2\) case. Each time, we relabel all the spin-colors by \(\bar{a}_{k},\bar{b}_{k},\bar{c}_{k},\bar{d}_{k},\bar{e}_{k}\) to reapply all the formulas. This completes the proof.
**Lemma A.9**.: _Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be honeycomb spin-networks, and let \(\mathcal{P}\) be as above. Then, the physical inner product is given by_
\[\langle\mathcal{H}_{2}|\mathcal{H}_{1}\rangle_{\mathrm{Phys}}=\overline{\langle\mathcal{H}_{2}\rangle}\langle\mathcal{H}_{1}\rangle,\]
_where \(\langle\mathcal{H}_{j}\rangle\) indicates the evaluation computed in Section III._
Proof.: We apply the gauge fixing identity and the summation identity (see [36; 37; 28]) repeatedly to eliminate all the Haar integration boxes in the bulk. The case of \(2\times 2\) honeycomb \(\mathcal{H}_{2}\) spin-networks is shown in Figure 14. In this case one proceeds as follows. First, making use of the integration boxes in the perimeter, it is possible to eliminate the diagonal Haar integration boxes, as is explicitly done for the top-left diagonal box via the blue dashed line in Figure 14. Then, we can draw a circle that intersects the spin-networks only horizontally through the central integration boxes, shown in Figure 14 as a dotted red line. This allows us to eliminate the central box. Now, only the perimeter boxes are left, and they have a summation over the projector lines where no other integration box appears. We can therefore apply the summation identity to eliminate them, and thus decouple the spin-networks, completing the \(2\times 2\) case. The figure shows a transition between hexagonal spin-networks where the initial and final states are superposed. The effect of the projector is that of adding lines for colors \(k\) compatible with the spin colors of the states, and Haar integration (black) boxes on the edges.
We observe that from the case \(n\times n\) with \(n=3\) onwards, one complication easily arises. In fact, it is not possible to directly eliminate all the boxes in the bulk by only utilizing the gauge fixing identity. It is in fact possible to eliminate only one box per horizontal row (which in the \(2\times 2\) case happens to be the only horizontal box). However, since the diagonal rows are eliminated via gauge fixing, the horizontal rows can be cleared by an application of the summation identity. The perimeter is likewise cleared of any Haar integration boxes.
Although the previous discussion provides a relatively detailed argument, we present here the general proof by induction, using the decomposition \(\mathcal{H}_{n+1}=\mathcal{H}_{n}\circ\mathcal{O}_{n}\) from Lemma A.1, for the sake of completeness. In addition, this approach is practically useful for the implementation of the algorithm, which takes advantage of the hierarchical structure of the honeycomb spin-networks.
In practice, we use the inductive step to remove the integration boxes from the bulk of \(\mathcal{H}_{n}\), and then use the gauge fixing identity between the integration boxes of \(\mathcal{O}_{n}\) and those boxes in the perimeter of \(\mathcal{H}_{n}\) that are in the bulk of \(\mathcal{H}_{n+1}\). Observe that when decomposing \(\mathcal{H}_{n+1}\), the top edge of \(\mathcal{H}_{n}\) is split in two by a vertex connected with \(\mathcal{O}_{n}\), so the induction is not immediately applicable. However, this is not a problem, as the two integration boxes that arise on the two sides of the top vertex abut an external cell, so that any line drawn through them can go out of \(\mathcal{O}_{n}\) without intersecting the spin-network at a point other than a Haar integration box. So, the inductive procedure can be applied with the slight modification of using the gauge fixing identity to delete the diagonal integration boxes with two top integration boxes rather than a single one. The reader can verify this assertion directly by drawing the connecting part of \(\mathcal{H}_{n}\) and \(\mathcal{O}_{n}\). Now, we observe that the only horizontal integration box that is left (on the central vertical leg of \(\mathcal{O}_{n}\)) can be removed by another application of the gauge fixing identity. The two aforementioned diagonal boxes on top of \(\mathcal{H}_{n}\) cannot both be eliminated directly, but just one of them, via gauge fixing. However, the remaining one, which is now the only non-perimeter box left, is eliminated via the summation identity, since no other box appears in the top cell of \(\mathcal{H}_{n}\). The remaining perimeter boxes are eliminated once again via the summation identity, completing the inductive step.
Now, using the definition of the Perez-Noui projector \(\mathcal{P}\) by means of the Ashtekar-Lewandowski measure we evaluate the spin networks in the identity element of \(SU(2)\) in the classical case, while we apply them on an element \(H^{-1}\) in the quantum case, where \(H^{-1}\) reproduces the quantum recoupling theory [29]. This gives us the evaluation of \(\mathcal{H}_{j}\), \(j=1,2\), from Section III as stated.
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks

Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao

arXiv:2303.00565v1, 2023-03-01, http://arxiv.org/abs/2303.00565v1
###### Abstract
Sharpness aware minimization (SAM) optimizer has been extensively explored as it can generalize better for training deep neural networks via introducing extra perturbation steps to flatten the landscape of deep learning models. Integrating SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically to train large-scale deep neural networks without theoretical guarantee due to the triple difficulties in analyzing the coupled perturbation step, adaptive learning rate and momentum step. In this paper, we try to analyze the convergence rate of AdaSAM in the stochastic non-convex setting. We theoretically show that AdaSAM admits a \(\mathcal{O}(1/\sqrt{bT})\) convergence rate, which achieves linear speedup property with respect to mini-batch size \(b\). Specifically, to decouple the stochastic gradient steps with the adaptive learning rate and perturbed gradient, we introduce the delayed second-order momentum term to decompose them to make them independent while taking an expectation during the analysis. Then we bound them by showing the adaptive learning rate has a limited range, which makes our analysis feasible. To the best of our knowledge, we are the first to provide the non-trivial convergence rate of SAM with an adaptive learning rate and momentum acceleration. At last, we conduct several experiments on several NLP tasks, which show that AdaSAM could achieve superior performance compared with SGD, AMSGrad, and SAM optimizers.
Sharpness-aware minimization, Adaptive learning rate, Non-convex optimization, linear speedup.
## I Introduction
Sharpness-aware minimization (SAM) [1] is a powerful optimizer for training large-scale deep learning models by explicitly minimizing the gap between the training performance and generalization performance. It has achieved remarkable results in training various deep neural networks, including ResNet [1, 2, 3], vision transformer [4, 5], language models [6, 7, 8], on extensive benchmarks.
However, SAM-type methods suffer from several issues when training deep neural networks, especially the huge computation cost and the heavy hyper-parameter tuning procedure. In each iteration, SAM needs a double gradient computation compared with classic optimizers, like SGD, Adam [9], AMSGrad [10], due to the extra perturbation step. Hence, SAM requires forward and back propagating twice for one parameter update, resulting in twice the computation cost of the classic optimizers. Moreover, as there are two steps during the training process, it needs double the hyper-parameters, which makes the learning rate tuning unbearable and costly.
Adaptive learning rate optimization methods [11] scale the gradients based on historical gradient information to accelerate convergence by tuning the learning rate automatically. These methods, such as Adagrad [12], Adam [9], and AMSGrad [10], have been proposed for solving computer vision, natural language processing, and generative neural network tasks [11, 13, 14, 15]. Recently, several works have tried to ease the learning rate tuning in SAM by inheriting the triplet advantages of SAM, adaptive learning rate, and momentum acceleration. For example, [16] and [17] train ViT models and NLP models with adaptive learning rates and momentum acceleration, respectively. Although remarkable performance has been achieved, their convergence is still unknown when the adaptive learning rate and momentum acceleration are used in SAM. Directly analyzing the convergence is complicated and difficult due to the three coupled steps of optimization, i.e., the adaptive learning rate estimation is coupled with the momentum step and the perturbation step of SAM.
In this paper, we analyze the convergence rate of SAM with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, in the non-convex stochastic setting. To circumvent the difficulty in the analysis, we develop a novel technique to decouple the three-step training of SAM from the adaptive learning rate and momentum step. The analysis procedure is mainly divided into three parts. The first part analyzes the SAM procedure. Then we analyze the second step, which adopts the adaptive learning rate method. We introduce a second-order momentum term from the previous iteration, which is related to the adaptive learning rate and independent of SAM while taking an expectation. Then we can bound the term composed of the SAM gradient and the previous second-order momentum because the adaptive learning rate has a limited range. In the last part, we analyze the momentum acceleration that is combined with SAM and the adaptive learning rate. The momentum acceleration leads to an extra term in the convergence analysis. Here, we introduce an auxiliary sequence to absorb it and show that its summation over all iterations is controllable. We prove that AdaSAM enjoys a linear speedup property with respect to the batch size, i.e. \(\mathcal{O}(1/\sqrt{bT})\) where
\(b\) is the mini-batch size. Empirically, we apply AdaSAM to train the RoBERTa model on the GLUE benchmark to evaluate our theoretical findings. We show that AdaSAM achieves the best performance in experiments, winning 6 of 8 tasks, and the linear speedup can be clearly observed.
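As a back-of-the-envelope illustration of the linear speedup, an \(\mathcal{O}(1/\sqrt{bT})\) rate means that reaching a target accuracy \(\epsilon\) requires \(T\propto 1/(b\epsilon^{2})\) iterations, so doubling the mini-batch size halves the iteration count (up to constants):

```python
# With an O(1/sqrt(bT)) rate, reaching accuracy eps needs T ~ 1/(b eps^2):
# doubling the mini-batch size b halves the required iteration count T.
eps = 1e-2
for b in [32, 64, 128, 256]:
    T = 1.0 / (b * eps ** 2)  # iterations needed, up to problem constants
    print(b, int(T))
```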
In the end, we summarize our contributions as follows:
* We present the first convergence guarantee of the adaptive SAM method with momentum acceleration under the stochastic non-convex setting. Our results suggest that a large mini-batch can help convergence due to the established linear speedup with respect to batch size.
* We conduct a series of experiments on various tasks. The results show that AdaSAM outperforms most of the state-of-the-art optimizers and the linear speedup is verified.
## II Preliminary and Related Work
In this section, we first describe the basic problem setup and then introduce several related works on the SAM, adaptive learning rate and momentum steps.
### _Problem Setup_
In this work, we focus on stochastic nonconvex optimization
\[\min_{x\in\mathbb{R}^{d}}f(x):=\mathbb{E}_{\xi\sim D}f_{\xi}(x), \tag{1}\]
where \(d\) is the dimension of the variable \(x\), \(D\) is the unknown distribution of the data samples, \(f_{\xi}(x)\) is a smooth and possibly non-convex function, and \(f_{\xi_{i}}(x)\) denotes the objective function at the sampled data point \(\xi_{i}\) according to the data distribution \(D\). In machine learning, it covers empirical risk minimization as a special case, and \(f\) is the loss function when the dataset \(D\) covers \(N\) data points, i.e., \(D=\{\xi_{i},i=1,2,\ldots,N\}\). Problem (1) then reduces to the following finite-sum problem:
\[\min_{x\in\mathbb{R}^{d}}f(x):=\frac{1}{N}\sum_{i}f_{\xi_{i}}(x). \tag{2}\]
Notations. Without additional declaration, we represent \(f_{i}(x)\) as \(f_{\xi_{i}}(x)\) for simplification, which is the \(i\)-th loss function, while \(x\in\mathbb{R}^{d}\) is the model parameter and \(d\) is the parameter dimension. We denote the \(l_{2}\) norm as \(\|\cdot\|_{2}\). A Hadamard product is denoted as \(a\odot b\), where \(a\), \(b\) are two vectors. For a vector \(a\in\mathbb{R}^{d}\), \(\sqrt{a}\) is denoted as a vector whose \(j\)-th value, \((\sqrt{a})_{(j)}\), is equal to the square root of \(a_{j}\).
### _Related Work_
Sharpness-aware minimization. Many works try to improve the generalization ability of deep learning models during training. Methods such as dropout [18], weight decay [19], and regularization methods [20, 21] provide an explicit way to improve generalization. Previous work shows that sharp minima may lead to poor generalization whereas flat minima perform better [22, 23, 24]. Therefore, it is popular to consider sharpness as closely related to generalization. Sharpness-aware minimization (SAM) [1] targets flat minimizers explicitly by minimizing the training loss uniformly in an entire neighborhood. Specifically, SAM aims to solve the following minimax saddle point problem:
\[\min_{x}\max_{\|\delta\|\leq\rho}f(x+\delta)+\lambda\|x\|_{2}^{2}, \tag{3}\]
where \(\rho\geq 0\) and \(\lambda\geq 0\) are two hyperparameters. That is, the perturbed loss function of \(f(x)\) in a neighborhood is minimized instead of the original loss function \(f(x)\). By using Taylor expansion of \(f(x+\delta)\) with respect to \(\delta\), the inner max problem is approximately solved via
\[\delta^{*}(x) =\operatorname*{arg\,max}_{\|\delta\|\leq\rho}f(x+\delta)\] \[\approx\operatorname*{arg\,max}_{\|\delta\|\leq\rho}f(x)+\delta^ {\top}\nabla f(x)\] \[=\operatorname*{arg\,max}_{\|\delta\|\leq\rho}\delta^{\top} \nabla f(x)=\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}.\]
By dropping the quadratic term, (3) is simplified as the following minimization problem
\[\min_{x}f\left(x+\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}\right). \tag{4}\]
The stochastic gradient of \(f\left(x+\rho\frac{\nabla f(x)}{\|\nabla f(x)\|}\right)\) on a batch of data \(b\) includes a Hessian-vector product; SAM further approximates the gradient by
\[\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}\right) \approx\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b }(x)\|}}.\]
Then, along the negative direction \(-\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}}\), SGD is applied to solve the surrogate minimization problem (4). It is easy to see that SAM requires twice gradient back-propagation, i.e., \(\nabla f_{b}(x)\) and \(\nabla_{x}f_{b}(x)\big{|}_{x+\rho\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}}\). Due to the existence of hyperparameter \(\rho\), one needs to carefully tune both \(\rho\) and learning rate in SAM. In practice, \(\rho\) is predefined to control the radius of the neighborhood.
Recently, several variants of SAM have been proposed to improve its performance. For example, [16, 8, 17] have empirically incorporated an adaptive learning rate into SAM and shown impressive generalization accuracy, while the convergence analysis has never been studied. ESAM [25] proposes an efficient method that sparsifies the gradients to alleviate the double computation cost of backpropagation. ASAM [17] modifies SAM by adaptively scaling the neighborhood so that the sharpness is invariant to parameter re-scaling. GSAM [16] simultaneously minimizes the perturbed function and a newly defined surrogate gap function to further improve the flatness of minimizers. Liu et al. [26] also study SAM in the large-batch training scenario and periodically update the perturbed gradient. Recently, [3, 8] improve the efficiency of SAM by adopting a sparse gradient perturbation technique. [27, 28] extend SAM to the federated learning setting with a significant performance gain. On the other hand, some works analyze the convergence of SAM, such as [29], without considering the normalization step, i.e., the normalization in \(\frac{\nabla f_{b}(x)}{\|\nabla f_{b}(x)\|}\).
Adaptive optimizer. Adaptive optimizers automatically adjust the learning rate based on historical gradient information. The first adaptive method, Adagrad [12], achieves better results than other first-order methods under the convex setting. When training deep neural networks, however, Adagrad decreases the learning rate rapidly, which degrades performance. Adadelta [30] was proposed to remedy this and introduces a learning rate based on an exponential moving average of historical gradients. Adam [9] additionally adds a momentum step to stabilize the training process, and it shows great performance in many tasks. However, Reddi et al. [10] give a counterexample showing that it can fail to converge even when the objective function is convex, and propose an alternative method called AMSGrad with a convergence guarantee. Since then, many works [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] have studied the convergence of adaptive methods and their variants in the nonconvex setting. However, their analysis techniques cannot directly extend to establish the convergence of SAM with an adaptive learning rate due to the coupled perturbation step and adaptive learning rate.
Momentum acceleration. Momentum methods such as Polyak's heavy ball method [45], Nesterov's accelerated gradient descent method [46] and the accelerated projected method [47] are used to optimize the parameters of deep neural networks. In practice, they have been used to accelerate federated learning tasks [48], non-negative latent factor models [49] and recommender systems [50]. Many theoretical works [51, 52, 53] focus on analyzing momentum acceleration for optimizing non-convex problems. [54] shows that it is important to tune the momentum while training deep neural networks. [55] first points out linear convergence results for the stochastic momentum method. [56] proposes a class of accelerated zeroth-order and first-order momentum methods to solve mini-optimization and minimax-optimization problems. [57] extends the momentum method by introducing an RNA scheme and a constrained formulation, RNA, which has non-linear updates. [58] proposes a heuristic adaptive restart method and [59] proposes a scheduled restart momentum accelerated SGD method named SRSGD, which helps reduce the training time. [60] adds a momentum term onto the distributed gradient algorithm.
## III Methodology
In this section, we introduce SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, to stabilize the training process of SAM and ease the learning rate tuning. Then, we present the convergence results of AdaSAM. At last, we give the proof sketch for the main theorem.
### _AdaSAM Algorithm_
AdaSAM for solving Problem (1) is described in Algorithm 1. In each iteration, a mini-batch gradient estimate \(g_{t}\) at the point \(x_{t}+\delta(x_{t})\) with batch size \(b\) is computed, i.e.,
\[g_{t}=\nabla_{x}f_{b}(x)|_{x_{t}+\delta(x_{t})}=\frac{1}{b}\sum_{i\in B} \nabla f_{\xi_{t_{i}}}(x_{t}+\delta(x_{t})).\]
Here, \(\delta(x_{t})\) is the extra perturbed gradient step in SAM that is given as follows
\[\delta(x_{t})=\rho\frac{s_{t}}{\|s_{t}\|},\ \mathrm{where}\ s_{t}=\nabla_{x}f_{b}(x )|_{x_{t}}=\frac{1}{b}\sum_{i\in B}\nabla f_{\xi_{t_{i}}}(x_{t}).\]
Then, the momentum term of \(g_{t}\) and the second-order moment term of \([g_{t}]^{2}\) are accumulatively computed as \(m_{t}\) and \(v_{t}\), respectively. AdaSAM then updates the iterate along \(-m_{t}\) with the adaptive learning rate \(\gamma\eta_{t}\).
**Remark 1**.: _Below, we give several comments on AdaSAM:_
* _When \(\beta_{2}=1\), the adaptive learning rate reduces to a diminishing one, as in SGD. Then, AdaSAM recovers the classic SAM optimizer._
* _If we drop the 8-th line \(\hat{v}_{t}=\max(\hat{v}_{t-1},v_{t})\), then our algorithm becomes a variant of Adam. The counterexample in [10] showing that Adam does not converge also holds for this SAM variant, while AdaSAM can converge._
```
Input: Initial parameters \(x_{0}\), \(m_{-1}=0\), \(\hat{v}_{-1}=\epsilon^{2}\) (a small positive scalar to avoid a vanishing denominator), base learning rate \(\gamma\), neighborhood size \(\rho\) and momentum parameters \(\beta_{1}\), \(\beta_{2}\).
Output: Optimized parameter \(x_{T+1}\)
1:  for iteration \(t\in\{0,1,2,...,T-1\}\) do
2:      Sample mini-batch \(B=\{\xi_{t_{1}},\xi_{t_{2}},...,\xi_{t_{|B|}}\}\);
3:      Compute gradient \(s_{t}=\nabla_{x}f_{B}(x)|_{x_{t}}=\frac{1}{b}\sum_{i\in B}\nabla f_{\xi_{t_{i}}}(x_{t})\);
4:      Compute \(\delta(x_{t})=\rho_{t}\frac{s_{t}}{\|s_{t}\|}\);
5:      Compute SAM gradient \(g_{t}=\nabla_{x}f_{B}(x)|_{x_{t}+\delta(x_{t})}\);
6:      \(m_{t}=\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\);
7:      \(v_{t}=\beta_{2}v_{t-1}+(1-\beta_{2})[g_{t}]^{2}\);
8:      \(\hat{v}_{t}=\max(\hat{v}_{t-1},v_{t})\);
9:      \(\eta_{t}=1/\sqrt{\hat{v}_{t}}\);
10:     \(x_{t+1}=x_{t}-\gamma m_{t}\odot\eta_{t}\);
11: end for
```
**Algorithm 1** AdaSAM: SAM with adaptive learning rate and momentum acceleration
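To make the update rule concrete, the following is a minimal PyTorch-style sketch of one AdaSAM iteration following Algorithm 1. The `grads_at` closure and the `state` dictionary are illustrative scaffolding rather than the authors' implementation; the buffers `m`, `v`, and `v_hat` are assumed to be initialized to zeros, zeros, and \(\epsilon^{2}\), respectively, matching the initialization in the algorithm.

```python
import torch

def adasam_step(params, grads_at, state, gamma=1e-3, rho=0.05,
                beta1=0.9, beta2=0.999):
    """One AdaSAM iteration (sketch of Algorithm 1).

    params   : list of parameter tensors
    grads_at : closure returning mini-batch gradients evaluated at a
               given list of (possibly perturbed) parameter tensors
    state    : dict of lists m, v, v_hat; v_hat starts at eps**2 > 0
               so the denominator in eta_t never vanishes
    """
    # Lines 3-4: plain mini-batch gradient s_t and perturbation delta(x_t)
    s = grads_at(params)
    s_norm = torch.sqrt(sum((g ** 2).sum() for g in s)) + 1e-12
    perturbed = [p + rho * g / s_norm for p, g in zip(params, s)]

    g = grads_at(perturbed)  # Line 5: SAM gradient g_t at the perturbed point

    with torch.no_grad():
        for i, p in enumerate(params):
            # Lines 6-8: momentum, second moment, and AMSGrad-style max
            state["m"][i].mul_(beta1).add_(g[i], alpha=1 - beta1)
            state["v"][i].mul_(beta2).add_(g[i] ** 2, alpha=1 - beta2)
            state["v_hat"][i] = torch.maximum(state["v_hat"][i],
                                              state["v"][i])
            # Lines 9-10: eta_t = 1/sqrt(v_hat), x_{t+1} = x_t - gamma*m*eta
            p.sub_(gamma * state["m"][i] * state["v_hat"][i].rsqrt())
```

Dropping the `torch.maximum` line recovers the Adam-style variant discussed in Remark 1, which loses the convergence guarantee.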
### _Convergence Analysis_
Before presenting the convergence results of the AdaSAM algorithm, we first introduce some necessary assumptions.
**Assumption 1** (\(L\)-smooth).: \(f_{i}\) _and \(f\) are differentiable with the gradient Lipschitz property: \(\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L\|x-y\|,\ \|\nabla f(x)-\nabla f(y)\|\leq L\|x-y\|,\ \forall x,y\in\mathbb{R}^{d},\ i=1,2,...,N,\) which also implies the descent inequality, i.e., \(f_{i}(y)\leq f_{i}(x)+\langle\nabla f_{i}(x),y-x\rangle+\frac{L}{2}\|y-x\|^{2}\)._
**Assumption 2** (Bounded variance).: _The estimator of the gradient is unbiased and the variance of the stochastic gradient is bounded, i.e.,_
\[\mathbb{E}\nabla f_{i}(x)=\nabla f(x),\quad\mathbb{E}\|\nabla f_{i}(x)-\nabla f (x)\|^{2}\leq\sigma^{2}.\]
_When the mini-batch size \(b\) is used, we have \(\mathbb{E}\|\nabla f_{b}(x)-\nabla f(x)\|^{2}\leq\frac{\sigma^{2}}{b}\)._
**Assumption 3** (**Bounded stochastic gradients**).: _The stochastic gradient is uniformly bounded, i.e.,_
\[\left\|\nabla f_{i}(x)\right\|_{\infty}\leq G\quad\text{for any }i=1,\ldots,N.\]
**Remark 2**.: _The above assumptions are commonly used in the proof of convergence for adaptive stochastic gradient methods such as [31, 32, 61, 62]._
Below, we briefly explain the main idea behind the convergence analysis of AdaSAM. First, we discuss the difficulty of applying an adaptive learning rate to SAM. The key step involving the adaptive learning rate in the convergence analysis is to estimate the expectation \(\mathbb{E}[x_{t+1}-x_{t}]=-\mathbb{E}m_{t}\odot\eta_{t}=-\mathbb{E}(1-\beta_{1})g_{t}\odot\eta_{t}-\mathbb{E}\beta_{1}m_{t-1}\odot\eta_{t}\), conditioned on the filtration \(\sigma(x_{t})\). Here we first consider the case \(\beta_{1}=0\), which excludes the momentum. We then apply a delay technique to disentangle the dependence between \(g_{t}\) and \(\eta_{t}\), that is
\[\mathbb{E}g_{t}\odot\eta_{t} =\mathbb{E}[g_{t}\odot\eta_{t-1}]+\mathbb{E}[g_{t}\odot(\eta_{t }-\eta_{t-1})]\] \[=\nabla f(x_{t})\odot\eta_{t-1}+\mathbb{E}[g_{t}\odot(\eta_{t}- \eta_{t-1})].\]
The second term \(\mathbb{E}[g_{t}\odot(\eta_{t}-\eta_{t-1})]\) is dominated by the first term \(\nabla f(x_{t})\odot\eta_{t-1}\). It is then not difficult to obtain the convergence of stochastic gradient descent with an adaptive learning rate, such as AMSGrad. However, when we apply the same strategy to AdaSAM, we find that \(\mathbb{E}g_{t}\odot\eta_{t-1}\) cannot be handled similarly because \(\mathbb{E}g_{t}=\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\|\nabla f_{b}(x)\right\|}\right)\neq\nabla f(x_{t})\). Inspired by [29, Lemma 16], our key observation is that
\[\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\| \nabla f_{b}(x)\right\|}\right) \approx\mathbb{E}\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f(x)}{ \left\|\nabla f(x)\right\|}\right)\] \[=\nabla_{x}f\left(x+\rho\frac{\nabla f(x)}{\left\|\nabla f(x) \right\|}\right)\]
and we prove the other terms such as \(\mathbb{E}\left(\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f_{b}(x)}{\left\| \nabla f_{b}(x)\right\|}\right)-\nabla_{x}f_{b}\left(x+\rho\frac{\nabla f(x)}{ \left\|\nabla f(x)\right\|}\right)\right)\odot\eta_{t-1}\) have small values that do not dominate the convergence rate.
On the other hand, when we include the momentum step, the term \(\mathbb{E}m_{t-1}\odot\eta_{t}\) can no longer be ignored. By introducing an auxiliary sequence \(z_{t}=x_{t}+\frac{\beta_{1}}{1-\beta_{1}}(x_{t}-x_{t-1})\), we have \(\mathbb{E}[z_{t+1}-z_{t}]=-\mathbb{E}[\frac{\beta_{1}}{1-\beta_{1}}\gamma m_{t-1}\odot(\eta_{t-1}-\eta_{t})-\gamma g_{t}\odot\eta_{t}]\). The first term contains the momentum term, which is small owing to the difference of the adaptive learning rates \(\eta_{t-1}-\eta_{t}\). Thus, it is diminishing without hurting the convergence rate.
**Theorem 1**.: _Under Assumptions 1-3, and with \(\gamma\) a fixed number satisfying \(\gamma\leq\frac{\epsilon}{16L}\), for the sequence \(\{x_{t}\}\) generated by Algorithm 1, we have the following convergence rate_
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2}\!\leq\!\frac {2G(f(x_{0})\!-\!f^{*})}{\gamma T}\!+\!\frac{8G\gamma L}{\epsilon}\frac{\sigma^ {2}}{b\epsilon}\!+\!\Phi \tag{5}\]
_where_
\[\Phi=\frac{45GL^{2}\rho_{t}^{2}}{\epsilon}+\frac{2G^{3}}{(1-\beta _{1})T}d(\frac{1}{\epsilon}-\frac{1}{G})+\frac{6\gamma^{2}L^{2}\beta_{1}^{2}}{ (1-\beta_{1})^{2}}\frac{dG^{3}}{\epsilon^{3}}\] \[+\frac{2(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})\gamma LG^{3}}{ \epsilon}d(\epsilon^{-2}-G^{-2})+\frac{8G\gamma L}{\epsilon}\frac{L\rho_{t}^{ 2}}{\epsilon}, \tag{6}\]
_in which \(T\) is the number of iterations, \(f^{*}\) is the minimal value of the function \(f\), \(\gamma\) is the base learning rate, \(b\) is the mini-batch size, and \(d\) is the dimension of the parameter \(x\). \(\beta_{1}\), \(G\), \(L\), \(\epsilon\) and \(\sigma^{2}\) are fixed constants._
Theorem 1 characterizes the convergence rate of the sequence \(\{x_{t}\}\) generated by AdaSAM with respect to the stochastic gradient residual. The first two terms on the right-hand side of Inequality (5) dominate the convergence rate. Compared with these two terms, \(\Phi\) is small when the neighborhood size \(\rho\) and the learning rate \(\gamma\) are set to small values tied to a large iteration number \(T\). We then obtain the following corollary directly.
**Corollary 1** (**Mini-batch linear speedup**).: _Under the same conditions as Theorem 1, when we choose the base learning rate \(\gamma=O(\sqrt{\frac{b}{T}})\) and neighborhood size \(\rho=O(\sqrt{\frac{1}{bT}})\), the following result holds:_
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2} =O\left(\frac{1}{\sqrt{bT}}\right)+O\left(\frac{1}{bT}\right)+O \left(\frac{1}{T}\right)\] \[+O\left(\frac{1}{b^{\frac{1}{2}}T^{\frac{3}{2}}}\right)+O\left( \frac{b^{\frac{1}{2}}}{T^{\frac{3}{2}}}\right)+O\left(\frac{b}{T}\right).\]
_When \(T\) is sufficiently large, we achieve the linear speedup convergence rate with respect to mini-batch size \(b\), i.e.,_
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|_{2}^{2}=O\left(\frac{1}{ \sqrt{bT}}\right). \tag{7}\]
**Remark 3**.: _Two comments are given about the above results:_
* _To reach an \(O(\delta)\) stationary point, when the batch size is 1, it needs \(T=O(\frac{1}{\delta^{2}})\) iterations. When the batch size is \(b\), we only need to run \(T=O(\frac{1}{b\delta^{2}})\) steps. The method with batch size \(b\) is thus \(b\) times faster than with batch size 1, which means that it has the mini-batch linear speedup property._
* _According to [37, 63, 64], AdaSAM can be extended to a distributed version and achieves a linear speedup property with respect to the number of workers in the Parameter-Server setting._
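As a quick numerical illustration of Corollary 1, the snippet below computes the prescribed base learning rate and neighborhood size for a given batch size and iteration budget. The proportionality constants `c_gamma` and `c_rho` are placeholders, since the corollary only fixes the orders of magnitude.

```python
import math

def suggested_hyperparams(b, T, c_gamma=0.1, c_rho=0.1):
    # Corollary 1: gamma = O(sqrt(b / T)), rho = O(sqrt(1 / (b * T)))
    gamma = c_gamma * math.sqrt(b / T)
    rho = c_rho * math.sqrt(1.0 / (b * T))
    return gamma, rho

# Doubling the batch size raises gamma by sqrt(2) and shrinks rho by
# sqrt(2), keeping the dominant O(1/sqrt(bT)) term on the same schedule.
print(suggested_hyperparams(b=32, T=10_000))
```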
### _Proof Sketch_
In this part, we give the proof sketch of Theorem 1; the complete proof is in the Appendix. We first introduce an auxiliary sequence \(z_{t}=x_{t}+\frac{\beta_{1}}{1-\beta_{1}}(x_{t}-x_{t-1})\). By the \(L\)-smooth condition, we have
\[f(z_{t+1})\!\leq\!f(z_{t})\!+\!\langle\nabla f(z_{t}),z_{t+1}-z_{t}\rangle\!+\! \frac{L}{2}\|z_{t+1}-z_{t}\|^{2}. \tag{8}\]
Applying it to the sequence \(\{z_{t}\}\) and using the delay strategy yields
\[f(z_{t+1})-f(z_{t})\] \[\leq\langle\nabla f(z_{t}),\frac{\gamma\beta_{1}}{1-\beta_{1}}m_{t-1} \odot(\eta_{t-1}-\eta_{t})\rangle+\frac{L}{2}\|z_{t+1}-z_{t}\|^{2}\] \[+\langle\nabla f(z_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x _{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot(\eta_{t-1}-\eta_{t})\rangle\]
\[+\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B} \nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[+\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\rangle\] \[+\langle\nabla f(x_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\] \[-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_ {t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle. \tag{9}\]
From Lemma 5, Lemma 6 and Lemma 7 in the Appendix, we can bound the terms in (9) as follows
\[\langle\nabla f(z_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot(\eta_{t-1}-\eta_{t})\rangle\] \[\leq\gamma G^{2}\|\eta_{t-1}-\eta_{t}\|_{1}, \tag{10}\] \[\langle\nabla f(z_{t}),\frac{\gamma\beta_{1}}{1-\beta_{1}}m_{t-1} \odot(\eta_{t-1}-\eta_{t})\rangle\] \[\leq\frac{\gamma\beta_{1}}{1-\beta_{1}}G^{2}\|\eta_{t-1}-\eta_{t }\|_{1},\] (11) \[\langle\nabla f(x_{t}),\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i} (x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|})\odot\eta_{t-1}\] \[-\frac{\gamma}{b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_ {t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[\leq\frac{\gamma}{2\mu^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1 }}\|^{2}+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}. \tag{12}\]
Then we substitute them into (9) and take the conditional expectation to get
\[\mathbb{E}f(z_{t+1})-f(z_{t})\] \[\leq\mathbb{E}\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B }\nabla f_{i}(x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|}) \odot\eta_{t-1}\rangle\] \[+\frac{\gamma}{2\mu^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}} \|^{2}+\frac{\gamma}{1-\beta_{1}}G^{2}\|\eta_{t-1}-\eta_{t}\|_{1}\] \[+\mathbb{E}\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma} {b}\sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_ {t-1}\rangle\] \[+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}+\frac{L}{2} \mathbb{E}\|z_{t+1}-z_{t}\|^{2}, \tag{13}\]
where \(\mu>0\) is a constant to be determined. Next, from Lemma 8, Lemma 9 and Lemma 10 in the Appendix, we have
\[\mathbb{E}\langle\nabla f(x_{t}),-\frac{\gamma}{b}\sum_{i\in B} \nabla f_{i}(x_{t}+\rho_{t}\frac{\nabla f(x_{t})}{\|\nabla f(x_{t})\|}) \odot\eta_{t-1}\rangle\] \[\leq-\gamma\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^{2}+ \mathbb{E}\frac{\gamma}{2\alpha^{2}}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^ {2}\] \[\quad+\frac{\gamma\alpha^{2}L^{2}\rho_{t}^{2}}{2\epsilon}, \tag{14}\] \[\frac{L}{2}\mathbb{E}\|z_{t+1}-z_{t}\|^{2}\leq\frac{LG^{2}\gamma ^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}\] \[\quad+\gamma^{2}L(3\frac{1+\beta}{\beta\epsilon}(\frac{L\rho_{t}^{ 2}}{\epsilon}+\frac{\sigma^{2}}{b\epsilon}+\mathbb{E}\|\nabla f(x_{t})\odot \sqrt{\eta_{t-1}}\|^{2})\] \[\quad+(1+\beta)G^{2}\mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}),\] (15) \[\mathbb{E}\langle\nabla f(z_{t})-\nabla f(x_{t}),-\frac{\gamma}{b} \sum_{i\in B}\nabla f_{i}(x_{t}+\rho_{t}\frac{s_{t}}{\|s_{t}\|})\odot\eta_{t-1}\rangle\] \[\leq\frac{\gamma^{3}L^{2}\beta_{1}^{2}}{2\epsilon(1-\beta_{1})^{ 2}}(\frac{1}{\lambda_{1}^{2}}+\frac{1}{\lambda_{2}^{2}}+\frac{1}{\lambda_{3}^{ 2}})\frac{dG_{\infty}^{2}}{\epsilon^{2}}+\frac{\gamma L^{2}\rho_{t}^{2}}{2 \epsilon}(\lambda_{2}^{2}+4\lambda_{3}^{2})\] \[\quad+\frac{\gamma\lambda_{1}^{2}}{2}\|\nabla f(x_{t})\odot\sqrt{ \eta_{t-1}}\|^{2}. \tag{16}\]
Next, we substitute these bounds into (13). Taking the expectation over all history information yields
\[\mathbb{E}f(x_{t+1})-\mathbb{E}f(x_{t})\] \[\leq-\gamma(1-\frac{1}{2\mu^{2}}-\frac{1}{2\alpha^{2}}-\frac{3 \gamma L(1+\beta)}{\beta\epsilon}-\frac{\lambda_{1}^{2}}{2})\mathbb{E}\|\nabla f (x_{t})\odot\sqrt{\eta_{t-1}}\|^{2}\] \[+\frac{2\mu^{2}\gamma L^{2}\rho_{t}^{2}}{\epsilon}+\frac{\gamma}{1 -\beta_{1}}G^{2}\mathbb{E}\|\eta_{t-1}-\eta_{t}\|_{1}+\frac{\gamma\alpha^{2}L^{2 }\rho^{2}}{2\epsilon}\] \[+\frac{\gamma^{3}L^{2}\beta_{1}^{2}}{2\epsilon(1-\beta_{1})^{2}}( \frac{1}{\lambda_{1}^{2}}+\frac{1}{\lambda_{2}^{2}}+\frac{1}{\lambda_{3}^{3}}) \frac{dG_{\infty}^{2}}{\epsilon^{2}}+\frac{\gamma L^{2}\rho_{t}^{2}}{2\epsilon}( \lambda_{2}^{2}+4\lambda_{3}^{2})\] \[+\gamma^{2}LG^{2}((\frac{\beta_{1}}{1-\beta_{1}})^{2}+1+\beta) \mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}\] \[+\frac{3\gamma^{2}L(1+\beta)}{\beta\epsilon}(\frac{L\rho_{t}^{2}}{ \epsilon}+\frac{\sigma^{2}}{b\epsilon}). \tag{17}\]
We set \(\mu^{2}=\alpha^{2}=8\), \(\beta=3\), \(\lambda_{1}^{2}=\frac{1}{4}\), \(\lambda_{2}^{2}=\lambda_{3}^{2}=1\) and we choose \(\frac{2\gamma L}{\epsilon}\leq\frac{1}{8}\). Note that \(\eta_{t}\) is bounded. We have
\[\frac{\gamma}{2G}\mathbb{E}\|\nabla f(x_{t})\|^{2}\leq\frac{\gamma}{2} \mathbb{E}\|\nabla f(x_{t})\odot\sqrt{\eta_{t-1}}\|^{2} \tag{18}\] \[\leq-\mathbb{E}f(x_{t+1})+\mathbb{E}f(x_{t})+\frac{45\gamma L^{2 }\rho_{t}^{2}}{2\epsilon}+\frac{4\gamma^{2}L}{\epsilon}(\frac{L\rho_{t}^{2}}{ \epsilon}+\frac{\sigma^{2}}{b\epsilon})\] \[+\frac{\gamma}{1-\beta_{1}}G^{2}\mathbb{E}\|\eta_{t-1}-\eta_{t }\|_{1}+\frac{3\gamma^{3}L^{2}\beta_{1}^{2}}{(1-\beta_{1})^{2}}\frac{dG_{ \infty}^{2}}{\epsilon^{3}}\] \[+(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})^{2}\gamma^{2}LG^{2} \mathbb{E}\|\eta_{t}-\eta_{t-1}\|^{2}. \tag{19}\]
TABLE I: Results of SGD, SAM, AMSGrad and AdaSAM on the GLUE benchmark.
Then, telescoping it from \(t=0\) to \(t=T-1\), and assuming \(\gamma\) is a constant, it follows that
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\|\nabla f(x_{t})\|^{2}\leq \frac{2G(f(x_{0})-f^{*})}{\gamma T}+\frac{8G\gamma L}{\epsilon}\frac{\sigma^{2} }{b\epsilon}\] \[+\frac{45GL^{2}\rho_{t}^{2}}{\epsilon}+\frac{2G^{3}}{(1-\beta_{1 })T}d(\frac{1}{\epsilon}-\frac{1}{G})+\frac{6\gamma^{2}L^{2}\beta_{1}^{2}}{(1- \beta_{1})^{2}}\frac{dG^{3}}{\epsilon^{3}}\] \[+\frac{8G\gamma L}{\epsilon}\frac{L\rho_{t}^{2}}{\epsilon}+\frac{ 2(4+(\frac{\beta_{1}}{1-\beta_{1}})^{2})\gamma LG^{3}}{T}d(\epsilon^{-2}-G^{-2 }), \tag{20}\]
which completes the proof.
## IV Experiments
In this section, we apply AdaSAM to train language models and compare it with SGD, AMSGrad, and SAM to show its effectiveness. Due to space limitations, more experiments, including visualization, task description, implementation details and results description, are placed in the Appendix.
### _Experimental Setup_
**Tasks and Datasets.** We evaluate AdaSAM on a popular benchmark, _i.e._, General Language Understanding Evaluation (GLUE) [65], which consists of several language understanding tasks including sentiment analysis, question answering and textual entailment. For a fair comparison, we report results based on single-task training, without multi-task or ensemble training. We evaluate the performance with the accuracy ("_Acc_") metric for most tasks, except the F1 scores for QQP and MRPC, the Pearson-Spearman correlations ("_Pcor/Scor_") for STS-B and the Matthews correlation ("_Mcc_") for CoLA. Higher values indicate better performance for all metrics.
**Implementations.** We conduct our experiments using a widely-used pre-trained language model, RoBERTa-large1 in the open-source toolkit fairseq2, with 24 transformer layers, a hidden size of 1024. For fine-tuning on each task, we use different combinations of hyper-parameters, including the
Fig. 1: The loss and evaluation metric vs. steps on MRPC, RTE, CoLA, SST-2, STS-B, MNLI, QQP, and QNLI (\(\beta_{1}=0.9\))
learning rate, the number of epochs, the batch size, _etc_3. In particular, for RTE, STS-B and MRPC of GLUE benchmark, we first fine-tune the pre-trained RoBERTa-large model on the MNLI dataset and continue fine-tuning the RoBERTa-large-MNLI model on the corresponding single-task corpus for better performance, as many prior works did [66, 7]. All models are trained on NVIDIA DGX SuperPOD cluster, in which each machine contains 8\(\times\)40GB A100 GPUs.
Footnote 3: Due to the space limitation, we show the details of the dataset and training setting in Appendix A.
### _Results on GLUE Benchmark_
Table I shows the performance of SGD, SAM, AMSGrad, and AdaSAM. For AdaSAM, we tune the neighborhood size of the perturbation parameter over 0.01, 0.005, and 0.001. The results show that AdaSAM outperforms AMSGrad on 6 of the 8 tasks, the exceptions being QNLI and QQP; overall, it improves the average score by 0.28 over AMSGrad. On the other hand, Table I indicates that SAM is better than SGD on 7 of the 8 tasks, the exception being RTE, and that SAM can significantly improve performance. Comparing the results in Table I, we find that the adaptive learning rate method is better than SGD tuned with a hand-crafted learning rate. AdaSAM achieves the best metric on six tasks: CoLA, SST-2, MRPC, STS-B, RTE, and MNLI. In general, AdaSAM is better than the other methods.
In addition, Figure 1 shows the detailed loss and evaluation metrics vs. the number of training steps. The loss curve of AdaSAM decreases faster than those of SAM and SGD on all tasks, and it decreases at a speed similar to AMSGrad. The evaluation metric curves show that AdaSAM is better than SGD and SAM and reduces the loss as fast as AMSGrad on all tasks.
### _Mini-batch Speedup_
In this part, we test the performance with different batch sizes to validate the linear speedup property. The experiments are conducted on the MRPC, RTE, and CoLA tasks with batch sizes of 4, 8, 16, and 32, respectively. We scale the learning rate as \(\sqrt{N}\), similar to [67], where \(N\) is the batch size. The results show that the training loss decreases faster as the batch size increases, and the loss curve with batch size 32 requires nearly half as many iterations as the curve with batch size 16.
### _Ablation Study_
In this subsection, we conduct experiments with the momentum hyper-parameter \(\beta_{1}\) set to 0 to evaluate the influence of momentum acceleration and the adaptive learning rate separately. Table II shows that AdaSAM outperforms AMSGrad on 6 of the 8 tasks, the exceptions being SST-2 and RTE. Table II also compares SGD and SAM: without momentum, SAM outperforms SGD on all tasks. In this setting, AdaSAM without momentum acceleration is better than the other methods.
Comparing the results in Table I and Table II, we find that both the adaptive learning rate method and momentum acceleration help the model's generalization ability. When there is no momentum term, SAM with an adaptive
| Model | CoLA (Mcc) | SST-2 (Acc) | MRPC (Acc/F1) | STS-B (Pcor/Scor) | RTE (Acc) | MNLI (m/mm) | QNLI (Acc) | QQP (F1/Acc) | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SGD | 0 | 51.722 | 68.38/81.22 | 5.55/7.2 | 51.27 | 32.51/32.42 | 53.32 | 0/63.18 | 37.23 |
| SAM (\(\rho=\)0.01) | 41.91 | 95.3 | 68.38/81.22 | 9.21/10.38 | 53.07 | 87.99/87.8 | 51.24 | 83.44/87.27 | 63.1 |
| SAM (\(\rho=\)0.005) | 58.79 | 81.54 | 68.38/81.22 | 13.52/16.6 | 53.79 | 88.42/88.15 | 92.95 | 83.84/87.7 | 67.91 |
| SAM (best) | 58.79 | 95.3 | 68.38/81.22 | 13.52/16.6 | 53.79 | 88.42/88.15 | 92.95 | 83.84/87.7 | 67.90 |
| AMSGrad | 63.78 | 96.44 | 89.71/92.44 | 89.98/90.35 | 87.36 | 90.65/90.35 | 94.53 | 88.59/91.27 | 88.79 |
| AdaSAM (\(\rho=\)0.01) | 69.23 | 96.22 | 89.96/92.84 | 88.83/89.07 | 87 | 90.83/90.41 | 94.8 | 88.67/91.38 | 89.1 |
| AdaSAM (\(\rho=\)0.005) | 68.47 | 96.22 | 89.96/92.82 | 91.59/91.22 | 73.65 | 90.75/90.42 | 94.73 | 88.72/91.46 | 88.33 |
| AdaSAM (best) | 69.23 | 96.22 | 89.96/92.84 | 91.59/91.22 | 87 | 90.83/90.42 | 94.8 | 88.72/91.46 | 89.52 |

TABLE II: Results of SGD, SAM, AMSGrad and AdaSAM on the GLUE benchmark without momentum, i.e., \(\beta_{1}=0\)
Fig. 2: The linear speedup verification of AdaSAM with batch sizes of 4, 8, 16, and 32.
learning rate improves the average score by 0.74 over AMSGrad. With a momentum term, AdaSAM improves the average score by 0.28 over AMSGrad. This shows that the adaptive method improves performance with or without momentum acceleration, and it achieves the best performance with momentum acceleration. We also find that momentum acceleration improves the performance of SAM, AMSGrad and AdaSAM.
## V Conclusion
In this work, we study the convergence rate of the sharpness-aware minimization optimizer with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, in the stochastic non-convex setting. To the best of our knowledge, we are the first to provide a non-trivial \(\mathcal{O}(1/\sqrt{bT})\) convergence rate for AdaSAM, which achieves a linear speedup property with respect to the mini-batch size \(b\). We have conducted extensive experiments on several NLP tasks, which verify that AdaSAM achieves superior performance compared with the AMSGrad and SAM optimizers. Future work includes extending AdaSAM to the distributed setting and reducing the cost of the twice gradient back-propagation.
|
2307.13788 | Histogram Layer Time Delay Neural Networks for Passive Sonar
Classification | Underwater acoustic target detection in remote marine sensing operations is
challenging due to complex sound wave propagation. Despite the availability of
reliable sonar systems, target recognition remains a difficult problem. Various
methods address improved target recognition. However, most struggle to
disentangle the high-dimensional, non-linear patterns in the observed target
recordings. In this work, a novel method combines a time delay neural network
and histogram layer to incorporate statistical contexts for improved feature
learning and underwater acoustic target classification. The proposed method
outperforms the baseline model, demonstrating the utility in incorporating
statistical contexts for passive sonar target recognition. The code for this
work is publicly available. | Jarin Ritu, Ethan Barnes, Riley Martell, Alexandra Van Dine, Joshua Peeples | 2023-07-25T19:47:26Z | http://arxiv.org/abs/2307.13788v1 | # Histogram Layer Time Delay Neural Networks for Passive Sonar Classification
###### Abstract
Underwater acoustic target detection in remote marine sensing operations is challenging due to complex sound wave propagation. Despite the availability of reliable sonar systems, target recognition remains a difficult problem. Various methods address improved target recognition. However, most struggle to disentangle the high-dimensional, non-linear patterns in the observed target recordings. In this work, a novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification. The proposed method outperforms the baseline model, demonstrating the utility in incorporating statistical contexts for passive sonar target recognition. The code for this work is publicly available.
Jarin Ritu\({}^{1}\), Ethan Barnes\({}^{1}\), Riley Martell\({}^{2}\), Alexandra Van Dine\({}^{2}\), Joshua Peeples\({}^{1}\)\({}^{1}\)Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA
\({}^{2}\)Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA
Deep learning, histograms, passive sonar, target classification, texture analysis
## 1 Introduction
Underwater acoustic target recognition (UATR) technology plays a crucial role in a variety of domains, including biology [1], carrying out search and rescue operations, enhancing port security [2], and mapping the ocean floor [3]. One of the primary target detection techniques used by modern crafts, such as unmanned underwater vehicles, is passive sonar [4]. Passive sonar is an underwater acoustic technology that uses hydrophones to detect and analyze sound waves in the ocean [5]. Unlike active sonar, passive sonar resolves targets from the natural sounds of the ocean and the noises produced by ships and other underwater vehicles. Processing and analyzing passive sonar data can be challenging due to the high volume of data and environmental complexity [6]. Signal processing techniques are often used to analyze ship-generated noise such as low frequency analysis and recording (LOFAR) spectra [7]. The Detection of Envelope Modulation on Noise (DEMON) is an approach that has been successfully used for target detection and recognition in passive sonar [8, 9, 10]. Despite their success, these approaches use handcrafted features that can be difficult to extract without domain expertise [11].
Artificial neural networks (ANNs), such as convolutional neural networks (CNNs) and time delay neural networks (TDNNs), provide an end-to-end process for automated feature learning and follow-on tasks (_e.g._, detection and classification of signals) [12, 13, 14, 15]. The TDNN has shown success in modeling long-term temporal dependencies [16] and can be implemented as a 1D CNN [13]. Thus, the TDNN can adaptively learn the sequential hierarchies of features, but it does not explicitly account for the statistics of passive sonar data, which are difficult to model for feature extraction [17, 18]. The statistics of the signals can describe the acoustic texture of the targets of interest [18]. Texture generally falls into two categories: statistical and structural [19, 20, 21, 22]. Statistical context in audio analysis involves studying the amplitude information of the audio signal. One way to capture amplitude information is by using probability density functions [18]. However, traditional ANN approaches, like CNNs and TDNNs, have shown a bias towards capturing structural texture rather than statistical texture [20, 21, 22]. This bias limits their ability to directly model the statistical information required to capture acoustic textures accurately. To overcome this shortcoming, histogram layers can be integrated into ANNs to incorporate statistical context [22]. Methods that combine both structural and statistical textures have
Figure 1: Overall experimental workflow. Each signal is resampled to \(16\) kHz and binned into three-second segments. After dividing the signals and corresponding segments into training, validation, and test partitions, several time-frequency features are extracted. The features are then passed into the model and classified as one of the four vessel types.
improved performance for other tasks such as image classification and segmentation [20, 21, 22]. In this work, we propose a new TDNN architecture that integrates histogram layers for improved target classification. Our proposed workflow is summarized in Figure 1. The contributions of this work are as follows:
* Novel TDNN architecture with histogram layer (HLTDNN) for passive sonar target classification
* In-depth qualitative and quantitative comparisons of TDNN and HLTDNN across a suite of time-frequency features.
## 2 Method
### Baseline TDNN Architecture
The TDNN architecture consists of several convolutional layers with ReLU activation functions and max pooling. 2D convolutional features are extracted from the time-frequency input to capture local relationships in the vessel's frequency information [23]. Padding is added to the input time-frequency feature to maintain the spatial dimensions of the resulting feature maps. After each convolution and ReLU activation, the features are pooled along the time axis with a desired kernel length \(L\) (_e.g._, a max pooling kernel of size \(1\times L\)) to aggregate the feature information while maintaining the temporal dependencies, similar to other TDNNs [16, 23]. After the fourth convolutional block, the features are flattened and then passed through a final 1D convolutional layer followed by a sigmoid activation function and a global average pooling (GAP) layer.
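A minimal PyTorch sketch of this baseline is given below. The paper does not list the channel counts, so those (and the pooling length) are illustrative placeholders.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, pool_len=2):
    # Convolution + ReLU, then pooling along the time axis only (1 x L)
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                         nn.ReLU(),
                         nn.MaxPool2d((1, pool_len)))

class BaselineTDNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                      conv_block(32, 64), conv_block(64, 64))
        self.classifier = nn.Conv1d(64, num_classes, kernel_size=1)

    def forward(self, x):                    # x: (batch, 1, freq, time)
        f = self.features(x).flatten(2)      # flatten remaining freq x time
        out = torch.sigmoid(self.classifier(f))
        return out.mean(dim=-1)              # global average pooling
```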
### Proposed HLTDNN
The baseline TDNN is focused on the "structural" (_e.g._, local) acoustic textures of time and frequency as well as the temporal dependencies in the data. However, the model does not directly consider the statistical aspects of the data. A histogram layer [22] can be added in parallel to the baseline TDNN model to capture statistical features to assist in improving classification performance. Given input features, \(\mathbf{X}\in\mathbb{R}^{M\times N\times D}\), where \(M\) and \(N\) are the spatial (or time-frequency) dimensions while \(D\) is the feature dimensionality, the output tensor of the local histogram layer with \(B\) bins, \(\mathbf{Y}\in\mathbb{R}^{R\times C\times B\times D}\) with spatial dimensions \(R\) and \(C\) after applying a histogram layer with kernel size \(S\times T\) is shown in (1):
\[Y_{rcbd}=\frac{1}{ST}\sum_{s=1}^{S}\sum_{t=1}^{T}e^{-\gamma_{bd}^{2}\left(x_{r+s,c+t,d}-\mu_{bd}\right)^{2}} \tag{1}\]
where the bin centers (\(\mu_{bd}\)) and bin widths (\(\gamma_{bd}\)) of the histogram layer are learnable parameters. Each input feature dimension is treated independently, resulting in \(BD\) output histogram feature maps. The histogram layer takes input features and outputs the "vote" for a value in the range of \([0,1]\). The histogram layer can be modeled using convolution and average pooling layers as shown in Figure 2. Following previous work [22], the histogram layer is added after the fourth convolutional block (_i.e._, convolution, ReLU, and max pooling) and its features are concatenated with the TDNN features before the final output layer.
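A compact PyTorch sketch of the local histogram layer in (1), using the convolution and average-pooling formulation from Figure 2, is shown below; the class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class HistogramLayer2D(nn.Module):
    """Sketch of Eq. (1): for D channels and B bins, a 1x1 grouped conv
    with fixed unit weights and learnable bias gives (x - mu_bd); a 1x1
    depthwise conv with learnable weights gives gamma_bd; RBF activation
    plus average pooling yields normalized bin votes in [0, 1]."""
    def __init__(self, in_channels, num_bins, pool_size):
        super().__init__()
        self.centers = nn.Conv2d(in_channels, in_channels * num_bins,
                                 kernel_size=1, groups=in_channels)
        self.centers.weight.data.fill_(1.0)        # weights fixed to one
        self.centers.weight.requires_grad = False  # only bias (-mu) learns
        self.widths = nn.Conv2d(in_channels * num_bins,
                                in_channels * num_bins, kernel_size=1,
                                groups=in_channels * num_bins, bias=False)
        self.pool = nn.AvgPool2d(pool_size)        # 1/(S*T) sum over window

    def forward(self, x):
        votes = torch.exp(-(self.widths(self.centers(x)) ** 2))
        return self.pool(votes)                    # (batch, B*D, R, C)
```

The pooled votes are then concatenated with the TDNN features before the output layer, as described above.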
## 3 Experimental Procedure
### Dataset Description
The DeepShip dataset [14] was used in this work. The database contains 609 recordings of the sounds of four different ship types: cargo, passenger ship, tanker, and tug. Following [14], each signal is re-sampled to \(16\) kHz and divided into segments of three seconds. Figure 3 illustrates the structure of the dataset after "binning" the signals into segments, along with the number of signals and segments for each class.
Figure 3: DeepShip dataset structure.
Figure 2: Proposed HLTDNN architecture. The histogram layer is added in parallel with the baseline TDNN model through the bin center and width convolution layers with the radial basis activation function (RBF) and average pooling layer.
### Experimental Design
**Feature Extraction** Six different features are extracted: Mel Spectrogram (MS), Mel-frequency cepstral coefficients (MFCC), Short-time Fourier transform (STFT), Gammatone-frequency cepstral coefficients (GFCC), Constant-q transform (CQT), and Variable-q transform (VQT). The window and hop length for each feature were set to \(250\) and \(64\) ms, respectively [14]. The number of Mel filter banks for the Mel Spectrogram was set to \(40\), and the number of Mel-frequency cepstral coefficients for MFCC was 16. The number of frequency bins for STFT was \(48\), while GFCC, CQT, and VQT used 64 frequency bins. The feature dimensions after zero-padding were \(48\times 48\) for MS and STFT, \(16\times 48\) for MFCC, and \(64\times 48\) for GFCC, CQT, and VQT.
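As an illustration of this front end, an equivalent Mel-spectrogram extractor can be configured in torchaudio with the stated 250 ms window and 64 ms hop at 16 kHz; the parameter names below follow torchaudio rather than the authors' nnAudio-based code.

```python
import torch
import torchaudio

SAMPLE_RATE = 16_000
WIN = int(0.250 * SAMPLE_RATE)   # 250 ms window -> 4000 samples
HOP = int(0.064 * SAMPLE_RATE)   # 64 ms hop     -> 1024 samples

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=WIN, win_length=WIN,
    hop_length=HOP, n_mels=40)           # 40 Mel filter banks

segment = torch.randn(1, 3 * SAMPLE_RATE)  # one 3-second segment
features = mel(segment)                    # (1, 40, number of frames)
```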
**Data partitioning** The dataset was split into 70% training, 15% validation, and 15% test at the signal level (428 training, 90 validation, and 91 test signals). After "binning" the signals into three-second segments, 56,468 segments were created (38,523 training, 9,065 validation, and 8,880 test). All segments of each signal remained in the same partition to prevent data leakage (_i.e._, if a signal was selected for training, all of its segments were also used for training).
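Signal-level splitting of this kind can be implemented with a group-aware splitter; a scikit-learn sketch is given below, assuming NumPy arrays `segments`, `labels`, and a per-segment `signal_ids` group key (all hypothetical names).

```python
from sklearn.model_selection import GroupShuffleSplit

# Splitting on the signal id guarantees that all segments of a recording
# stay in the same partition, preventing leakage across splits.
outer = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=0)
train_idx, holdout_idx = next(outer.split(segments, labels,
                                          groups=signal_ids))

# Split the 30% holdout in half again (still by signal) into val/test.
inner = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=0)
val_rel, test_rel = next(inner.split(holdout_idx,
                                     groups=signal_ids[holdout_idx]))
val_idx, test_idx = holdout_idx[val_rel], holdout_idx[test_rel]
```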
**Experimental setup** The models (TDNN or HLTDNN) were evaluated with each individual feature across three runs of random initialization. The experimental parameters for the models were the following:
* Optimizer: Adagrad
* Learning rate (\(\eta\)): 0.001
* Batch size: 128
* Epochs: 100
* Dropout (\(p\)): 0.5
* Early stopping: 10 epochs
* Number of bins (HLTDNN): 16
Dropout was added before the output classification layer, and early stopping was used to terminate training if the validation loss did not improve within the patience window of 10 epochs. Experiments were conducted on an NVIDIA RTX 3090. The models are implemented in PyTorch 1.13, TorchAudio 2.0, and nnAudio 0.3.1 [24].
## 4 Results and Discussion
### Classification Performance
TDNN and HLTDNN classification performances are shown in Table 1. Classification performance was assessed using five metrics: accuracy, precision, recall, F1 score, and Matthew's correlation coefficient (MCC). Fisher's discriminant ratio (FDR) was used to assess feature quality (discussed further in Section 4.2). Confusion matrices for the TDNN and HLTDNN using the best performing feature are displayed in Figures 4(a) and 4(b), respectively. For the HLTDNN, STFT achieved the best classification performance compared to the other features, whereas MFCC had the best performance for the TDNN across the different performance metrics. STFT performed similarly to MFCC in terms of classification accuracy. Additional quantitative and qualitative analysis uses STFT to evaluate the impact of the histogram layer on vessel classification.
The TDNN model initially performed well with the Mel spectrogram, MFCC, and STFT, but significantly degraded for the other three features (Table 1). The best performance was achieved using the MFCC feature as input while the worst feature was GFCC. A
| Features | Model | Accuracy | Precision | Recall | F1 Score | MCC | FDR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MS | TDNN | 50.31 ± 1.41% | 39.56 ± 0.05% | 47.67 ± 0.03% | 42.09 ± 0.02% | 34.22 ± 0.02% | 4.14 ± 1.50 |
| MS | HLTDNN | 47.46 ± 2.39% | 45.25 ± 0.03% | 51.80 ± 0.04% | 46.00 ± 0.03% | 29.55 ± 0.03% | **20.51 ± 1.86** |
| MFCC | TDNN | 51.39 ± 0.79% | 50.10 ± 0.02% | 49.95 ± 0.03% | 49.48 ± 0.02% | 34.84 ± 0.01% | 5.34 ± 1.29 |
| MFCC | HLTDNN | 54.41 ± 0.42% | 54.28 ± 0.03% | 53.91 ± 0.03% | **53.62 ± 0.02%** | 39.38 ± 0.02% | 15.29 ± 1.85 |
| STFT | TDNN | 51.15 ± 0.72% | 40.88 ± 0.03% | 48.49 ± 0.01% | 43.86 ± 0.02% | 24.04 ± 0.04% | 8.30 ± 2.87 |
| STFT | HLTDNN | **59.21 ± 0.56%** | **54.84 ± 0.02%** | **56.59 ± 0.03%** | 53.23 ± 0.02% | **46.05 ± 0.01%** | 17.75 ± 0.58 |
| GFCC | TDNN | 27.73 ± 0.18% | 17.45 ± 0.00% | 26.40 ± 0.00% | 17.61 ± 0.00% | 3.63 ± 0.00% | 15.26 ± 0.44 |
| GFCC | HLTDNN | 43.42 ± 0.61% | 39.63 ± 0.01% | 41.44 ± 0.01% | 38.57 ± 0.01% | 24.24 ± 0.01% | 11.94 ± 4.82 |
| CQT | TDNN | 36.89 ± 0.83% | 23.34 ± 0.03% | 34.92 ± 0.07% | 30.85 ± 0.02% | 15.06 ± 0.01% | 16.95 ± 0.56 |
| CQT | HLTDNN | 50.66 ± 1.37% | 44.37 ± 0.01% | 48.04 ± 0.02% | 43.62 ± 0.02% | 34.30 ± 0.02% | 13.14 ± 3.61 |
| VQT | TDNN | 36.76 ± 0.96% | 28.14 ± 0.02% | 34.80 ± 0.07% | 30.76 ± 0.02% | 14.84 ± 0.01% | 16.82 ± 0.94 |
| VQT | HLTDNN | 50.12 ± 0.27% | 43.35 ± 0.02% | 47.57 ± 0.01% | 43.40 ± 0.01% | 33.44 ± 0.00% | 13.28 ± 2.87 |

Table 1: Overall performance metrics for the baseline TDNN and proposed HLTDNN model. The average score with ±1σ across the three experimental runs of random initialization is shown, and the best average metric is bolded. The log of the Fisher Discriminant Ratio (FDR) is shown due to the magnitude of the FDR score. The time-frequency features in this work were Mel Spectrogram (MS), Mel-frequency cepstral coefficients (MFCC), Short-time Fourier transform (STFT), Gammatone-frequency cepstral coefficients (GFCC), Constant-q transform (CQT), and Variable-q transform (VQT).
Figure 4: Average confusion matrices for the TDNN and HLTDNN on the DeepShip dataset using the STFT feature. The average overall test accuracy is shown in parentheses.
possible reason for this is that each feature used a 250 ms window and a 64 ms hop length. The short time frame may limit the frequency resolution, and selecting the best frequency band greatly impacts performance [25]. However, the performance of the HLTDNN was fairly robust across the different time-frequency features. The STFT feature performed best for this model, and the HLTDNN also significantly improved the performance of the GFCC, CQT and VQT features in comparison to the TDNN. This demonstrates that the statistical context captured by the histogram layer is useful for improving target classification.
Neither model identified the Cargo class as well as the other vessel types, as shown in Figure 4. In particular, the most common classification mistake was predicting Cargo as Tanker (_i.e._, a false positive for Tanker). Intuitively, this classification error makes sense because a tanker is a type of cargo ship (_e.g._, an oil tanker [26]), and the sounds produced by the two ship types may be similar. Also, the Cargo class in the DeepShip data has been noted to have high intra-class variance [27]. As a result, the Cargo class was the most difficult to classify. Feature regularization methods (_e.g._, contrastive learning) can be incorporated into the objective function to mitigate intra-class variance.
### Feature Evaluation
In addition to the classification metrics, the quality of the features was assessed using Fisher's Discriminant Ratio (FDR). FDR is the ratio of inter-class separability to intra-class compactness. Ideally, the inter-class separability should be maximized (_i.e._, different vessel types should be "far away" from one another, with large distances between classes in the feature space) and the intra-class compactness should be minimized (_i.e._, samples from the same class should be "close", with small distances between one another in the feature space). As a result, the FDR should be maximized. From Table 1, the log of the FDR shows that the histogram model achieved the best FDR scores for all six features, further demonstrating the utility of the statistical features.
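For reference, a simple NumPy version of this kind of FDR score is sketched below; it uses traces (sums of squares) of the between- and within-class scatter, one common variant, and is not necessarily the exact computation used here.

```python
import numpy as np

def fisher_discriminant_ratio(features, labels):
    """Ratio of between-class scatter to within-class scatter."""
    overall_mean = features.mean(axis=0)
    s_between, s_within = 0.0, 0.0
    for c in np.unique(labels):
        class_feats = features[labels == c]
        mu_c = class_feats.mean(axis=0)
        # Inter-class separability: class means far from the overall mean
        s_between += len(class_feats) * np.sum((mu_c - overall_mean) ** 2)
        # Intra-class compactness: samples close to their class mean
        s_within += np.sum((class_feats - mu_c) ** 2)
    return s_between / s_within
```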
A deeper analysis using the best performing feature (STFT) in terms of classification performance is shown in Table 2. For all four classes, the log FDR for the HLTDNN is statistically significantly higher (no overlapping error bars) than for the TDNN. The main difference between the two models is the increased feature separability of the HLTDNN in comparison with the baseline TDNN. The TDNN had a smaller denominator (_i.e._, intra-class compactness) than the HLTDNN when computing the norm of the within-scatter matrix, indicating that the TDNN performs marginally better in terms of intra-class compactness. On the other hand, the features from the HLTDNN are more separable than those from the TDNN, as evident from the norm of the between-scatter matrix, showing the HLTDNN's superiority in terms of inter-class separability. The FDR scores further elucidate the importance of the statistical texture information captured by the histogram layer.
Figure 5 shows the 2D t-SNE projection of the features from the best performing models using the STFT feature. The same random initialization for t-SNE was used for both methods for a fair comparison. The qualitative t-SNE results match our quantitative FDR analysis. The features extracted by the histogram act as a similarity measure for the statistics of the data, assigning higher "votes" to bins where features are closer. Adding these features to the TDNN model improved the separability of the classes, as observed in Figure 5(b). Modifying the histogram layer to improve the intra-class compactness of the HLTDNN would be of interest in future investigations.
## 5 Conclusion
In this work, a novel HLTDNN model was developed to incorporate statistical information for improved target classification in passive sonar. In comparison to the baseline TDNN, the HLTDNN not only improved classification performance but also led to improved feature representations for the vessel types. Future work will investigate combining features, as opposed to using a single time-frequency representation, as the input to the network. Each feature can also be tuned (_e.g._, changing the number of frequency bins) to enhance the representation of the signals. Additionally, both architectures can be improved by a) adding more depth and b) leveraging pretrained models. The training strategies could also use approaches to mitigate overfitting and improve performance, such as regularization of the histogram layer (_e.g._, adding constraints to the bin centers and widths) and data augmentation.
|
2301.05264 | Security-Aware Approximate Spiking Neural Networks | Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known
for their susceptibility to adversarial attacks. Therefore, researchers in the
recent past have extensively studied the robustness and defense of DNNs and
SNNs under adversarial attacks. Compared to accurate SNNs (AccSNN), approximate
SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low
power applications. Unfortunately, the robustness of AxSNNs under adversarial
attacks is yet unexplored. In this paper, we first extensively analyze the
robustness of AxSNNs with different structural parameters and approximation
levels under two gradient-based and two neuromorphic attacks. Then, we propose
two novel defense methods, i.e., precision scaling and approximate
quantization-aware filtering (AQF), for securing AxSNNs. We evaluated the
effectiveness of these two defense methods using both static and neuromorphic
datasets. Our results demonstrate that AxSNNs are more prone to adversarial
attacks than AccSNNs, but precision scaling and AQF significantly improve the
robustness of AxSNNs. For instance, a PGD attack on AxSNN results in a 72\%
accuracy loss compared to AccSNN without any attack, whereas the same attack on
the precision-scaled AxSNN leads to only a 17\% accuracy loss in the static
MNIST dataset (4X robustness improvement). Similarly, a Sparse Attack on AxSNN
leads to a 77\% accuracy loss when compared to AccSNN without any attack,
whereas the same attack on an AxSNN with AQF leads to only a 2\% accuracy loss
in the neuromorphic DVS128 Gesture dataset (38X robustness improvement). | Syed Tihaam Ahmad, Ayesha Siddique, Khaza Anuarul Hoque | 2023-01-12T19:23:15Z | http://arxiv.org/abs/2301.05264v1 | # Security-Aware Approximate Spiking Neural Networks
###### Abstract
Deep Neural Networks (DNNs) and Spiking Neural Networks (SNNs) are both known for their susceptibility to adversarial attacks. Therefore, researchers in the recent past have extensively studied the robustness and defense of DNNs and SNNs under adversarial attacks. Compared to accurate SNNs (AccSNN), approximate SNNs (AxSNNs) are known to be up to 4X more energy-efficient for ultra-low power applications. Unfortunately, the robustness of AxSNNs under adversarial attacks is yet unexplored. In this paper, we first extensively analyze the robustness of AxSNNs with different structural parameters and approximation levels under two gradient-based and two neuromorphic attacks. Then, we propose two novel defense methods, i.e., precision scaling and approximate quantization-aware filtering (AQF), for securing AxSNNs. We evaluated the effectiveness of these two defense methods using both static and neuromorphic datasets. Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and AQF significantly improve the robustness of AxSNNs. For instance, a PGD attack on AxSNN results in a 72% accuracy loss compared to AccSNN without any attack, whereas the same attack on the precision-scaled AxSNN leads to only a 17% accuracy loss in the static MNIST dataset (4X robustness improvement). Similarly, a Sparse Attack on AxSNN leads to a 77% accuracy loss when compared to AccSNN without any attack, whereas the same attack on an AxSNN with AQF leads to only a 2% accuracy loss in the neuromorphic DVS128 Gesture dataset (38X robustness improvement).
Spiking Neural Networks, Approximate Spiking Neural Networks, Adversarial Robustness, Approximate Defense.
## I Introduction
Spiking neural networks (SNNs) are the third generation of neural networks that employ event-driven computing capabilities [1]. In recent years, many SNN models of different sizes have been developed for data analytics tasks such as gesture recognition, object detection, and image classification. Large SNN models are known to recognize more features than small ones. Consequently, state-of-the-art large SNN models have numerous parameters that need to be considered in both the training and inference phases. This limits the deployment of SNNs on ultra-low power, resource-constrained edge devices. To handle this problem, approximate computing in SNNs has recently emerged as an energy-efficient solution. Approximate computing-based SNNs (AxSNNs) relax the requirement of near-perfect accuracy in error-resilient applications in exchange for low energy consumption. For instance, AxSNNs obtained via approximating the weights can reduce energy consumption by 4X [2] compared to accurate SNNs (AccSNNs).
Similar to traditional deep neural networks (DNNs), AxSNNs are also prone to adversarial attacks. Adversarial attacks are known for being very stealthy: they add minimal perturbation noise to the inputs, imperceptible to the human eye, yet successfully fool SNN classifiers [3]. Motivated by this, robustness analysis and adversarial defense for AccSNNs have been thoroughly investigated in several recent works [4, 5, 6, 7]. Very recently, the authors in [8] showed that approximate DNNs (AxDNNs) are more prone to adversarial attacks than accurate DNNs (AccDNNs). Surprisingly, however, the robustness of AxSNNs is yet unexplored. Therefore, a more comprehensive study is required to understand the inherent behavior of AxSNNs vs. AccSNNs, especially under adversarial attacks. Exploring such behaviors can enable the design of defense techniques tailored specifically for AxSNNs.
### _Motivational Case Study and Key Observations_
As a preliminary study motivating our research, we conducted a case study highlighting the impact of adversarial attacks on AxSNNs vs. AccSNNs. For this purpose, we first trained a 5-layered AccSNN, having 3 convolutional layers and 2 fully-connected layers, for classifying the MNIST [9] dataset. Then, we built an AxSNN (using approximation level 0.1) as an approximate counterpart of the AccSNN. Finally, we compared the performance of the AccSNN and AxSNN under the \(l_{\infty}\) norm-based projected gradient descent (PGD) attack by varying the perturbation budget \(\epsilon\) from 0 to 1.0. The results are presented in Fig. 1. We observe that the AxSNN is significantly less robust than the AccSNN under attack. For instance, when there is no attack (\(\epsilon\)=0), the accuracy of
Figure 1: Robustness comparison of AccSNN and AxSNN under PGD attack with different perturbation budgets.
AccSNN and AxSNN is 95% and 40%, respectively, which shows a 55% difference in their accuracy. Furthermore, when the perturbation budget was varied to 1.0 (\(\epsilon\)=1.0), we can observe a 68% difference while comparing the accuracy of the AccSNN and AxSNN. These outcomes motivated us to thoroughly investigate the robustness of the AxSNNs and explore potential defense techniques to design robust AxSNNs.
### _Novel Contributions_
In this paper, we present a security-aware AxSNN design method with the following novel contributions:
1. A novel approach for designing adversarially robust AxSNNs by identifying their robustness-aware knobs through _precision scaling_, i.e., by finding the appropriate combination of approximation levels, structural parameters (threshold voltage, time steps), and quantization. We also propose a defense method based on approximate quantization-aware filtering (AQF), which is specifically effective against neuromorphic attacks. **[Section IV]**
2. An extensive vulnerability analysis of AxSNNs vs. AccSNNs against two gradient-based attacks and two neuromorphic attacks under different threshold voltages, time steps, precision scales, and approximation levels. Specifically, we evaluated the impact of Projected Gradient Descent (PGD) and Basic Iterative Method (BIM) attacks on AccSNNs and AxSNNs with the static MNIST dataset [9], and the impact of Sparse and Frame attacks on AccSNNs and AxSNNs with the neuromorphic DVS128 Gesture dataset [10]. **[Section V]**
Our results demonstrate that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and approximate quantization-aware filtering improve their robustness significantly. For instance, a PGD attack with perturbation budget 1.0 on an AxSNN results in a 72% accuracy loss, whereas the same attack on an AccSNN results in only a 9% accuracy loss in MNIST classification. Interestingly, the same AxSNN with precision scaling shows only a 17% accuracy loss, indicating a 4X robustness improvement. Similarly, a Sparse Attack on an AxSNN leads to a 77% accuracy loss compared to an AccSNN without any attack in DVS128 Gesture classification. However, after using our approximate quantization-aware filter, the accuracy loss is just 2%, indicating a 38X improvement in robustness.
## II Preliminaries
This section provides a brief overview of SNNs and adversarial attacks to understand the paper better.
**Approximate Spiking Neural Networks:** AxSNNs employ approximate computing to trade classification accuracy for energy efficiency in ultra-low power applications. AxSNNs typically associate an approximation level \(a_{th}\) with each spiking neuron. The \(a_{th}\) determines whether the respective neuron should be activated or deactivated, based on the sensitivity of the neuron to errors and its spiking activity [11]. Similar to AccSNNs, AxSNNs use the standard leaky-integrate-and-fire (LIF) neuron model: when the membrane potential exceeds the threshold voltage, the neuron emits an output spike and resets its membrane potential. They process spike-encoded inputs, most commonly encoded using rate encoding, where the activation activity corresponds to the mean firing rate of spikes over a certain number of time steps. The time steps refer to the observation period during which the SNN receives the same input.
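For illustration, a minimal NumPy sketch of a single LIF neuron over one observation window is given below; the leak factor and threshold are placeholder values.

```python
import numpy as np

def lif_forward(input_current, v_th=1.0, leak=0.9, timesteps=25):
    """Minimal LIF neuron: leaky integration of the input over the
    observation window, spiking and resetting whenever v crosses v_th."""
    v, spikes = 0.0, []
    for t in range(timesteps):
        v = leak * v + input_current[t]   # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                       # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)               # mean firing rate = spikes.mean()
```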
**Adversarial Attacks:** Adversarial attacks are small perturbations that cause a classifier to predict false labels. Examples include the gradient-based PGD and BIM attacks, which are considered strong attacks in the adversarial machine learning domain. Gradient-based attacks exploit the concept of back-propagation; however, instead of calculating the gradient with respect to the weights of the model, they craft adversarial examples by perturbing the input images. Recent studies show that these attacks cannot be used directly to perturb neuromorphic datasets due to their event-driven nature. Therefore, specialized neuromorphic attacks, such as Sparse and Frame attacks, are used [6]. A Sparse Attack is a stealthy attack that iteratively perturbs the neuromorphic images based on the loss function of the output label probability to generate perturbed events. A Frame Attack is a simple yet effective neuromorphic attack that generates perturbed events by attacking every boundary pixel across all the events.
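For reference, an \(l_{\infty}\) PGD attack of this kind can be sketched in a few lines of PyTorch (the model, data, and step size below are placeholders). Note that for SNNs the backward pass additionally requires surrogate gradients through the spiking non-linearity.

```python
import torch

def pgd_linf(model, x, y, eps, alpha=0.01, steps=40):
    """l-infinity PGD: repeatedly ascend the loss, projecting back into
    the eps-ball around the clean input after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixels
    return x_adv.detach()
```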
## III Threat Model
In this section, we present a threat model for exploring the adversarial robustness of AxSNNs.
**Adversary's Knowledge:** We assume that the adversary uses an accurate classifier model for crafting the adversarial examples. Furthermore, the adversary has partial knowledge about AxSNN, i.e., the internal architecture of the classifier model is known, but the inexactness and model parameters such as threshold voltage and time-steps, precision scale, and approximation levels are not known to the adversary.
**Attack Generation:** The adversary is assumed capable of evading the classifier model by tampering with the input images in the prediction phase, without influencing the training data. The adversary crafts adversarial examples by finding perturbations that maximize the loss of the model on an input while keeping the perturbation magnitude below the perturbation budget \(\epsilon\). As mentioned earlier, we use iterative gradient-based attacks, specifically \(l_{\infty}\) norm-based _BIM and PGD_, which are considered high-strength attacks for static datasets. We also employ neuromorphic attacks, specifically Sparse and Frame attacks, which are stealthy yet effective in perturbing high-resolution neuromorphic images. We cap the perturbation budget at \(\epsilon\)=1.0 in this paper because the accuracy of both AccSNN and AxSNN drops significantly beyond this value and becomes non-recoverable; for example, it drops to 10% with \(\epsilon\)=1.5.
## IV Security-Aware AxSNN Design Approach
In this section, we discuss our proposed approach for designing a robust and secure AxSNN in detail.
### _Precision-scaling_
Traditionally, approximation levels are determined by identifying the maximum tolerable perturbation (MTP) in a neural network [12]. However, this becomes challenging in precision-scaled AxSNNs because the accuracy varies with the threshold voltage, time steps and precision scales. Intuitively, an increase in the number of time steps and in the threshold voltage may lead to a higher number of insignificant spikes from some neurons and hence affect the classification accuracy. Skipping such neurons has the potential to improve the accuracy of AxSNNs; however, their robustness can decrease under attacks. Since precision scaling has the potential to improve the robustness of AccDNNs [12], determining the approximation on the basis of the precision scales in AxSNNs can improve adversarial robustness. However, jointly exploring the precision scale, threshold voltage and time steps for robust approximation is challenging. In this paper, we determine the approximation level \(a_{th}\) in AxSNNs using the following equation:
\[a_{th}=(cN_{s}/T)\cdot min(1,V_{m}/V_{th})\cdot\sum_{i=1}^{c}w_{i}^{p}, \tag{1}\]
where \(c\), \(N_{s}\), \(T\), \(V_{m}\), \(V_{th}\) and \(w^{p}\) denote the number of connections to the output, the number of spikes, the time steps, the membrane potential, the threshold voltage, and the precision-scaled weight of a neuron, respectively. Furthermore, \(min(1,V_{m}/V_{th})\) is the spike probability: it is 1 when \(V_{m}\) crosses \(V_{th}\) and \(V_{m}/V_{th}\) otherwise. The spike probability is weighted by the sum of the weights corresponding to a connection \(c\), i.e., \(\sum_{i=1}^{c}w_{i}^{p}\), which incorporates the precision scaling of the weights.
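A direct NumPy transcription of this equation is shown below; the argument names are illustrative.

```python
import numpy as np

def approximation_level(c, num_spikes, timesteps, v_mem, v_th,
                        scaled_weights):
    """a_th from Eq. (1): spiking activity (c * N_s / T), weighted by the
    spike probability and the sum of the precision-scaled weights."""
    spike_prob = min(1.0, v_mem / v_th)
    return (c * num_spikes / timesteps) * spike_prob * np.sum(scaled_weights)
```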
Algorithm 1 delineates the steps involved in applying this equation for robustness in AxSNNs. We first initialize a counter \(adv\) for successful attack generation (Line 1). Then, we train an AccSNN model with the given threshold voltage \(v\) and time steps \(ts\) and save the trained model and the corresponding weights \(w_{l}\) of each layer \(l\) (Line 3). This learning phase is quantitatively verified with a quality constraint \(Q\), i.e., the minimum baseline accuracy below which we consider SNN learning inefficient (Line 4). The value of \(Q\) depends on the given SNN architecture, dataset, and application. Then, we craft the adversarial examples for a given adversarial attack and perturbation budget \(\epsilon\) (Line 5). The adversarial defense through precision scaling starts at Line 6, where the trained weights are sorted in ascending order. This sorting helps us approximate the weights according to their significance later on. Afterwards, we perform precision scaling for each precision scale \(s\) and compute the sum of the scaled weights of all connections in layer \(l\) (Lines 8-9). Using this sum, we determine \(a_{th}\) (as discussed earlier) for each layer \(l\) and approximate the precision-scaled model by removing the connections with weights below \(a_{th}\) (Lines 10-11). If a neuromorphic dataset is given as input and the corresponding flag \(F_{d}\) is high, then we also apply the approximate quantization-aware filter (AQF) from Algorithm 2, as discussed in Section IV-B (Lines 12-14). Next, the algorithm checks if the crafted adversarial example can fool the AxSNN, i.e., if the attack succeeds in forcing the output to a wrong label, and accordingly increments the adversarial success counter (Lines 15-18). Lastly, we evaluate the robustness for the perturbation budget \(\epsilon\) as the rate of attacks for which the adversary failed to generate an adversarial example that fools the victim SNN (Line 21). In this paper, we use this algorithm to find the approximation level \(a_{th}\) together with a robust set of threshold voltage, time steps, and precision scales. Therefore, we compare the accuracies of the precision-scaled AxSNN models across all these parameters and return the values that meet our quality constraint (Lines 22-24).
```
Inputs : Type of adversarial attack: \(attack\); Time steps: \(T=[t_{1},t_{2},...,t_{n}]\); Threshold voltages: \(V_{th}=[v_{1},v_{2},...,v_{n}]\); Train dataset: \(\mathcal{D}_{tr}=(X,L)\); Test dataset: \(\mathcal{D}_{ts}=(x,l)\); Perturbation budget: \(\epsilon\); Precision scales: \(s_{l}=[s_{1},s_{2},...,s_{n}]\); Quality constraint: \(Q\); Neuromorphic dataset flag: \(F_{d}\)
Outputs : Robustness level \(R\), best threshold voltage \(v\), time steps \(ts\), approximation level \(a_{th}\) and precision scale \(s\)
1: \(adv=0\)
2: for each (\(v\), \(ts\)) in (\(V_{th}\), \(T\)) do
3:    (\(model\), \(w_{l}\)) = trainAccSNN(\(v\), \(ts\), \(\mathcal{D}_{tr}\))
4:    if Accuracy(\(model\)) \(>Q\) then
5:       (\(x_{k}^{*}\), \(l_{k}^{*}\)) = AdvExGen(\(model\), \(\epsilon\), \(attack\), \(\mathcal{D}_{ts}\))
6:       \(w\) = SortInAscendingOrder(\(w_{l}\))
7:       for each \(s\) in \(s_{l}\) do
8:          \(w^{p}\) = PrecisionScaling(\(w\), \(s\))
9:          \(m_{l}^{c}=\sum_{i=1}^{c}w_{i}^{p}\)
10:         \(a_{th}=(cN_{s}/T)\cdot\min(1,V_{m}/v)\cdot m_{l}^{c}\)
11:         \(model\) = ApproximateSNN(\(model\), \(w^{p}\), \(a_{th}\))
12:         if \(F_{d}\) is TRUE then
13:            \(\mathcal{D}_{ts}\) = ApproximateQuantizedFilter(\(q_{t}\), \(\mathcal{D}_{ts}\))
14:         end if
15:         (\(x_{k}^{\prime}\), \(l_{k}^{\prime}\)) = AdvAttacks(\(model\), \(\epsilon\), \(attack\), \(x_{k}^{*}\), \(l_{k}^{*}\))
16:         if \(l_{k}^{\prime}\neq l_{k}^{*}\) then
17:            adv++
18:         else
19:            NOP
20:         end if
21:         \(R(\epsilon)=(1-adv/\text{size}(\mathcal{D}_{ts}))\times 100\)
22:         if \(R\geq Q\) then
23:            return (\(R\), \(v\), \(ts\), \(s\), \(a_{th}\))
24:         end if
25:      end for
26:   end if
27: end for
```
**Algorithm 1** Precision-Scaling in AxSNNs
### _Approximate Quantization Aware Filtering_
For SNNs fed by dynamic vision sensors (DVS), the above-discussed defense techniques for frame-based sensors cannot be applied directly due to the event-driven nature of neuromorphic images. Therefore, we present an additional approximate quantization aware filter (AQF) to remove uncorrelated events from the neuromorphic images. Our proposed Algorithm 2 removes adversarial perturbation noise from an event-driven neuromorphic dataset \(E\). The dataset \(E\) is represented in the form \((x,y,p,t)\), where \(x\), \(y\), \(p\) and \(t\) denote the x-coordinate, the y-coordinate, the polarity, and the timestamp of the event, respectively. Events \(e\) are embedded in a spatio-temporal domain and are therefore correlated with their neighbors. Our algorithm calculates this correlation between events; if the correlation of an event falls below certain spatio-temporal thresholds (\(s\), \(T1\), \(T2\)), the event is removed, because events with very low correlation are most likely noise introduced by adversarial perturbations.
```
Inputs : List of events: \(\mathcal{D}_{ts}(x,y,p,t)=Events\); Quantization step: \(q_{t}\)
Outputs : Filtered quantized dataset \(D_{q}\)
1: \(M=0\)
2: \(activity=0\), \(s=2\), \(T1=5\), \(T2=50\)
3: for \(e\) in \(Events\) do
4:    \(e\) = round(\(e/q_{t}\)) \(\cdot\) \(q_{t}\)
5:    for \(i\) in (\(x_{e}-s\), \(x_{e}+s\)) do
6:       for \(j\) in (\(y_{e}-s\), \(y_{e}+s\)) do
7:          if not(\(i==x_{e}\) and \(j==y_{e}\)) then
8:             \(M[i][j]=t_{e}\)
9:          end if
10:         if not(\(i==x_{e}\) and \(j==y_{e}\)) then
11:            \(activity[i][j]\) += 1
12:         end if
13:      end for
14:   end for
15:   if \(activity[i][j]>T1\) then
16:      \(M[i][j]=1\)
17:   end if
18:   if (\(t_{e}-M[x_{e}][y_{e}]>T2\)) or (\(M[x_{e}][y_{e}]==1\)) then
19:      Remove \(e\) from \(Events\)
20:   end if
21: end for
22: \(D_{q}=Events\)
23: return \(D_{q}\)
```
**Algorithm 2** ApproximateQuantizedFilter
Algorithm 2 delineates the steps involved in approximate quantization aware filtering of perturbed events in the neuromorphic dataset. First, the dataset is quantized with a fixed quantization step \(q_{t}\) (Line 4); then, the uncorrelated values are checked for each event \(e\) of the dataset (Lines 5-9). With each low correlation, a counter variable \(activity\) is increased and the pixel is flagged (Lines 10-16). Finally, the flagged uncorrelated events \(e\) are removed from the dataset (Lines 18-20) to obtain the quantized and filtered dataset \(D_{q}\) (Line 23).
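For concreteness, the sketch below is one possible Python rendering of Algorithm 2. The sensor resolution (128×128), the event-list representation, and the folding of the flagging step (Lines 15-16) into the neighbourhood scan are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def approximate_quantized_filter(events, q_t, s=2, T1=5, T2=50, W=128, H=128):
    """Filter low-correlation (likely adversarial) events, following Alg. 2."""
    M = np.zeros((W, H))          # last-seen quantized timestamps / flags
    activity = np.zeros((W, H))   # neighbourhood activity counters
    kept = []
    for (x, y, p, t) in events:
        t_q = round(t / q_t) * q_t                       # Line 4: quantize
        for i in range(max(0, x - s), min(W, x + s + 1)):
            for j in range(max(0, y - s), min(H, y + s + 1)):
                if not (i == x and j == y):
                    M[i, j] = t_q                        # refresh neighbours
                    activity[i, j] += 1                  # correlation evidence
                    if activity[i, j] > T1:
                        M[i, j] = 1                      # flag over-active pixel
        if t_q - M[x, y] > T2 or M[x, y] == 1:           # stale or flagged
            continue                                     # drop event as noise
        kept.append((x, y, p, t))
    return kept                                          # quantized, filtered D_q
```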
## V Results and Discussions
In this section, we present our results for the adversarial vulnerability analysis and adversarial defense in AxSNNs.
### _Datasets and Architectures_
We use both static and neuromorphic datasets i.e., MNIST [9] and DVS128 Gesture [10]. Both of these datasets are common for evaluating the performance of SNNs [13] in embedded platforms. As a classifier for the MNIST dataset, we employed a 7-layered SNN with three convolutional layers, two pooling, and two fully connected layers. The test accuracy of this architecture on clean inputs (without any adversarial attack) is 97%. On the other hand, as a classifier for the DVS128 Gesture dataset, we employed an 8-layered SNN with two convolutional and two fully connected layers, three pooling, and one dropout layer. The test accuracy of this architecture on clean inputs is 92%.
#### V-B1 Vulnerability analysis of AxSNNs
We first evaluate the adversarial robustness of AxSNNs by varying the approximation levels and comparing them with AccSNNs under different perturbation budgets for the static MNIST dataset. We keep the threshold voltage and time steps constant at 0.25 and 32 in this experiment. Our results in Fig. 2 and Fig. 3 show that the robustness of AxSNNs decreases as the perturbation budget increases. For example, a PGD attack (\(\epsilon\) = 0.9) drops the accuracy of the AxSNN with approximation level 0.01 from 93% (\(\epsilon\) = 0) to 77% (see labels A and B in Fig. 2), whereas the same attack drops the accuracy of the AccSNN from 96% to 89% (see labels C and D in Fig. 2). This indicates a 7% accuracy loss for the AccSNN under the PGD attack, whereas the same attack causes a 16% accuracy loss for the AxSNN; in this case, the AxSNN is about 2\(\times\) more vulnerable than the AccSNN. Likewise, a BIM attack (\(\epsilon\) = 0.9) drops the accuracy of the AxSNN with approximation level 0.01 from 93% (\(\epsilon\) = 0) to 71% (see labels E and F in Fig. 3), whereas the same attack drops the accuracy of the AccSNN from 96% to 82% (see labels G and H in Fig. 3). This indicates a 14% accuracy drop for the AccSNN, whereas the accuracy drop for the AxSNN is around 22% under the BIM attack; in this case, the AxSNN is about 1.5\(\times\) more vulnerable than the AccSNN.
Furthermore, we observe that the robustness of AxSNNs varies with the change in the approximation levels. For
Figure 4: Accuracy of AxSNN (approx. level 0.01, precision-scale FP32) under attack (\(\epsilon\) = 1) for MNIST.
Figure 3: Robustness analysis of AccSNN (approximation level 0) and AxSNN MNIST classifier under BIM attack for approximation levels 0.001, 0.01, 0.1 and 1.
Figure 2: Robustness analysis of AccSNN (approximation level 0) and AxSNN MNIST classifier under PGD attack for approximation levels 0.001, 0.01, 0.1 and 1.
instance, under no attack (\(\epsilon\)=0), the accuracy of AxSNNs drops from 96% to 93%, 51%, and 10% when we change the approximation level from 0 to 0.01, 0.1 and 1.0, respectively. Conversely, under a PGD attack (\(\epsilon\)=0.9), the accuracy of AxSNNs drops from 96% (\(\epsilon\)=0) to 77%, 25%, and 10% when we change the approximation level from 0 to 0.01, 0.1 and 1.0, respectively (see Fig. 2). A similar trend is also observed for the BIM attack (see Fig. 3).
Next, we explore the vulnerability of AxSNNs under sparse and frame attacks with the neuromorphic DVS128 Gesture dataset. We keep the threshold voltage and time steps constant at 1.0 and 80 in this experiment. Our results demonstrate that AxSNNs are vulnerable to neuromorphic attacks. For instance, the accuracy of the AxSNN drops from 92% to 12% and 10% under sparse and frame attacks, respectively, which is almost the same as for AccSNNs, as shown in Fig. 7.
#### V-B2 Precision-scaling-based Adversarial Defense
For evaluating the precision-scaling-based adversarial defense in AxSNNs for the static MNIST dataset, we start with the approximation level of 0.01 and vary the precision-scaling scales as Int8, FP16, and FP32. Specifically, we analyze the robustness of each precision-scaled AxSNN against PGD and BIM attacks, with perturbation budget (\(\epsilon\)=1.0), by comparing the accuracy corresponding to each precision scale in Fig. 4, Fig. 5, and Fig. 6 with the base model accuracy (AccSNN) in Fig. (a)a for the MNIST classification. We used \(\epsilon\)=1.0 because the accuracy of both AccSNN and AxSNN drops significantly after this value, so it becomes non-recoverable. For example, it drops to 10% with \(\epsilon\)=1.5.
Furthermore, we observe that the robustness of precision-scaled AxSNNs varies as we change parameters such as the threshold voltage, time steps, precision scale, and approximation level. For example, a PGD attack (\(\epsilon=1.0\)) results in a 12% accuracy loss for the precision-scaled AxSNN with approximation level 0.01, precision scale FP32, threshold voltage 0.75 and time steps 32, when compared to the AccSNN (see Fig. (a)a). However, changing the precision scale to FP16 and INT8 results in a 7% (see Fig. (a)a) and a 4% accuracy loss (see Fig. (a)a), respectively. Likewise, changing the threshold voltage and time steps to 1.0 and 48, respectively, recovers the accuracy of the AxSNN to 95%. Similarly, a BIM attack (\(\epsilon=1.0\)) results in a 15% accuracy loss for the precision-scaled AxSNN with approximation level 0.01, precision scale FP32, threshold voltage 0.5 and time steps 72, when compared to the AccSNN (see Fig. (a)a). However, changing the precision scale to FP16 and INT8 results in a 4% (see Fig. (a)a) and a 3% accuracy loss (see Fig. (a)a), respectively, for the same AxSNN when compared to the AccSNN (see Fig. (a)a). Similarly, changing the threshold voltage and time steps to 1.25 and 64, respectively, recovers the accuracy of the AxSNN to 97%. This highlights the importance of our proposed Algorithm 1 for finding a robust sweet spot of threshold voltage, time steps, approximation level, and precision scale under attack. It is important to note that some deviation from the above robustness trend is often observed at very high threshold voltages, i.e., greater than 1.75, and high approximation levels, i.e., greater than 0.1. This is not surprising, because SNNs typically lose performance at very high threshold voltages and approximation levels, since the spikes that drive the LIF neurons may not cross the threshold voltage, compromising the performance of the whole network.
We test our proposed Algorithm 1 using threshold voltage in the range of 0.25 to 2.25 with an interval of 0.25, time steps in the range of 32 to 80 with an interval of 8, and precision scales as Int8, FP16, and FP32. As shown in Table I, our Algorithm 1 identifies the best parameter configurations even under PGD and BIM attacks. For example, our algorithm determines the approximation level 0.01 for precision-scaled AxSNN with the precision scale FP32, threshold voltage 1.0, and time steps 48 for accuracy of 97% even under PGD attack (\(\epsilon=1.0\)). However, it determines an approximation level 0.011 for precision-scaled AxSNN with the precision scale INT8, threshold voltage 0.75, and time steps 32 for accuracy
Figure 5: Accuracy of AxSNN (approx. level 0.01, precision-scale FP16) under attack (\(\epsilon\) = 1) for MNIST.
Figure 6: Accuracy of AxSNN (approx. level 0.01, precision-scale INT8) under attack (\(\epsilon\) = 1) for MNIST.
Figure 7: Accuracy of AccSNN without attack (\(\epsilon\)=0) for MNIST and accuracy of AccSNN and AxSNN for DVS128 Gesture with and without attacks.
of 88% under the same attack. It is worth mentioning that the robustness of AxSNNs varies with the precision scales, and thus our algorithm provides different approximation levels for different combinations of threshold voltage, time steps, and precision scale. For instance, a PGD attack (\(\epsilon=1.0\)) on the AxSNN with approximation level 0.1, time steps 32, and threshold voltage 0.25 results in a 72% accuracy loss, whereas the same attack on the precision-scaled AxSNN with approximation level 0.01 and precision scale Int8 leads to only a 17% accuracy loss on the static MNIST dataset (a 4\(\times\) robustness improvement). A similar trend is observed with the BIM attack.
#### V-B3 AQF-based Adversarial Defense
To evaluate the effectiveness of the proposed AQF-based adversarial defense in precision-scaled AxSNNs, we use Algorithm 2 for DVS128 Gesture classification under the sparse and frame attacks. Since training AxSNNs takes a very long time, we limit ourselves to testing the proposed Algorithm 2 with threshold voltage 1.0 and time steps 80 only. Note that these parameter settings are the most common in the SNN research community for neuromorphic datasets [14]. Table II lists the approximation levels identified by our algorithm for precision scales 0.015 and 0.01. We observe that a sparse attack on the AxSNN leads to a 77% accuracy loss when compared to the AccSNN without any attack. However, the AQF-based adversarial defense in the precision-scaled AxSNN recovers the accuracy close to the baseline. For instance, incorporating the AQF-based defense with a precision scale of 0.015 and an approximation level of 0.1 in a precision-scaled AxSNN under a sparse attack recovers the accuracy to almost 90%, which is only 2% below the baseline accuracy. Interestingly, a frame attack on the AxSNN leads to an 82% accuracy loss when compared to the AccSNN. However, the AQF-based defense with a precision scale of 0.015 and an approximation level of 0.1 in a precision-scaled AxSNN under a frame attack recovers the accuracy to almost 91%, which is only 1% below the baseline accuracy. The reason behind such a tremendous improvement in robustness is that AQF masks noisy events that have a low correlation with each other in a perturbed neuromorphic dataset.
## VI Conclusion
In this paper, we extensively explored the adversarial robustness of AxSNNs and proposed two novel defense methods, precision scaling and approximate quantization aware filtering (AQF), for designing adversarially robust AxSNNs. To demonstrate the effectiveness of these defense methods, we employed the static MNIST and the neuromorphic DVS128 Gesture datasets. Our results show that AxSNNs are more prone to adversarial attacks than AccSNNs, but precision scaling and AQF significantly improve the robustness of AxSNNs. For instance, a PGD attack on an AxSNN results in a 72% accuracy loss compared to the AccSNN without any attack, whereas the same attack on the precision-scaled AxSNN leads to only a 17% accuracy loss on the static MNIST dataset (a 4\(\times\) robustness improvement). Likewise, a sparse attack on an AxSNN leads to a 77% accuracy loss when compared to the AccSNN without any attack, whereas the same attack on an AxSNN with AQF leads to only a 2% accuracy loss on the neuromorphic DVS128 Gesture dataset (a 38\(\times\) robustness improvement).
## VII Acknowledgements
This work was partially supported by awards from U.S. Naval Research Lab under the grant N0017321C2016. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the U.S. Government or agency thereof.
|
2308.04459 | MCTS guided Genetic Algorithm for optimization of neural network weights | In this research, we investigate the possibility of applying a search
strategy to genetic algorithms to explore the entire genetic tree structure.
Several methods aid in performing tree searches; however, simpler algorithms
such as breadth-first, depth-first, and iterative techniques are
computation-heavy and often result in a long execution time. Adversarial
techniques are often the preferred mechanism when performing a probabilistic
search, yielding optimal results more quickly. The problem we are trying to
tackle in this paper is the optimization of neural networks using genetic
algorithms. Genetic algorithms (GA) form a tree of possible states and provide
a mechanism for rewards via the fitness function. Monte Carlo Tree Search
(MCTS) has proven to be an effective tree search strategy given states and
rewards; therefore, we will combine these approaches to optimally search for
the best result generated with genetic algorithms. | Akshay Hebbar | 2023-08-07T18:11:51Z | http://arxiv.org/abs/2308.04459v1 | # MCTS guided Genetic Algorithm for optimization of neural network weights
###### Abstract
In this research, we investigate the possibility of applying a search strategy to genetic algorithms to explore the entire genetic tree structure. Several methods aid in performing tree searches; however, simpler algorithms such as breadth-first, depth-first, and iterative techniques are computation-heavy and often result in a long execution time. Adversarial techniques are often the preferred mechanism when performing a probabilistic search, yielding optimal results more quickly. The problem we are trying to tackle in this paper is the optimization of neural networks using genetic algorithms. Genetic algorithms (GA) form a tree of possible states and provide a mechanism for rewards via the fitness function. Monte Carlo Tree Search (MCTS) has proven to be an effective tree search strategy given states and rewards; therefore, we will combine these approaches to optimally search for the best result generated with genetic algorithms.
genetic algorithm, mcts, optimization, reinforcement learning, neural network
## I Introduction
Genetic algorithms belong to a subclass of evolutionary algorithms that provide a method for optimization based on genetic selection principles [8]. They serve a purpose in machine learning and research development, in addition to search-tool optimization systems. The approach used in genetic algorithms is analogous to the biological concepts of chromosome generation, with operators such as selection, crossover, mutation, and recombination. GA is a population-based approach that refines a solution over successive generations. The process of evolution using GA starts with a random population and evolves it using crossover and mutation operators to generate children. The best-fit solutions are then retained, and the genetic process is repeated until the objective is achieved. We can observe that a genetic algorithm generates a tree of possible solutions, from which the best-fit solution is picked for the next iteration, limiting the search space and the computational resources required. Genetic algorithms are used in various problem domains such as decision trees, segmentation, and classification; in this paper, however, our focus is on the application of GA to optimizing neural network weights.
The Monte Carlo Tree Search approach was developed in 2006 as an application to game-tree search. The search works on the principle of cumulative rewards computed from child nodes and uses Q and N values to balance exploration and exploitation. Exploration takes a quantitative view, favoring child nodes that have rarely or never been visited, based on the visit count N. Exploitation follows a qualitative strategy, preferring child nodes with a high Q-value, i.e., a high cumulative sum of rewards.
Figure 1: Simple Genetic Algorithm
Figure 2: Outline of a Monte Carlo Tree Search
Prior research has found UCT (Upper Confidence bounds applied to Trees) to be an effective policy for MCTS. UCT provides an upper confidence bound for the tree search, helping the search balance exploration against exploitation and navigate the search space efficiently. Given that MCTS is an adversarial tree-search strategy, we may be able to apply it to the entire GA tree landscape to find optimal solutions, rather than exploring based on fitness alone. Thus, we introduce the novel MCTS-GA approach for the optimization of neural networks in this research.
## II Problem and Data Description
### _GA for Neural Network weights_
The first problem to consider when applying GA to optimizing neural network weights is the validity of the child nodes generated in the process. The crossover and mutation operators applied to these weights may produce suboptimal solutions in early generations that evolve into better solutions later. Conversely, a solution with the highest fitness in early generations may yield a suboptimal result later due to the nature of the genetic operators used. Thus, the only way to identify the overall best solution is to let the entire tree expand and compute the fitness of each child node until a valuable solution is found. However, this is computation-heavy, and an exhaustive tree search is rarely practical even for smaller trees. The number of nodes of a given tree, which makes up the search space, is given by the formula below.
\[\frac{b^{h}-1}{b-1},\quad\text{for a tree with branching factor }b\text{ and height }h\]
A quick calculation for a tree with branching factor 10 and depth 10 shows the search space to be 1,111,111,111 nodes. Genetic algorithms often perform better when the tree is large, and with increasing tree size the number of generated nodes, and hence the search space, grows exponentially.
### _Maintaining the Integrity of the weights_
A well-known issue with genetic algorithms is the problem of competing conventions, wherein the child nodes generated by evolution are not viable and have decreased fitness. An example in the case of neural networks is a crossover operator applied to the weights: the operator shuffles the weights and applies random mutations that can propagate between layers and dimensions. The modified weights may no longer be appropriate for optimizing the loss function of the given neural network and may lead to an invalid solution.
### _Data Description_
The diabetes dataset is chosen to forecast the onset of diabetes mellitus. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether a patient has diabetes, based on certain diagnostic measurements included in the dataset [1]. The dataset has been preprocessed by balancing using under sampling and random shuffling. A minmax scaler is used to scale the data.
For this classification problem we have developed a feedforward neural network with 4 hidden layers. There are 8 input nodes and (16-84-1) hidden nodes respectively in each layer.
Fig. 4: Heatmap of the diabetes dataset
Fig. 3: Application of Monte Carlo Tree Search applied to Genetic Algorithm
The neural network uses a sigmoid activation function, a binary cross-entropy loss function, and the Adam optimizer with a learning rate of 0.01. The network is trained for 200 epochs with a batch size of 10.
The weights of this neural network are the optimization target of our MCTS-GA approach. Each layer's weights are vectorized, concatenated, and labelled to form an individual, upon which the algorithm is applied.
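A minimal sketch of this encoding is given below. The helper names are hypothetical, and NumPy is assumed; the key point is that the flattening is invertible so that an individual can be mapped back onto the network.

```python
import numpy as np

def weights_to_individual(layer_weights):
    """Flatten per-layer weight arrays into one genome; remember the shapes."""
    shapes = [w.shape for w in layer_weights]
    genome = np.concatenate([w.ravel() for w in layer_weights])
    return genome, shapes

def individual_to_weights(genome, shapes):
    """Split a flat genome back into per-layer weight arrays."""
    weights, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        weights.append(genome[offset:offset + size].reshape(shape))
        offset += size
    return weights
```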
## III Approach
In our approach we try to address both the issues mentioned above by combining the approaches of MCTS and GA. We can take advantage of the genetic algorithm structure as it generates a tree with a mechanism to evaluate the reward in terms of the fitness of the individual. MCTS will use the same underlying tree structure along with the fitness function to calculate the Q-value which indicates the cumulative sum of rewards; the N-value which indicates the visit count of each node is also maintained for the calculation of upper bounds. The overall structure follows the complete expansion of child nodes using GA and using MCTS with UCT policy to search for the optimal nodes with fitness function as a method to assign rewards.
The process of evolution using GA starts with a random population, which in our case consists of the weights of the neural network. We generate the initial population P\({}_{0}\) of a given size by adding random uniform perturbations to each dimension of the weights of the neural network, thereby creating children with random weights.
Next, we introduce the concept of _genetic action_ for MCTS as illustrated by dashed box in figure 5.
The genetic action consists of the following three genetic operations applied on the parent population.
* Selection
* Crossover
* Mutation
Selection: Various selection strategies, such as roulette wheel, tournament, rank, and steady state, are available for filtering individuals of higher fitness. The selection mechanism used in this approach is k-way tournament selection: k individuals are sampled at random from the population, and the fittest among them is selected.
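A minimal sketch of k-way tournament selection follows, assuming fitness values are precomputed per individual; the function name and default k are illustrative.

```python
import random

def tournament_select(population, fitness, k=3):
    """k-way tournament: sample k individuals at random, keep the fittest."""
    contenders = random.sample(range(len(population)), k)
    winner = max(contenders, key=lambda idx: fitness[idx])
    return population[winner]
```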
Crossover: A crossover operator is then applied on the initial population to generate children. 1-point crossover is applied where we have restricted the crossover operator to each layer of the neural network to mitigate the problem of competing conventions and maintain the integrity of the weights of the neural network.
Mutation: The mutation operator introduces random changes in the population. In our approach, we apply random mutation to each layer of the neural network by swapping the weight values of two randomly chosen individuals. The subsequent child nodes are selected with the UCT policy of MCTS from the root node until a leaf node is found. A child among the leaf nodes is selected at random, and the corresponding Q and N values are backpropagated. The UCT policy using UCB1 is as follows.
Fig. 5: Application of Monte Carlo Tree Search applied to Genetic Algorithm
\[\text{UCB1}(i)=\frac{Q_{i}}{N_{i}}+c\sqrt{\frac{\ln N}{N_{i}}}\,,\]
where \(Q_{i}\) and \(N_{i}\) are the cumulative reward and visit count of the \(i^{\text{th}}\) child node, \(N\) is the visit count of the parent node, and \(c\) is the exploration constant.
_Equation 1. Upper confidence bounds_
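One possible implementation of child selection under this policy is sketched below. Representing each child by its \((Q, N)\) pair and passing the parent's visit count explicitly are illustrative choices, not the paper's data structures.

```python
import math

def uct_select(children, parent_visits, c=math.sqrt(2)):
    """Return the index of the child maximizing the UCB1 score.

    children      : list of (Q, N) pairs (cumulative reward, visit count)
    parent_visits : visit count N of the parent node
    """
    def ucb1(stats):
        q, n = stats
        if n == 0:
            return float("inf")            # explore unvisited children first
        return q / n + c * math.sqrt(math.log(parent_visits) / n)
    return max(range(len(children)), key=lambda i: ucb1(children[i]))
```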
The next step in the process is the _rollout_ (simulation) phase of the MCTS approach. For the selected child node, an _evolutionary rollout_ is applied, wherein the individual is evolved using a (\(\mu+\lambda\)) evolution strategy, unlike the random-action simulations performed in prior research [3]. The reason for applying the evolutionary rollout is to determine whether the genetically mutated individual already has the highest fitness possible for its phenotype. We call this process _ageing_ the individual, by analogy with the biological phenomenon of ageing. The ageing process introduced in the rollout thus determines the generation ("age") in which the individual is best suited to mutate again. The rollout/ageing process replaces the individual if a better fitness is found in later generations of the evolution strategy.
This process is repeated until the specified tree depth is reached. The approach offers computational flexibility through configurable parameters such as tree height, tournament size, number of rollout generations, and branching factor. These parameters can be combined according to the available computational capacity. Thus, the application of the genetic action and the evolutionary rollout, in combination with MCTS, forms the basis of the approach discussed in this paper.
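The rollout/ageing step can be sketched as follows, assuming user-supplied `fitness_fn` and `mutate` functions; the default \(\mu\) and \(\lambda\) values are illustrative assumptions.

```python
def evolutionary_rollout(individual, fitness_fn, mutate, mu=1, lam=4, generations=10):
    """(mu + lambda) rollout ("ageing"): evolve the selected individual for a
    few generations and return the best variant encountered."""
    parents = [individual]
    best, best_fit = individual, fitness_fn(individual)
    for _ in range(generations):
        offspring = [mutate(p) for p in parents for _ in range(lam)]
        pool = sorted(parents + offspring, key=fitness_fn, reverse=True)
        parents = pool[:mu]                       # (mu + lambda) survivor selection
        top_fit = fitness_fn(parents[0])
        if top_fit > best_fit:                    # the individual is replaced if a
            best, best_fit = parents[0], top_fit  # fitter "age" is found
    return best, best_fit
```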
## IV Results
The MCTS-GA approach is a novel mechanism for optimization and is designed to be data-agnostic. The genetic-algorithm representation can be configured in multiple ways, which makes this approach suitable for a wide range of optimization problems. In this experiment, MCTS-GA was run using UCT for 20 generations (tree depth), with rollouts configured to 10 generations and a branching factor of 5. The GA was run for 200 generations, and the neural network was trained for 200 epochs.
The results obtained from primary testing has proven to be positive. The MCTS-GA approach was able to optimize the neural network weights for better classification of the diabetes data and obtained better accuracy results.
The comparison of the results obtained with those of the original neural network and the canonical genetic algorithm is shown below.
The classification accuracy can also be noticed from the roc-auc curves indicated below.
The confusion matrix for the three approaches compared are shown below.
## V Discussion
The experiment confirms that MCTS-GA works for the optimization of neural network weights. The optimization of the weights, and thus the classification, is better with MCTS-GA than with the genetic algorithm or the feedforward neural network approach. Although the improvement is not large, MCTS-GA runs in a time comparable to the other two techniques. There is scope for improving the algorithm discussed here, and the representations of different problems are
Fig. 6: ROC-AUC curve for neural network, genetic algorithm and MCTS-GA respectively
Fig. 7: confusion matrix for neural network, genetic algorithm and MCTS-GA respectively
to be tested. In all, we discussed a novel approach that can prove to be a strong and valid optimization technique in the future.
|
2304.06831 | DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph
Neural Network Inference | Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due
to their effectiveness in analyzing and predicting the evolution of complex
interconnected graph-based systems. However, hardware deployment of DGNNs still
remains a challenge. First, DGNNs do not fully utilize hardware resources
because temporal data dependencies cause low hardware parallelism.
Additionally, there is currently a lack of generic DGNN hardware accelerator
frameworks, and existing GNN accelerator frameworks have limited ability to
handle dynamic graphs with changing topologies and node features. To address
the aforementioned challenges, in this paper, we propose DGNN-Booster, which is
a novel Field-Programmable Gate Array (FPGA) accelerator framework for
real-time DGNN inference using High-Level Synthesis (HLS). It includes two
different FPGA accelerator designs with different dataflows that can support
the most widely used DGNNs. We showcase the effectiveness of our designs by
implementing and evaluating two representative DGNN models on ZCU102 board and
measuring the end-to-end performance. The experiment results demonstrate that
DGNN-Booster can achieve a speedup of up to 5.6x compared to the CPU baseline
(6226R), 8.4x compared to the GPU baseline (A6000) and 2.1x compared to the
FPGA baseline without applying optimizations proposed in this paper. Moreover,
DGNN-Booster can achieve over 100x and over 1000x runtime energy efficiency
than the CPU and GPU baseline respectively. Our implementation code and
on-board measurements are publicly available at
https://github.com/sharc-lab/DGNN-Booster. | Hanqiu Chen, Cong Hao | 2023-04-13T21:50:23Z | http://arxiv.org/abs/2304.06831v1 | # DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference
###### Abstract
Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due to their effectiveness in analyzing and predicting the evolution of complex interconnected graph-based systems. However, hardware deployment of DGNNs still remains a challenge. First, DGNNs do not fully utilize hardware resources because temporal data dependencies cause low hardware parallelism. Additionally, there is currently a lack of generic DGNN hardware accelerator frameworks, and existing GNN accelerator frameworks have limited ability to handle dynamic graphs with changing topologies and node features. To address the aforementioned challenges, in this paper, we propose DGNN-Booster, which is a novel Field-Programmable Gate Array (FPGA) accelerator framework for real-time DGNN inference using High-Level Synthesis (HLS). It includes two different FPGA accelerator designs with different dataflows that can support the most widely used DGNNs. We showcase the effectiveness of our designs by implementing and evaluating two representative DGNN models on ZCU102 board and measuring the end-to-end performance. The experiment results demonstrate that DGNN-Booster can achieve a speedup of up to 5.6\(\times\) compared to the CPU baseline (6226K), 8.4\(\times\) compared to the GPU baseline (A6000) and 2.1\(\times\) compared to the FPGA baseline without applying optimizations proposed in this paper. Moreover, DGNN-Booster can achieve over 100\(\times\) and over 1000\(\times\) runtime energy efficiency than the CPU and GPU baseline respectively. Our implementation code and on-board measurements are publicly available at [https://github.com/share-lab/DGNN-Booster](https://github.com/share-lab/DGNN-Booster).
## I Introduction
Graph Neural Networks (GNNs) are powerful tools for capturing relationships within graph-structured data and can be applied in a wide range of domains, including recommendation systems [1], drug discovery [2], fraud detection [3] and traffic prediction [4]. In real-world applications, DGNNs have several advantages over traditional GNNs. They can capture temporal dependencies between the nodes and edges in a graph and thus achieve better performance in temporal-related tasks such as traffic pattern prediction [5] and stock price forecasting [6]. Additionally, DGNNs are highly flexible by combining different types of GNNs for spatial encoding and Recurrent Neural Networks (RNNs) for temporal encoding, resulting in improved performance.
While different types of DGNNs have seen success in software [7, 8, 9], challenges remain in their hardware deployment: (1) _Low parallelism_. It is hard to parallelize the computation on hardware because of temporal data dependencies between graphs at different time steps. (2) _Large memory consumption and frequent memory access_. The time-evolving graph embeddings lead to large memory consumption and frequent data transfers between on-chip and off-chip memory. (3) _High energy consumption_. DGNNs have high energy consumption due to computation-intensive matrix multiplications and complex mathematical operations.
To address the aforementioned challenges, in this paper, we propose **DGNN-Booster**, which is a generic FPGA accelerator framework for DGNNs that achieves high-speed and low energy consumption on-board inference and can be applied to various popular DGNNs. Our contributions can be summarized as follows:
1. **Generic and open-source.** DGNN-Booster is a model-generic framework, developed using High-Level Synthesis (HLS) for ease of use. It has modularized processing elements (PEs) for GNN and RNN and supports multiple types of GNNs and RNNs. It's publicly available, with on-board measurement and end-to-end functionality verified by crosschecking with PyTorch code.
2. **Hardware efficient.** DGNN-Booster has multi-level parallelism with hardware architecture optimizations, aiming to deliver real-time performance with lower energy consumption compared to CPU and GPU. Different graphs at different time steps can be streamed in consecutively and processed on-the-fly.
3. **Two accelerator designs with different dataflows.** DGNN-Booster has two designs that support different dataflows between GNN and RNN. DGNN-Booster V1 overlaps them in adjacent time steps and DGNN-Booster V2 connects them in data streaming within one time step. Both designs feature data streaming inside RNN for increased parallelism and lower on-chip memory consumption.
4. **On-board evaluation.** We verify DGNN-Booster on Xilinx ZCU102 FPGA using two popular temporal graph datasets with varying graph sizes at different time steps based on two representative DGNN models. We also do an ablation study to demonstrate the effectiveness of our multi-level parallelism design.
## II Background about DGNNs
Dynamic Graph Neural Networks (DGNNs) are GNNs designed for dynamic graph structures and features. According to a recent survey [10], DGNNs can be classified into two different categories: discrete-time DGNNs and continuous-time DGNNs. DGNN-Booster supports discrete-time DGNNs, which use a set of ordered graphs (snapshots) to represent dynamic graphs.
\[DG=\{G^{1},G^{2},...,G^{T}\} \tag{1}\]
where T is the number of snapshots. This discrete-time representation of dynamic graphs enables the use of traditional static GNNs for spatial information encoding and RNNs for temporal information encoding.
We identify three types of discrete-time DGNNs based on the dataflow relationship between GNN and RNN, with a summary in Table I.
* **Stacked DGNNs.** It is the most straightforward way to model a discrete-time dynamic graph. GNN encodes them as time-series information and feeds them into RNN. This process can be represented as: \[\begin{split}& X^{1},X^{2},...,X^{t}=\text{GNN}(G^{1},G^{2},...,G^{t}) \\ & O^{t+1}=\text{RNN}(X^{1},X^{2},...,X^{t})\end{split}\] (2) where \(G^{t}\) is the node embedding of the snapshot at time \(t\), \(X^{t}\) is the updated node embedding by GNN at time \(t\), \(O^{t+1}\) is the output of RNN for time \(t+1\). The high-level dataflow diagram of this type of DGNN is shown in Fig. 1.
* **Integrated DGNNs.** This type of discrete-time graph encoding combines GNN and RNN together within one time step by replacing matrix multiplications in RNN with graph-related operations, such as graph convolution. It can be expressed as the following equations: \[\begin{split}& X_{1}^{t}=\text{GNN}1(G^{t})\\ & X_{2}^{t}=\text{GNN}2(G^{t})\\ & G^{t+1}=\text{RNN}(X_{1}^{t},X_{2}^{t})\end{split}\] (3) where \(G^{t}\) represents the node embedding of the snapshot at time \(t\). \(X_{1}^{t}\) and \(X_{2}^{t}\) are two different updated node embeddings by GNN1 and GNN2 with different weights. The high-level dataflow diagram of this type of DGNN is shown in Fig. 2.
* **Weights-evolved DGNNs.** The dataflow between GNN and RNN of this type of DGNN is similar to Stacked DGNNs. The difference is in what is evolved by RNN. Different from stacked DGNN, where node embeddings updated by GNN are evolved by RNN, the weights of GNN are evolved by RNN. It can be expressed as the following equations: \[\begin{split}& W^{t}=\text{RNN}(W^{t-1})\\ & O^{t}=\text{GNN}(W^{t},G^{t})\end{split}\] (4) where \(W^{t}\) is the weight of GNN at time \(t\), \(G^{t}\) is the node embedding at time \(t\), \(O^{t}\) is the output of GNN at time \(t\). The high-level dataflow diagram of this type of DGNN is shown in Fig. 3.
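As a concrete reference point, the stacked dataflow of Eq. (2) can be sketched in a few lines of PyTorch. The snapshot objects with `x` (node features) and `edge_index` attributes and the `nn.GRU`-style RNN interface are assumptions for illustration, not the accelerator's HLS implementation.

```python
import torch

def stacked_dgnn(snapshots, gnn, rnn):
    """Stacked DGNN dataflow of Eq. (2): shared GNN per snapshot, then RNN."""
    hidden, out = None, None
    for g in snapshots:                    # ordered snapshots G^1 .. G^T
        x = gnn(g.x, g.edge_index)         # spatial encoding X^t, shape (V, F)
        out, hidden = rnn(x.unsqueeze(0), hidden)  # temporal encoding over t
    return out.squeeze(0)                  # prediction O^{T+1} per node
```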
## III Motivations and Innovations
### _Related Works and motivations_
There are some recent developments in DGNN hardware accelerators. Zhou et al. [19] perform model-architecture co-design on memory-based Temporal Graph Neural Networks. Cambricon-G [20] is the first hardware accelerator aiming to exploit more opportunities for data reuse using multidimensional multilevel tiling. Chakaravarthy et al. [21] finish the first scaling study on DGNNs by designing a multi-GPU DGNN training system. DynaGraph [22] and TGL [23] are another two high-performance DGNN training frameworks on GPU focusing on spatial and temporal knowledge unifying and simple user configuration.
Fig. 1: A high-level overview of CPU and GPU implementation dataflow of stacked DGNNs. The output from GNN at different time steps will be fed into RNN in a sequential manner. Only the output from the RNN in the last time step will be used for the following computation. (NE: node embedding)
Fig. 3: A high-level overview of CPU and GPU implementation dataflow of weights-evolved DGNNs. Weights are evolved by RNN and used by GNN in a sequential manner. (NE: node embedding)
Fig. 2: A high-level overview of CPU and GPU implementation dataflow of integrated DGNNs. The output from RNN in the last time step will be used as the input of GNN in the next time step. Two GNNs are in sequential. The GNNs and RNN are also computed in a sequential manner. (NE: node embedding)
However, several challenges remain in DGNN hardware deployment. **High energy consumption and low computation-resource utilization.** Previous works primarily focus on deploying DGNNs on GPUs. However, these designs suffer from high energy consumption and low computation-resource utilization because of temporal data dependencies. **Lack of parallelism between GNN and RNN.** Previous research treats the GNN and RNN as separate parts, which limits parallelism. **Lack of integration of GNN and RNN optimizations into a single system.** Previous research usually optimizes the GNN and RNN individually, which prevents optimal hardware efficiency.
### _Innovations_
Motivated by these challenges, we propose DGNN-Booster to achieve high-speed and low energy consumption DGNN inference on FPGA. Our design can cover most DGNN types shown in Table. I. It has several advantages over previous DGNN accelerator designs:
* **Better hardware performance with high flexibility.** DGNN-Booster has lower latency and energy consumption than CPU and GPU. Its modularized design of GNN and RNN using High-level Synthesis (HLS) allows for easy integration of different GNNs and RNNs. Additionally, GNN is implemented using the message passing mechanism, and we emphasize DGNN-Booster's support for edge embeddings, which are not considered by existing DGNN accelerators but are widely used in most GNN models.
* **Multi-level parallelism.** There are two levels of parallelism in our design. In higher-level parallelism, we parallelize GNN and RNN in adjacent time steps in DGNN-Booster V1 while parallelizing GNN and RNN within one time step in V2. The DGNN type supported is shown in Table I. Moreover, we overlap graph loading with GNN inference in V1. In lower-level parallelism, we implement GNN using message passing mechanism based on GenGNN [24]. The message passing and node transformation are in streaming DGNN-Booster V2. Besides, we implement data streaming for different stages inside RNN in both accelerator designs.
* **Harware efficient architecture design.** We propose a task scheduling scheme to allocate the most suitable tasks for CPU and FPGA. Besides, we implement graph renumbering and format transformation to make our design more hardware efficient. Additionally, we utilize different types of RAMs on-chip to achieve memory efficiency.
## IV Hardware Architecture Design
DGNN-Booster is developed based on a CPU-FPGA heterogeneous platform, where a host program loads weights and node features to DRAM and does graph preprocessing. After graph preprocessing finishes, FPGA loads prepared data to on-chip buffers via PCIe. The FPGA accelerator is optimized for various DGNNs with customized IPs designed using HLS, which contain parallel processing elements (PEs) for concurrent inference of GNN and RNN to achieve optimal hardware performance.
### _Graph preprocessing and data communication_
The input graphs to DGNN-Booster are in the coordinate (COO) format, which is the most widely used format in dynamic graph datasets. In COO format, edges are stored in an arbitrarily ordered list, where each list entry consists of the source node, the destination node, the data, and the time associated with the edge. The host program is responsible for slicing the large input graph into small snapshots in temporal order, based on a chosen time splitter. The time splitter should be set appropriately so that the snapshots are neither too large nor too small. During snapshot generation, the CPU also calculates the number of nodes and edges of each snapshot.
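A host-side sketch of this slicing step is shown below, assuming the COO stream is given as NumPy arrays; the function name is hypothetical.

```python
import numpy as np

def slice_snapshots(src, dst, t, time_splitter):
    """Bucket COO edges (src, dst, t) into fixed-duration snapshots and
    report the node/edge counts the FPGA needs for each one."""
    snapshots = []
    for start in np.arange(t.min(), t.max() + time_splitter, time_splitter):
        mask = (t >= start) & (t < start + time_splitter)
        edges = np.stack([src[mask], dst[mask]])    # 2 x E_t edge list
        num_edges = edges.shape[1]
        num_nodes = len(np.unique(edges)) if num_edges else 0
        snapshots.append((edges, num_nodes, num_edges))
    return snapshots
```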
The data communication modes of the weights and snapshot information are different. The weights are shared between different time steps, so the overhead of weight loading is a one-time cost before the computation on FPGA starts. The edge list, node embedding, edge embedding and the number of nodes and edges of each snapshot are sent from DRAM to on-chip buffers in the order of time waiting for computation on FPGA. Since the on-chip memory resources are very limited on FPGA, it is impossible to transmit the information of snapshots with different node and edge features at different time steps all at once to the on-chip buffers. As a result, only the information of the snapshot to be processed in the next time step will be sent to on-chip buffers.
### _Graph renumbering and format transformation_
During FPGA runtime, only a snapshot is stored in on-chip buffers. To ensure the correct data access to the on-chip buffer,
TABLE I: A summary of different types of discrete-time DGNNs and their dataflow types. We also showcase which accelerator design in DGNN-Booster can be applied.

| DGNN type | Related works | Dataflow type | DGNN-Booster V1 | DGNN-Booster V2 |
| --- | --- | --- | --- | --- |
| Stacked DGNN | GCRN-M1 [11], RgCNN [12], WD-GCN [13], DyGGNN [14] | Data dependencies between GNN and RNN within one time step; independent GNNs at different time steps | ✓ | ✓ |
| Integrated DGNN | GCRN-M2 [11], GC-LSTM [15], LRGCN [16], RE-Net [17] | Data dependencies between GNN and RNN in adjacent time steps; dependent GNNs at different time steps | | ✓ |
| Weights-evolved DGNN | EvolveGCN [18] | Weights of the GNN are evolved by the RNN; independent GNNs at different time steps | ✓ | |
we need to know the raw index of each node in the large raw graph and its corresponding address in the BRAM. The host program will generate a renumbering table for each snapshot to take a record of the node index renumbering information. During inference, processing units (PEs) will first check this renumbering table to do node index transformation and then correctly fetch data in the on-chip buffer. The renumbering table will also guide the FPGA to correctly fetch data from DRAM and write back.
Although the COO format is convenient for graph producers and is usually the raw data format in real-life applications, it is not hardware-friendly, because finding neighboring nodes involves irregular computation patterns and usually incurs a large overhead on FPGA. Instead of using the COO format, we use the compressed sparse row (CSR) or compressed sparse column (CSC) format for GNN inference, by designing an on-FPGA converter for format transformation.
Graph renumbering and format transformation together will ensure the data of the snapshot is stored in a continuous space in on-chip buffers, which avoids irregular on-chip memory access.
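The format conversion itself reduces to a prefix sum over out-degrees. A NumPy sketch is given below; it is functionally equivalent to what such a converter computes, though not an HLS implementation.

```python
import numpy as np

def coo_to_csr(src, dst, num_nodes):
    """Convert a renumbered COO edge list to CSR (row pointers + columns)."""
    order = np.argsort(src, kind="stable")      # group edges by source node
    col_idx = dst[order]
    row_ptr = np.zeros(num_nodes + 1, dtype=np.int64)
    np.add.at(row_ptr, src + 1, 1)              # out-degree histogram
    row_ptr = np.cumsum(row_ptr)                # prefix sum -> row pointers
    return row_ptr, col_idx
```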
### _Multi-level parallelism_
The techniques used to achieve multi-level parallelism are different in DGNN-Booster V1 and V2.
#### V-C1 DGNN-Booster V1
DGNN-Booster V1 is proposed to reduce the hardware overhead caused by sequential RNN and GNN computation in stacked DGNN and weighs-evolved DGNN, whose CPU and GPU dataflows are shown in Fig. 1 and Fig. 3 respectively. This design makes GNN and RNN in adjacent time steps in parallel. Detailed FPGA implementation of DGNN-Booster V1 is shown in Fig. 4.
* **Ping-pong buffers and data-streaming FIFOs in RNN.** We avoid data conflict by using two pairs of ping-pong buffers for weights and node embeddings. As shown in Fig. 4, GNN can read the weights from buffer 2 while RNN can update the weights for the next time step and store the results in buffer 1 at the same time. Similarly, the ping-pong buffers for node embeddings allow for parallel data loading with GNN inference. We also utilize FIFOs (first-in first-out) to connect different computation stages inside RNN so that these stages can be pipelined at the node level to further reduce the latency of RNN.
* **Execution flow.** The inference process within one time step can be divided into four separate parts: graph loading (GL), message passing (MP), node transformation (NT), and RNN. Among them, MP must wait for the result of GL, and NT must wait for MP and RNN. To achieve optimal performance, we schedule the RNN of time step \(t+1\) in parallel with MP of time step \(t\), and GL of time step \(t+1\) in parallel with NT of time step \(t\). This is because MP and RNN are more computation-intensive than GL and NT, and this schedule avoids workload imbalance.
#### V-C2 DGNN-Booster V2
DGNN-Booster V2 is proposed to reduce the hardware overhead caused by sequential RNN and
Fig. 4: Optimized FPGA implementation of DGNN-Booster V1 based on EvolveGCN and V2 based on GCRN-M2. DGNN-Booster V1 overlaps the computation of GNN and RNN in adjacent time steps by using ping-pong buffers to store the weights. Graph loading is overlapped with GNN inference using the same way. DGNN-Booster V2 overlaps the computation of GNN and RNN within one time step by utilizing node queues implemented with FIFOs.
GNN computation in stacked DGNN and integrated DGNN, whose CPU and GPU dataflows are shown in Fig. 1 and Fig. 2 respectively. This design makes GNN and RNN within the same time step in parallel. The high-level dataflow and FPGA architecture of DGNN-Booster V2 is shown in Fig. 4.
* **Node queues.** The node queues are implemented using FIFOs to overlap GNN and RNN computation. When the GNN finishes the updating of one node embedding, it will send the updated node embedding to the node queue. The PEs which execute the RNN will fetch node embeddings from the node queue in the same order. At the same time, GNN will be triggered to fetch new node embeddings. As a result, GNN and RNN can work in parallel on different nodes.
* **Execution flow.** In this design, the message passing and node transformation in GNN are in data streaming. Furthermore, different stages in RNN are in data streaming. Combining these optimizations with node queues connected between GNN and RNN, we achieve node-level pipelining end-to-end.
### _Task scheduling on CPU-FPGA heterogeneous platform_
To better utilize the advantages of both CPU and FPGA, we propose a task-scheduling scheme. The CPU offers generality over a wide range of tasks, while the FPGA provides high peak performance on tasks with simple computation patterns. To achieve optimal hardware performance, we schedule graph preprocessing and renumbering on the CPU; the graph format transformation and the GNN and RNN inference are scheduled on the FPGA. This is because graph preprocessing and renumbering-table generation involve complex control flow and irregular, frequent memory accesses but little computation, whereas GNN and RNN inference involves many matrix multiplications, which are computation-intensive but follow simple computation patterns.
### _On-chip buffer design_
There are two types of RAM on FPGA. LUTRAM is built out of LUTs and is more efficient for small RAMs, as it can be created in many different sizes. For BRAM, the minimum memory size is 18 KB; if any memory space in a BRAM is unused, it cannot be used for anything else and is thus wasted. DGNN-Booster contains fine-grained pipelining inside the GNN and RNN, so the weight buffers are partitioned into many small RAMs, and storing the weights in BRAM would waste a large amount of on-chip memory. As a result, weights are allocated to LUTRAM. Conversely, since the node and edge embeddings are larger than the weights and need to be stored in contiguous on-chip space, we allocate them to BRAM. Our on-chip buffer design is memory-efficient and supports relatively large snapshots on-chip, avoiding frequent data exchange between the host and FPGA during the message-passing stage.
## V Experiments
### _Implementation Details_
We deploy DGNN-Booster using High-Level Synthesis (HLS) with Vitis HLS and Vivado on the Xilinx ZCU102 FPGA development board, shown in Fig. 5, whose available resources are listed in Table II, targeting a 100 MHz clock frequency. We choose EvolveGCN [18] as the base model for DGNN-Booster V1 and GCRN-M2 [11] for V2. Both use GCN [25] as the GNN; GRU [26] and LSTM [27] are used as the RNN in EvolveGCN and GCRN-M2, respectively. Both the weights and the graph embeddings are in 32-bit floating point.
low GPU resource utilization and a large communication overhead between CPU and GPU [31], the latency reported for the GPU baseline is slightly higher than that of the CPU.
Additionally, we evaluate the energy efficiency of DGNN-Booster on BC-Alpha and UCI datasets using a power meter connected to the board, as shown in Fig. 5. Table V and VI show the total and runtime energy efficiency of DGNN-Booster compared to CPU and GPU baselines respectively. The remarkable speedup and energy efficiency demonstrate the effectiveness of DGNN-Booster.
### _Design space exploration and ablation study_
We perform a simple design space exploration to select the optimal accelerator configuration, taking into account the trade-off between GNN and RNN computations. In DGNN-Booster V1, we allocate more DSPs to RNN since it is computationally heavier than GNN. Conversely, in DGNN-Booster V2, we allocate more DSPs to GNN since GNN is computationally heavier. Table VII provides a detailed breakdown of the DSP allocation for each version. We further evaluate DGNN-Booster through an ablation study by comparing it to a GPU and a non-optimized FPGA baseline. We test two levels of pipeline optimization (Pipeline-O1 and Pipeline-O2). Pipeline-O1 involves pipelining different stages inside RNN, Pipeline-O2 adds more optimizations by overlapping GNN and RNN at a module level. Fig. 6 shows the incremental improvement of inference speed over the two baselines. It demonstrates that pipelining GNN and RNN from a multi-level can be very effective in reducing latency.
## VI Conclusions and Future Works
In this paper, we propose **DGNN-Booster**, which is an FPGA accelerator framework including two designs with different dataflow patterns and multi-level parallelism. DGNN-Booster is open-sourced and can achieve real-time inference speed with low energy consumption. Moreover, it is a generic framework that supports various types of GNNs and RNNs, integrated with GNN and RNN hardware optimizations together to achieve optimal performance improvement. Future works include the on-board implementation to avoid redundant data communication and computation because of the similarity between snapshots in adjacent time steps. We also plan to do design space exploration to balance computation resources for spatial and temporal encoding parts.
## Acknowledgment
This work is partially supported by National Science Foundation (NSF) Grant ECCS-2202329.
TABLE: End-to-end latency of the EvolveGCN classifier on the CPU, GPU, and DGNN-Booster (FPGA) platforms, together with the FPGA speedups over the CPU and GPU baselines.
|
2308.10438 | Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep
Neural Networks | In this paper, we propose a novel layer-adaptive weight-pruning approach for
Deep Neural Networks (DNNs) that addresses the challenge of optimizing the
output distortion minimization while adhering to a target pruning ratio
constraint. Our approach takes into account the collective influence of all
layers to design a layer-adaptive pruning scheme. We discover and utilize a
very important additivity property of output distortion caused by pruning
weights on multiple layers. This property enables us to formulate the pruning
as a combinatorial optimization problem and efficiently solve it through
dynamic programming. By decomposing the problem into sub-problems, we achieve
linear time complexity, making our optimization algorithm fast and feasible to
run on CPUs. Our extensive experiments demonstrate the superiority of our
approach over existing methods on the ImageNet and CIFAR-10 datasets. On
CIFAR-10, our method achieves remarkable improvements, outperforming others by
up to 1.0% for ResNet-32, 0.5% for VGG-16, and 0.7% for DenseNet-121 in terms
of top-1 accuracy. On ImageNet, we achieve up to 4.7% and 4.6% higher top-1
accuracy compared to other methods for VGG-16 and ResNet-50, respectively.
These results highlight the effectiveness and practicality of our approach for
enhancing DNN performance through layer-adaptive weight pruning. Code will be
available on https://github.com/Akimoto-Cris/RD_VIT_PRUNE. | Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, Xiaoli Li, Weisi Lin | 2023-08-21T03:22:47Z | http://arxiv.org/abs/2308.10438v2 | # Efficient Joint Optimization of Layer-Adaptive Weight Pruning
###### Abstract
In this paper, we propose a novel layer-adaptive weight-pruning approach for Deep Neural Networks (DNNs) that addresses the challenge of optimizing the output distortion minimization while adhering to a target pruning ratio constraint. Our approach takes into account the collective influence of all layers to design a layer-adaptive pruning scheme. We discover and utilize a very important additivity property of output distortion caused by pruning weights on multiple layers. This property enables us to formulate the pruning as a combinatorial optimization problem and efficiently solve it through dynamic programming. By decomposing the problem into sub-problems, we achieve linear time complexity, making our optimization algorithm fast and feasible to run on CPUs. Our extensive experiments demonstrate the superiority of our approach over existing methods on the ImageNet and CIFAR-10 datasets. On CIFAR-10, our method achieves remarkable improvements, outperforming others by up to 1.0% for ResNet-32, 0.5% for VGG-16, and 0.7% for DenseNet-121 in terms of top-1 accuracy. On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively. These results highlight the effectiveness and practicality of our approach for enhancing DNN performance through layer-adaptive weight pruning. Code will be available on [https://github.com/Akimoto-Cris/RD_VIT_PRUNE](https://github.com/Akimoto-Cris/RD_VIT_PRUNE).
## 1 Introduction
Deep Neural Networks (DNNs) [22, 34, 35, 17, 19] play a critical role in various computer vision tasks. However, to achieve high accuracy, DNNs typically require a large number of parameters, which makes them energy-consuming and difficult to deploy on resource-limited mobile devices [16, 15]. Pruning is one of the most powerful ways to reduce the complexity of DNNs. By removing redundant parameters, the number of operations (e.g., FLOPs) can be significantly reduced, which leads to faster inference and lower energy consumption. Typically, pruning approaches are divided into two categories: structured pruning [14, 1, 9, 31, 18, 30] and weight (unstructured) pruning [27, 32, 28, 39, 16, 15]. Structured pruning approaches consider a channel or a kernel as the basic pruning unit, while weight pruning approaches consider a single weight as the basic pruning unit. The former is more hardware-friendly, and the latter achieves higher pruning ratios.
In this paper, we focus on improving weight pruning and propose a novel jointly-optimized layer-adaptive approach that achieves a state-of-the-art trade-off between FLOPs and accuracy. Recent discoveries [10, 13, 25] demonstrate that layer-adaptive sparsity is the superior pruning scheme. However, one drawback of prior layer-adaptive approaches is that they only consider the impact of a single layer when deciding the pruning ratio of that layer; the mutual impact between different layers is ignored. Another challenge is that the search space of per-layer pruning ratios grows exponentially with the number of layers. In a deep neural network, the number of layers can reach a hundred or even a thousand, which makes it very difficult to find the solution efficiently.
In our approach, we define a joint learning objective to learn the layer-adaptive pruning scheme. We aim to minimize the output distortion of the network when pruning weights on all layers under the constraint of target pruning ratio. As the output distortion is highly related to accuracy, our approach is able to maintain accuracy even at high pruning ratios. We explore an important property of the output distortion and find that the additivity property [42, 41, 38] holds when we prune weights on multiple layers. In other words, the output distortion caused by pruning all layers'
weights equals the sum of the output distortions due to pruning each layer individually. We provide a mathematical derivation of the additivity property using a Taylor series expansion.
Moreover, utilizing the additivity property, we develop a very fast method to solve the optimization via dynamic programming, whose running time grows only linearly with the number of layers. We rewrite the objective function as a combinatorial optimization problem. By defining a state function and a recursive equation between states, we decompose the whole problem into sub-problems and solve it via dynamic programming. In practice, our approach finds the solution in a few minutes on CPUs for deep neural networks. Note that, unlike approximation algorithms, dynamic programming finds the globally optimal solution, which means that our approach provides the pruning scheme with minimal output distortion. We summarize the main contributions of our paper as follows:
* We propose a novel layer-adaptive pruning scheme that jointly minimizes the output distortion when pruning the weights in all layers. As the output distortion is highly related to the accuracy, our approach maintains high accuracy even when most of the weights are pruned. We also explore an important additivity property for the output distortion based on Taylor series expansion.
* We develop a fast algorithm to solve the optimization via dynamic programming. The key idea is to rewrite the objective function as a combinatorial optimization problem and then relax the whole problem into tractable sub-problems. Our method can find the solution of a deep neural network in a few minutes.
* Our approach improves upon the state of the art on various deep neural networks and datasets.
The rest of our paper is organized as follows. We discuss related work in section 2. In section 3, we develop our approach in detail, presenting the objective function, the optimization method, and the time complexity analysis of the algorithm. In the last section, we provide comprehensive experimental results.
## 2 Related Works
This work generally falls into the magnitude-based pruning (MP) track of neural network model compression, with early works such as OBD [24]. MP ranks or penalizes weights according to some criterion (_e.g._, magnitude) and removes low-ranked weights. Many efforts have followed [24, 16]; they can be roughly divided into the following approaches according to when pruning takes place during network training.
**Post-training Pruning.** Post-training pruning schemes prune network parameters after standard network training, _i.e._, starting from a pretrained, converged model. Under this scheme, parameters can be pruned at once to meet the target sparsity constraint (one-shot pruning), or pruned gradually while the sparse model is finetuned (iterative pruning). [16] proposed an iterative pruning scheme that determines layerwise sparsity using layer-statistics heuristics. [45, 10] adopted a global pruning threshold throughout all layers in the network to meet the model sparsity constraint. [5, 33] pooled all layers together and determined pruning thresholds for different layers in an integrated fashion. [12] proposed to rewind the weights from the previous iterative pruning phase based on the lottery ticket hypothesis. LAMP [25] derived a closed-form layerwise sparsity selection from a relaxed layerwise \(l2\) distortion minimization problem that is compatible with various post-training pruning schemes, including iterative and one-shot pruning. PGMPF [4] adopted a simple \(l2\)-based layerwise
Figure 1: An example of the additivity property, collected on ResNet-32 on CIFAR-10. The vertical axis shows the output distortion when pruning only two consecutive layers. The horizontal axis shows the sum of the output distortions due to pruning the two involved layers individually. Each sub-figure corresponds to assigning all layers in the model the indicated sparsity.
pruning criterion and improved the weight masking and updating rules during finetuning. [6] adopted a one-shot pruning method by leveraging zero-invariant groups. [23] proposed to re-calibrate the biases and variances of model weights and activations, similar to the widely adopted bias correction in model quantization [11, 2]. [32] presented an iterative-pruning method that leverages a Taylor expansion of the model loss and derived a gradient-based pruning criterion. Our method leverages a Taylor expansion of the output distortion parametrized by the layer weights, which is fundamentally different from [32]. SuRP [20] recursively applies the triangle inequality and assumes a Laplacian distribution to approximate the output distortion, achieving a joint optimization similar to ours. However, our approximation is more straightforward and does not need any assumptions on the distribution.
**Pruning at Initialization.** In contrast to the previous scheme, there is an emerging line of work that aims to remove connections or neurons from scratch at the initialization of training, with the merit of avoiding pretraining and complex pruning schedules. SNIP [26] prunes parameters only once, at the initialization phase of training, using the normalized magnitudes of the parameter derivatives as the pruning criterion. [7] presented a modified saliency metric based on SNIP [26], allowing saliences of partially pruned networks to be calculated. [36] engineered the gradient flow when training sparse networks from scratch to achieve better convergence. Since pruning at initialization is outside our research scope, we refer the reader to related surveys [37] for a more comprehensive introduction.
**Other Pruning Schemes.** [3] interleaves pruning with the normal training course, gradually pruning more connections and neurons from the network. This scheme is similar to iterative pruning, except that the model is trained from scratch. ProbMask [43] similarly leverages projected gradient descent with a progressive pruning strategy to directly train sparse networks. [40] integrates supermask training with gradient-driven sparsity for training sparse networks.
Since our main contribution is an improved pruning criterion, we mainly evaluate our method under post-training unstructured pruning paradigms, such as iterative pruning and one-shot pruning. Although our method may be equally effective for other sparsity structures and pruning schemes such as pruning at initialization, we leave such validation for future work.
## 3 Approach
In this section, we present our approach in detail. We first give the formulation of our objective function and then provide the optimization method. An additivity property is derived based on Taylor series approximation. The implementation details of the dynamic programming and the analysis of the time complexity are also provided.
### Objective Function
Following the notation in [25], let \(f\) denote a neural network and define \(W^{(1:l)}=\big(W^{(1)},...,W^{(l)}\big)\) as all the parameters of \(f\), where \(l\) is the number of layers and \(W^{(i)}\) denotes the weights in layer \(i\). When we prune part of the parameters of \(f\), we obtain a modified neural network with a new parameter set \(\tilde{W}^{(1:l)}\). We view the impact of pruning as the distance between the network outputs \(f(x;W^{(1:l)})\) and \(f(x;\tilde{W}^{(1:l)})\). The learning objective is to minimize the output distortion caused by pruning under the constraint of the pruning ratio,
\[\min\ \|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\ \ s.t.\ \frac{\|\tilde{W}^{(1:l)}\|_{0}}{\|W^{(1:l)}\|_{0}} \leq R, \tag{1}\]
where \(R\) denotes the pruning ratio for the entire network.
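As an illustration of the bookkeeping in Eq. (1), the global ratio \(\|\tilde{W}^{(1:l)}\|_{0}/\|W^{(1:l)}\|_{0}\) can be computed directly in PyTorch. This is a minimal sketch, not the paper's code; it counts every parameter tensor, including biases, whereas the paper's count is over the network weights \(W^{(1:l)}\).

```python
import torch

def remaining_ratio(model: torch.nn.Module) -> float:
    # ||W~||_0 / ||W||_0 from the constraint in Eq. (1):
    # the fraction of weights that survive pruning.
    nonzero = sum(int(p.count_nonzero()) for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    return nonzero / total
```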
An important property we discover is that the expected output distortion caused by pruning all layers' weights equals the sum of the expected output distortions due to pruning each layer individually,
\[E\left(\|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\right)=\sum_{i=1}^{l}E( \delta_{i}), \tag{2}\]
where \(\delta_{i}\) denotes the output distortion when only pruning the weights in layer \(i\).
### Analysis
We now derive the additivity property mathematically, under the following two assumptions:
**Assumption 1**: _First-order Taylor expansion: the neural network \(f\) parametrized by \(W^{(1:l)}\), given a small perturbation \(\Delta W^{(1:l)}\) resulting in \(\tilde{W}^{(1:l)}=W^{(1:l)}+\Delta W^{(1:l)}\), can be expanded as follows:_
\[f(x;\tilde{W}^{(1:l)})=f(x;W^{(1:l)})+\sum_{i=1}^{l}\frac{\partial f}{ \partial W^{(i)}}\Delta W^{(i)}. \tag{3}\]
**Assumption 2**: _I.i.d. weight perturbations across layers [44]: \(\forall\,1\leq i\neq j\leq l,\ E(\Delta W^{(i)})E(\Delta W^{(j)})=0\)._
According to Eq. (3), \(\delta=\|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\) can be written as
\[\delta=\Big{(}\sum_{i=1}^{l}\Delta{W^{(i)}}^{\top}\frac{\partial f}{\partial W ^{(i)}}^{\top}\Big{)}\Big{(}\sum_{j=1}^{l}\frac{\partial f}{\partial W^{(j)} }\Delta W^{(j)}\Big{)}. \tag{4}\]
When we take the expectation of both sides of Eq. (4), the right-hand side expands into additive terms (the transpose is immaterial inside the expectation):

\[E(\delta)=\sum_{1\leq i,j\leq l}E\left({\Delta W^{(i)}}^{\top}{\frac{\partial f}{\partial W^{(i)}}}^{\top}\frac{\partial f}{\partial W^{(j)}}\Delta W^{(j)}\right). \tag{5}\]
Further, since the derivative \(\frac{\partial f}{\partial W^{(i)}}\) is a constant (we consider a trained network with fixed weights), each cross term factorizes and, by Assumption 2, vanishes:

\[E\left({\Delta W^{(i)}}^{\top}{\frac{\partial f}{\partial W^{(i)}}}^{\top}\frac{\partial f}{\partial W^{(j)}}\Delta W^{(j)}\right)=E\left(\Delta W^{(i)}\right)^{\top}{\frac{\partial f}{\partial W^{(i)}}}^{\top}\frac{\partial f}{\partial W^{(j)}}\,E\left(\Delta W^{(j)}\right)=0,\quad i\neq j. \tag{6}\]
Therefore, the cross terms (\(i\neq j\)) in Eq. (5) disappear, and we obtain:
\[E(\delta)=\sum_{i=1}^{l}E\left(\|\frac{\partial f}{\partial W^{(i)}}\Delta W^{ (i)}\|^{2}\right). \tag{7}\]
Eq. (7) is the result we want because, again, according to Assumption 1,
\[\begin{split}\frac{\partial f}{\partial W^{(i)}}\Delta W^{(i)}&=f(x;W^{(1:i-1)},\tilde{W}^{(i)},W^{(i+1:l)})\\ &-f(x;W^{(1:l)}).\end{split} \tag{8}\]
Therefore, the left-hand side of Eq. (7) is the expected output distortion when all layers are pruned, and the right-hand side is the sum of the output distortions due to pruning each single layer's weights individually, which can be used to approximate the output distortion.
We have also examined the theoretically derived additivity property empirically on a real network. As shown in Fig. 1, when we prune only two adjacent layers at a time in a pretrained model (so that only two distortion terms contribute to the right-hand side sum, while all other layers contribute zero), we observe that additivity holds quite well with marginal residuals: almost all observation points sit close to the identity line.
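The experiment behind Fig. 1 is straightforward to reproduce in PyTorch. The sketch below assumes a trained `model`, a calibration batch `x`, and plain magnitude pruning; the helper `prune_layer_` and the layer names are illustrative choices, not the paper's implementation.

```python
import copy
import torch

@torch.no_grad()
def prune_layer_(model, name, sparsity):
    # Magnitude pruning: zero the smallest-magnitude weights of one layer.
    w = dict(model.named_parameters())[name]
    k = int(sparsity * w.numel())
    if k > 0:
        thresh = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > thresh).to(w.dtype))

@torch.no_grad()
def output_distortion(dense, pruned, x):
    # Squared l2 distance between dense and pruned outputs.
    return (dense(x) - pruned(x)).pow(2).sum().item()

@torch.no_grad()
def check_additivity(model, x, layer_a, layer_b, sparsity=0.5):
    # Left-hand side of Eq. (2): prune both layers jointly.
    joint = copy.deepcopy(model)
    prune_layer_(joint, layer_a, sparsity)
    prune_layer_(joint, layer_b, sparsity)
    lhs = output_distortion(model, joint, x)
    # Right-hand side: sum of the two single-layer distortions.
    rhs = 0.0
    for name in (layer_a, layer_b):
        solo = copy.deepcopy(model)
        prune_layer_(solo, name, sparsity)
        rhs += output_distortion(model, solo, x)
    return lhs, rhs
```

If additivity holds, the two returned values should lie close to the identity line, as in Fig. 1.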
### Optimization via Dynamic Programming
By utilizing the additivity property, we can rewrite the objective function as a combinatorial optimization problem and solve it efficiently using dynamic programming. The objective function is re-written as,
\[\min\ \delta_{1}+\delta_{2}+...+\delta_{l}\ \ s.t.\ \ t_{1}+t_{2}+...+t_{l}=T, \tag{9}\]
where \(T\) denotes the total number of weights to prune and \(t_{i}\) denotes the number of weights to prune in layer \(i\). We solve (9) by decomposing the whole problem into sub-problems. The basic idea is that we define a state function and find the recursive equation between the states. The problem is solved based on the recursive equation.
Specifically, define \(g\) as the state function, where \(g_{i}^{j}\) denotes the minimal distortion caused by pruning \(j\) weights in the first \(i\) layers. Our goal is to calculate \(g_{l}^{T}\). For initialization, we have

\[g_{1}^{j}=\delta_{1}(j),\quad\text{for }0\leq j\leq T, \tag{10}\]
where \(\delta_{i}(j)\) denotes the distortion caused by pruning \(j\) weights in layer \(i\). We then have the recursive equation between the states \(g_{i}\) and \(g_{i-1}\):

\[g_{i}^{j}=\min_{k}\{g_{i-1}^{j-k}+\delta_{i}(k)\},\quad\text{where }0\leq k\leq j. \tag{11}\]
The state functions are calculated based on equation (11) in a bottom-up manner from \(g_{1}\) to \(g_{l}\). In practice, we need another variable \(s\) to store the decision of each state to know the number of weights pruned in each layer. \(s\) is defined as
\[s_{i}^{j}=\operatorname*{arg\,min}_{0\leq k\leq j}\{g_{i-1}^{j-k}+\delta_{i}(k)\}. \tag{12}\]
Algorithm 1 shows the pseudo-code for calculating the state function and finding the pruning solution.
### Time complexity analysis
The time complexity of the optimization algorithm using dynamic programming is \(O(l\times T^{2})\), as we have \(l\times T\) different states, and each state needs to enumerate the number of weights pruned in a layer. In practice, this algorithm is very fast, costing just a few seconds on CPUs for deep neural networks. We report detailed timing results in the experimental section.
```
Input:  output distortion δ_i(j) when pruning j weights in single layer i,
        for 1 ≤ i ≤ l and 0 ≤ j ≤ T.
Output: the number of weights p_i pruned in each layer i.

Initialize g_i^j = 0, where g_i^j denotes the minimal output distortion
when pruning j weights in the first i layers.
Initialize s_i^j = -1, where s_i^j denotes the number of weights pruned
in layer i when pruning j weights in the first i layers.

for i from 1 to l do
    for j from 0 to T do
        if i = 1 then
            g_1^j = δ_1(j);  s_1^j = j
        else
            g_i^j = min_{0≤k≤j} { g_{i-1}^{j-k} + δ_i(k) }
            s_i^j = argmin_{0≤k≤j} { g_{i-1}^{j-k} + δ_i(k) }
    end for
end for

p_l = s_l^T;  T = T - s_l^T
for i from l-1 down to 1 do
    p_i = s_i^T;  T = T - s_i^T
end for
```
**Algorithm 1** Optimization via dynamic programming.
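A direct Python transcription of Algorithm 1 can be sketched as follows. The distortion table `delta[i][j]` (distortion when layer `i` alone prunes `j` weights) is assumed to come from the per-layer rate-distortion curves; the inner argmin is written as a plain scan, giving the stated \(O(l\times T^{2})\) cost.

```python
def prune_allocation(delta, T):
    """delta[i][j]: output distortion when pruning j weights in layer i
    alone, for 0 <= j <= T.  Returns the per-layer prune counts p."""
    l = len(delta)
    INF = float("inf")
    # g[i][j]: minimal distortion when pruning j weights in the first i+1 layers.
    g = [[INF] * (T + 1) for _ in range(l)]
    # s[i][j]: weights pruned in layer i at the optimum of state (i, j).
    s = [[0] * (T + 1) for _ in range(l)]
    g[0] = [delta[0][j] for j in range(T + 1)]
    s[0] = list(range(T + 1))
    for i in range(1, l):
        for j in range(T + 1):
            for k in range(j + 1):  # weights pruned in layer i (k = 0 allowed)
                cand = g[i - 1][j - k] + delta[i][k]
                if cand < g[i][j]:
                    g[i][j], s[i][j] = cand, k
    # Backtrack the optimal allocation from state (l-1, T).
    p, rem = [0] * l, T
    for i in range(l - 1, -1, -1):
        p[i] = s[i][rem]
        rem -= p[i]
    return p
```

In practice, `j` would index discrete sparsity levels rather than individual weights (the paper samples \(S=100\) levels per layer), which keeps the table small.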
## 4 Experiment Results
**Implementation Details.** As our contribution to the existing pruning schemes is on the layer-wise sparsity selection, we evaluate our rate-distortion-based pruning method under different experimental settings, including iterative pruning and one-shot pruning, as well as on multiple network
| Dataset | Arch | Method | Sparsity (%) | Remaining FLOPs (%) | Top-1 (%) ↑ | Top-1 drop (%) ↓ |
|---|---|---|---|---|---|---|
| CIFAR-10 | ResNet-32 (Dense: 93.99) | LAMP [25] | 79.03 | 36.02 | 92.58 ± 0.25 | 1.41 |
| | | Ours | 58.65 | 34.5 | **93.62 ± 0.23** | **0.37** |
| | | LAMP [25] | 89.3 | 21.66 | 91.94 ± 0.24 | 2.05 |
| | | Ours | 73.76 | 22.21 | **93.34 ± 0.10** | **0.65** |
| | | LAMP [25] | 95.5 | 11 | 90.04 ± 0.13 | 3.95 |
| | | Ours | 86.57 | 11.25 | **92.56 ± 0.20** | **1.43** |
| | | LAMP [25] | 98.85 | 3.29 | 83.66 ± 0.29 | 10.33 |
| | | Ours | 95.5 | 3.59 | **90.83 ± 0.24** | **3.16** |
| | VGG-16 (Dense: 91.71) | E-R ker. [10] | 95.6 | / | 91.99 ± 0.14 | -0.79 |
| | | DPF [29] | 95 | / | 93.87 ± 0.15 | -0.13 |
| | | LAMP [25] | 95.6 | 15.34 | 92.06 ± 0.21 | -0.86 |
| | | SuRP [20] | 95.6 | / | 92.13 | -0.93 |
| | | Ours | 95.6 | 6.83 | **92.59 ± 0.17** | **-0.88** |
| | | Global [33] | 98.85 | / | 81.56 ± 3.73 | 9.64 |
| | | Uniform [45] | 98.85 | / | 55.68 ± 12.20 | 35.52 |
| | | Uniform+ [13] | 98.85 | / | 87.85 ± 0.26 | 3.35 |
| | | E-R ker. [10] | 98.85 | / | 90.55 ± 0.19 | 0.65 |
| | | LAMP [25] | 98.85 | 6.65 | 91.07 ± 0.4 | 0.13 |
| | | SuRP [20] | 98.84 | / | 91.21 | -0.01 |
| | | Ours | 98.85 | 3.43 | **92.14 ± 0.18** | **-0.43** |
| | | PGMPF [4] | / | 33 | **93.6** | 0.08 |
| | | LAMP [25] | 86.58 | 33.53 | 92.22 ± 0.05 | -0.51 |
| | | Ours | 67.21 | 35.49 | 92.76 ± 0.18 | **-1.05** |
| | DenseNet-121 (Dense: 91.14) | LAMP [25] | 95.5 | 6.45 | 90.11 ± 0.13 | 1.03 |
| | | SuRP [20] | 95.5 | / | 90.75 | 0.39 |
| | | Ours | 95.5 | 6.72 | **91.49 ± 0.21** | **-0.35** |
| | | Global [33] | 98.85 | / | 45.30 ± 27.75 | 45.84 |
| | | Uniform [45] | 98.85 | / | 66.46 ± 18.72 | 24.68 |
| | | Uniform+ [13] | 98.85 | / | 69.25 ± 19.28 | 21.89 |
| | | E-R ker. [10] | 98.85 | / | 59.06 ± 25.61 | 32.08 |
| | | LAMP [25] | 98.85 | 1.71 | 85.13 ± 0.31 | 6.01 |
| | | SuRP [20] | 98.56 | / | 86.71 | 4.43 |
| | | Ours | 98.85 | 2.02 | **87.7 ± 0.24** | **3.44** |
| ImageNet | VGG-16-BN (Dense: 73.37) | LAMP [25] | 95.5 | 37.16 | 64.63 | 8.73 |
| | | Ours | 95.5 | 9.12 | **66.9** | 6.47 |
| | | Ours | 73.54 | 34.95 | **69.35** | **4.02** |
| | | LAMP [25] | 98.85 | 16.73 | 51.59 | 21.78 |
| | | Ours | 89.3 | 17.71 | **68.88** | **4.49** |
| | | Ours | 98.85 | 3.51 | **59.41** | **13.96** |
| | ResNet-50 (Dense: 76.14) | PGMPF [4] | / | 53.5 | 75.11 | 0.52 |
| | | Ours | 41 | 53.5 | **75.90** | **0.24** |
| | | LAMP [25] | 89.3 | 26.1 | 72.56 | 3.58 |
| | | Ours | 67.22 | 28.52 | **73.47** | **2.67** |
| | | LAMP [25] | 95.5 | 15.47 | 66.04 | 10.1 |
| | | Ours | 95.5 | 2.85 | **66.06** | **10.08** |
| | | Ours | 79.01 | 16.58 | **72.26** | **3.88** |

Table 1: Iterative pruning results on CIFAR-10 and ImageNet. "/" marks values not reported; the dense (unpruned) Top-1 accuracies of our baselines are given in parentheses.
architectures and image classification datasets. We consider three models on the CIFAR-10 dataset [21], _i.e._, VGG-16 following the adapted architecture in [25], ResNet-32 [17], and DenseNet-121 [19], while on the ImageNet dataset [8] we evaluate VGG-16 with BatchNorm [34] and ResNet-50 [17]. On CIFAR-10, following the baseline method [25], we perform _five independent trials_ for each method and report the averages and standard deviations over the trials. On the much larger ImageNet, we only perform one trial per method. For other implementation details, please refer to the supplementary material.
**Details of rate-distortion curve generation.** In the experiments, we need to generate a rate-distortion curve for every layer to enable sparsity optimization, where each point on a curve pairs a sparsity level with the model output distortion when that layer is pruned to that sparsity. In the non-data-free scheme, the curves are sampled on a calibration set randomly selected from the training dataset; it is also possible to make the process data-free by using synthesized data sampled from a certain distribution, _e.g._, the standard normal distribution. The calibration set size is set to \(1024\) samples for CIFAR-10 and \(256\) for ImageNet, respectively. However, rate-distortion curves obtained this way may be affected by real-world factors, resulting in noisy curves. We therefore design two strategies to refine the raw rate-distortion curves and better aid the subsequent optimization. Specifically, (1) **Worst-case sampling**: inspired by LAMP [25], we calculate the distortion as the _maximum_ squared-norm error among all calibration samples instead of the _MSE_ over the whole calibration set; (2) **Outlier filtering**: we treat local maxima that break the monotonicity of a curve as outlier noise and remove them, in order to facilitate Algorithm 1 and, in particular, Eq. (12). Ablation studies in Sec. 4.4 discuss the individual effects of these strategies.
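The curve-generation loop itself is simple to sketch, reusing the `prune_layer_` magnitude-pruning helper from the additivity sketch above; the exact prune-and-measure routine and the filtering rule are illustrative stand-ins for the strategies just described, not the paper's code.

```python
import copy
import torch

@torch.no_grad()
def rd_curve(model, layer, calib, sparsities, worst_case=True):
    # One point per sparsity level: prune only `layer`, then measure the
    # output error over the calibration set (max over samples if
    # worst-case sampling is enabled, mean squared error otherwise).
    refs = [model(x) for x in calib]
    curve = []
    for sp in sparsities:
        pruned = copy.deepcopy(model)
        prune_layer_(pruned, layer, sp)  # magnitude pruning, as sketched earlier
        errs = [(pruned(x) - r).pow(2).sum().item() for x, r in zip(calib, refs)]
        curve.append(max(errs) if worst_case else sum(errs) / len(errs))
    # Outlier filtering: replace local maxima that break monotonicity
    # by the average of their neighbours.
    for i in range(1, len(curve) - 1):
        if curve[i] > curve[i + 1]:
            curve[i] = 0.5 * (curve[i - 1] + curve[i + 1])
    return curve
```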
### Iterative Pruning Results
In the iterative pruning scheme, one starts with a pretrained full-capacity model. During the finetuning process of the pretrained model, we gradually prune out parameters from the model by a certain amount at each iterative stage.
Figure 2: Iterative pruning process of various classification models and datasets.
The different stages of iterative pruning yield a set of sparse models with gradually increasing sparsity and decreasing computational complexity (FLOPs). Following the iterative pruning settings in LAMP [25], we prune \(20\%\) of the remaining parameters from the model after each round of finetuning. The hyper-parameter setup of the finetuning is detailed in the supplementary material. Tab. 1 compares the model accuracies produced during the iterative pruning process by our method and by other pruning methods.
Given the non-standardized choice of CNN models in post-training pruning works, we examine as many models as appear in the literature and add them as baselines in our comparison, including Global [33], Uniform [45], Uniform+ [13], LAMP [25], and E-R ker. [10], an extended Erdos-Renyi method for CNN pruning in which layer-wise sparsity is selected by a closed-form criterion that depends only on the layer architecture (e.g., the numbers of input and output channels and the convolutional kernel sizes). Fig. 2 further shows the detailed iterative pruning procedures of the different methods, where the remaining FLOPs (X-axis) gradually decrease over the course of finetuning.
**Results on CIFAR.** From Tab. 1, we observe that our method consistently produces pruned models with higher test performance and a smaller test accuracy drop at the same computational complexity (FLOPs) compared to other methods. Fig. 2 further verifies that this observation holds throughout the pruning procedure. For example, for ResNet-32 on CIFAR-10, our method obtains a Top-1 accuracy of \(92.56\) on average with \(11.25\%\) remaining FLOPs, while the baseline [25] only reaches \(90.04\) at \(11\%\) FLOPs. When the remaining FLOPs are around \(3\%\), we even improve the accuracy by \(7.17\), _i.e._, only a \(3.16\) accuracy drop with only \(4.4\%\) surviving parameters. For VGG-16 on CIFAR-10, we observe similar results, where our method achieves the smallest accuracy drop among the various counterparts: _e.g._, when FLOPs are within the range of \(33\pm 2\%\), without the advanced soft gradient masking and weight updating strategies adopted in [4], ours achieves a \(-1.05\%\) Top-1 drop at \(35.49\%\) FLOPs, meaning that the pruned network performs \(1.05\%\) better than the unpruned one. PGMPF [4] achieves a higher accuracy score than us on the VGG-16 model with \(33\%\) remaining FLOPs, which was obtained from a higher-performance unpruned model, but it still underperforms us in terms of accuracy drop (Top-1 dropped by \(0.08\%\)).
**Results on ImageNet.** On the larger-scale ImageNet dataset, we observe similar behavior of our approach. For VGG-16-BN, we outperform the others in both the \(35\pm 2\%\) and \(16\pm 2\%\) FLOPs groups. Notably, when the model sparsity is as high as \(98.85\%\), _i.e._, only \(1.15\%\) of the parameters survive, our method still reaches \(59.41\%\) accuracy, while LAMP already drops to around \(52\%\). The same is observed on ResNet-50, where we outperform LAMP by a large margin in the \(6\%\) FLOPs group. From Fig. 2c, a minor observation is that although VGG-16-BN consistently attains higher test accuracy below \(50\%\) FLOPs, it performs slightly lower within the \(30\)-\(50\%\) FLOPs range before going up again in the following finetuning iterations. We speculate that VGG-16-BN is more sensitive to large structural changes under post-training pruning.
In all, for both datasets, we observe that our method generates higher-accuracy sparse models under either the same FLOPs or the same sparsity constraint.
### One-shot Pruning Results
In the one-shot pruning scheme, we directly prune the model to the target computation or parameter constraint, followed by a one-time finetuning. Tab. 2 summarizes the one-shot pruning results using various unstructured pruning algorithms. We carry out a comparison on ResNet-50 on ImageNet. The results verify that our method also fits the one-shot pruning scheme, with higher accuracy at \(34.5\%\) FLOPs than both baselines [25, 6].
### Zero-data Pruning
To evaluate whether our method is compatible with the zero-data pruning scheme, which promises better generalizability than standard pruning schemes that are
| Method | Sparsity (%) | Remaining FLOPs (%) | Top-1 (%) | Top-1 drop (%) |
|---|---|---|---|---|
| Unpruned | 0 | 100 | 76.14 | - |
| LAMP [25] | 64.5 | 55 | 75.43 | 0.71 |
| OTO [6] | 64.5 | 34.5 | 75.1 | 1.04 |
| Ours | 58 | 34.5 | **75.59** | **0.55** |

Table 2: One-shot pruning results of ResNet-50 on ImageNet.
| Method | Sparsity (%) | Remaining FLOPs (%) | Top-1 (%) | Top-1 drop (%) |
|---|---|---|---|---|
| Unpruned | 0 | 100 | 76.14 | - |
| [23] | 50 | / | 73.89 | 2.16 |
| LAMP [25] | 50 | 67.05 | 74.9 | 1.24 |
| Ours* | 50 | 42.48 | **75.13** | **1.01** |

Table 3: Zero-data one-shot pruning results of ResNet-50 on the ImageNet dataset. **Ours*** denotes the zero-data alternative of our method, using white-noise data to generate rate-distortion curves.
usually data dependent, we attempt to adapt our method to zero-data pruning by replacing the calibration image set sampled from the real test set with white noise (pixels in each color channel are independently generated from the distribution expected by the classification model, _e.g._, the standard normal distribution \(\mathcal{N}(0,1)\)).
Tab. 3 summarizes the results of the zero-data variant of our approach compared to the baseline [23] under the same data synthesis strategy (white noise). We also include LAMP [25] in the comparison since it requires no calibration set at all to obtain layerwise pruning thresholds. Our approach still achieves superior results in the zero-data scenario, with only a \(1.01\%\) performance drop. This is in line with our expectation, since our rate-distortion-theory-based algorithm does not depend on any specific input data distribution.
### Ablation Studies
Since our major contribution is the joint-optimization strategy, we first compare against the case without joint optimization, where the layer-wise sparsity is solved directly on the output features of each layer; the resulting performance is shown in Tab. 4. As indicated in the table, such single-layer optimization deteriorates performance on both tested models, showing that our joint-optimization strategy is the better choice.
We also evaluate the individual effectiveness of the aforementioned rate-distortion curve refinement strategies. We first perform an ablation on the CIFAR-10 dataset. From Tab. 5, we observe that at the same model sparsity of \(89\%\), which is relatively high for the one-shot pruning scheme, both strategies work positively for our approach. Therefore, we include both strategies in our main experiments. We observe the same on the ImageNet dataset, as shown in Tab. 6. In particular, outlier filtering brings slightly more improvement on both CIFAR-10 and ImageNet, whereas worst-case sampling makes no difference at this particular sparsity target.
### Other Discussions
There is also an interesting observation from Tab. 1: at the same model sparsity, our method consistently removes more FLOPs from the model. To better analyze this phenomenon, we take a closer look at the layerwise sparsity solutions given by the different approaches. As shown in Fig. 3, our method prunes more parameters from deeper layers than LAMP [25]. Since activations in deeper layers of CNNs usually have more channels and features than in shallow layers, pruning more parameters from deep layers removes more operations, resulting in fewer remaining FLOPs. Another observation from Fig. 3 is that both methods prune more parameters from the last layer of ResNet-32, the fully-connected layer, implying that the parameters of the last layer are highly redundant. Meanwhile, we observe that DenseNet-121 on CIFAR-10 does not display this phenomenon: our method reduces the same level of FLOPs as LAMP under the same sparsity. We elaborate on this in the supplementary material.
### Time Complexity
We provide an empirical analysis of the optimization time in Tab. 7. In practice, we use a ternary search algorithm to find the solution \(s_{i}^{j}\) of Eq. (12), which has logarithmic time complexity in the search range. On small datasets like CIFAR-10, with \(35\) layers, our method takes less than a second to calculate the layerwise sparsity,
| Arch | Method | Sparsity (%) | Top-1 (%) | Top-1 drop (%) |
|---|---|---|---|---|
| ResNet-50 | Ours | 58 | **75.59** | **0.55** |
| | Ours* | 60 | 74.89 | 1.45 |
| VGG-16-BN | Ours | 60 | **69.01** | **4.36** |
| | Ours* | 59 | 62.50 | 10.87 |

Table 4: Comparison of the joint-optimization objective and vanilla (single-layer) optimization (denoted by Ours*).
[Table 6: Different post-processing strategies of RD curves on ResNet-50 on ImageNet with the one-shot pruning scheme; test accuracy of the model **before** finetuning is reported. The table body could not be recovered from the source.]
| WCS | OF | Sparsity (%) | Top-1 (%) | Top-1 drop (%) |
|---|---|---|---|---|
| Unpruned | | 0 | 93.99 | - |
| | | 89.3 | 91.15 | 2.84 |
| ✓ | | 89.2 | 91.31 | 2.68 |
| | ✓ | 89 | 91.51 | 2.58 |
| ✓ | ✓ | 89.3 | **92.3** | **1.69** |

Table 5: Different post-processing strategies of RD curves on ResNet-32 on CIFAR-10 with the iterative pruning scheme. **WCS**: worst-case sampling, **OF**: outlier filtering.
while on the larger ImageNet, it still takes only a few seconds.
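The ternary search mentioned above can be sketched as follows; it assumes the bracketed objective \(k\mapsto g_{i-1}^{j-k}+\delta_{i}(k)\) is unimodal in \(k\), which the smoothed, monotone rate-distortion curves make plausible. This is an illustration, not the paper's implementation.

```python
def argmin_unimodal(f, lo, hi):
    # Integer ternary search for the minimiser of a unimodal f on [lo, hi],
    # using O(log(hi - lo)) evaluations instead of a full linear scan.
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) <= f(m2):
            hi = m2 - 1  # the minimiser cannot lie right of m2 - 1
        else:
            lo = m1 + 1  # the minimiser cannot lie left of m1 + 1
    return min(range(lo, hi + 1), key=f)
```

Plugged into the inner minimisation of Eq. (11), each state update then costs \(O(\log T)\) rather than \(O(T)\).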
We also analyze the curve-generation costs. For each layer, we traverse all calibration samples to calculate the output distortion at all sparsity levels and generate a rate-distortion curve. The cost of generating the rate-distortion curves is therefore \(O(lSN)\), where \(l\) is the number of layers, \(S\) is the number of sparsity levels (we set \(S=100\) in practice), and \(N\) is the size of the calibration set. We provide the actual time costs for two CIFAR-10 models in Tab. 8. In practice, we use an optimized dataloader and parallelize the curve generation across layers to cut down the inference time per sample.
### Analysis of Approximation Error
Given the local nature of the Taylor expansion, we expect an increasing discrepancy of the Taylor approximation under large distortion. We analyze the empirical approximation error in Fig. 4. The left panel visualizes the relation between the Taylor-approximated output distortion (X-axis) and the real output distortion (Y-axis); we notice that the data points lie very close to the diagonal. The right panel plots the approximation error at different sparsity levels; the approximation error inflates at large sparsities, _e.g._, \(>50\%\).
## 5 Conclusions
We have presented a new rate-distortion-based unstructured pruning criterion. We revealed the additivity of output distortion under unstructured pruning of CNN models, supported by theory and experiments. We exploited this property to reduce the NP-hard layerwise sparsity optimization problem to a fast pruning criterion with only \(O(l\times T^{2})\) complexity. Benefiting from the direct optimization of the output distortion, our criterion shows superiority over existing methods in various post-training pruning schemes. Our criterion prefers to prune deep and large layers, leading to significant model size and FLOPs reductions.
## Acknowledgement
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its funds (Project Numbers A1892b0026 and C211118009) and MTC Programmatic Funds (Grant No. M23L7b0021). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of A*STAR.
| Configuration | No. layers | Sparsity (%) | Time (s) |
|---|---|---|---|
| ResNet-32@CIFAR-10 | 35 | 20 | 0.46 ± 0.09 |
| ResNet-50@ImageNet | 54 | 50 | 2.08 ± 0.21 |

Table 7: Time spent on layerwise sparsity optimization by our method.
| Configuration | Curve (s) | Optimize (s) |
|---|---|---|
| ResNet-18@CIFAR-10 | 1052.64 | 0.84 |
| VGG-16@CIFAR-10 | 664.19 | 2.20 |

Table 8: Comparison of the time costs of RD curve generation and optimization.
Figure 4: Empirical Approximation Error Analysis.
Figure 3: Layer-wise sparsity statistics of ResNet-32 on CIFAR-10 of different methods during iterative pruning. Height of bars denotes the pruning rate with \(\{0.36,0.74,0.89,0.96,0.98\}\) model sparsities. |
2303.17025 | Deep convolutional neural networks to restore single-shot electron
microscopy images | State-of-the-art electron microscopes such as scanning electron microscopes
(SEM), scanning transmission electron microscopes (STEM) and transmission
electron microscopes (TEM) have become increasingly sophisticated. However, the
quality of experimental images is often hampered by stochastic and
deterministic distortions arising from the instrument or its environment. These
distortions can arise during any stage of the imaging process, including image
acquisition, transmission, or visualization. In this paper, we will discuss the
main sources of distortion in TEM and S(T)EM images, develop models to describe
them and propose a method to correct these distortions using a convolutional
neural network. We demonstrate the effectiveness of our approach on a variety
of experimental images and show that it can significantly improve the
signal-to-noise ratio resulting in an increase in the amount of quantitative
structural information that can be extracted from the image. Overall, our
findings provide a powerful framework for improving the quality of electron
microscopy images and advancing the field of structural analysis and
quantification in materials science and biology. | I. Lobato, T. Friedrich, S. Van Aert | 2023-03-29T21:09:05Z | http://arxiv.org/abs/2303.17025v1 | # Deep convolutional neural networks to restore single-shot electron microscopy images
###### Abstract
State-of-the-art electron microscopes such as scanning electron microscopes (SEM), scanning transmission electron microscopes (STEM) and transmission electron microscopes (TEM) have become increasingly sophisticated. However, the quality of experimental images is often hampered by stochastic and deterministic distortions arising from the instrument or its environment. These distortions can arise during any stage of the imaging process, including image acquisition, transmission, or visualization. In this paper, we will discuss the main sources of distortion in TEM and S(T)EM images, develop models to describe them and propose a method to correct these distortions using a convolutional neural network. We demonstrate the effectiveness of our approach on a variety of experimental images and show that it can significantly improve the signal-to-noise ratio resulting in an increase in the amount of quantitative structural information that can be extracted from the image. Overall, our findings provide a powerful framework for improving the quality of electron microscopy images and advancing the field of structural analysis and quantification in materials science and biology.
## Introduction
The quality of modern electron microscopes, such as scanning electron microscopes (SEM), scanning transmission electron microscopes (STEM), and transmission electron microscopes (TEM), has greatly improved. However, the quality of the experimental images produced by these instruments is often compromised by stochastic and deterministic distortions arising from the instrument or its environment [1, 2, 3]. These distortions can occur during the acquisition, transmission, or reproduction of the image. Despite technical improvements in the design of high-performance electron microscopes [1, 2, 3, 4], the presence of these distortions in the recorded images may hinder the extraction of quantitative information from the samples under study [5].
In TEM, images are acquired in a single shot using parallel acquisition. Here, the main sources of distortions are the detector noise, which is a combination of counting noise associated with the uncertainty of photon/electron detection, dark current noise resulting from statistical variation in the number of thermally generated electrons within the detector, and readout noise resulting from the electronics that amplifies and digitizes the charge signal. Other sources of distortions for TEM include X-ray noise, which is produced by X-rays that saturate one or more nearby pixels as they pass through the detector [6, 7], and dead pixel noise, which is caused by permanently damaged pixels on the sensor and often appears as black spots in the recorded images.
In S(T)EM, images are formed pixel by pixel by scanning a convergent electron beam across the sample and detecting the scattered, back-scattered or secondary electrons at each point. The main sources of distortions are the detector noise, which is a combination of shot noise hitting the scintillator, Gaussian noise resulting from the photomultiplier tube (PMT) [8], and readout noise from the electronics that amplifies and digitizes the electron signals. Unlike TEM imaging, the serial nature of SEM and STEM can introduce additional distortions into the resulting images due to time delays between measurements. At high doses, the main source of nonlinear distortion is the probe's fly-back time, where data collection pauses until scanning on the next line resumes. This produces a net two-dimensional random displacement of the pixel row known as horizontal and vertical scan distortion. These nonlinear distortions can often be corrected using iterative algorithms that require a series of images [9, 10] or a single image with a high-resolution periodic structure [11, 12]. Moreover, S(T)EM images obtained through high-speed scans (dwell time \(<1\mu s\)[13]) may display a non-uniform scan speed along individual scan lines resulting in a smearing effect that produces another type of nonlinear distortion. While these distortions can be partly compensated for periodic structures [13], they cannot be fully compensated for arbitrary specimens. Other types of distortion include row-line noise, which is caused by the detector's non-response over a few pixels, and X-ray noise, which is produced by X-rays hitting
the detector. These distortions can reduce the signal-to-noise ratio (SNR) and limit the amount of retrievable information about the electron-specimen interaction. Moreover, they can cause translation, shear, rotation, expansion, or contraction of the entire image. Although increasing the beam current or acquisition time can improve the SNR, it can also increase other types of distortion, such as shear or rotation. Moreover, it is unsuitable for beam-sensitive materials or for dynamic imaging requiring a short exposure time for each frame. Lowering the electron dose can also decrease the quality of the recorded images and limit the reliability of structural information extracted from them.
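For intuition, the stochastic part of this S(T)EM distortion model can be sketched in a few lines of NumPy. The routine below is a simplified illustration only (dose is treated as electrons per pixel, and only scan-line jitter, counting noise, and Gaussian detector noise are modelled), not the full noise model developed later in this paper.

```python
import numpy as np

def distort_stem(img, dose=5e3, sigma_read=0.01, jitter_px=1.5, rng=None):
    """Toy S(T)EM distortion: per-row scan-line displacement, Poisson
    counting (shot) noise for a given dose, and Gaussian PMT/readout noise."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(img.shape, dtype=float)
    cols = np.arange(img.shape[1])
    # Scan-line distortion: random horizontal shift of each pixel row.
    for r in range(img.shape[0]):
        shift = jitter_px * rng.standard_normal()
        out[r] = np.interp(cols, cols + shift, img[r])
    # Counting (shot) noise on the detected signal.
    out = rng.poisson(np.clip(out, 0.0, None) * dose) / dose
    # Gaussian PMT / readout noise.
    return out + sigma_read * rng.standard_normal(out.shape)
```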
Various algorithms have been developed to improve the SNR of electron microscopy (EM) images, including spatial filters such as median filters, Gaussian filters, Bragg filters, and Wiener filters [14, 15, 16]. More complex methods for denoising EM images include non-linear iterative Wiener filtering algorithms [17] and block matching [18, 19] although they can be computationally intensive. Another option for improving the SNR is to average a series of registered frames, using either rigid [20] or non-rigid [9, 10] registration methods. However, these methods require a high overall electron dose and repeated recordings of the material. In addition, EM images often exhibit a combination of different types of distortions due to several factors including the instrument environment, scan instabilities, scan speed, and dose. Therefore, there is a need for image restoration algorithms specifically designed for single-shot EM images.
In recent years, machine learning methods based on artificial neural networks, particularly convolutional neural networks (CNNs), have become the state-of-the-art approach for various tasks such as image classification [21], image segmentation [22], image denoising [23], image restoration [24], image deconvolution [25], and image super-resolution [26]. These methods, which involve adjusting the weight connections between neurons during training, have been made possible by the development of techniques such as the Rectified Linear Unit (ReLU) [27], dropout regularization [28], batch normalization [29], and improvements in GPU technology. While CNN-based approaches have achieved strong performance in denoising specific types of EM images, they are limited by their reliance on small simulated or experimental datasets and incomplete modelling of the various types of noise present in experimental EM data [30, 31, 32, 33, 34]. To the best of our knowledge, there is currently no algorithm that can effectively compensate for all types of distortion in a single-shot EM image without requiring retraining and regardless of the sample being studied.
In this study, we use a machine learning approach to restore EM images using a Concatenated Grouped Residual Dense Network (CGRDN), a combination of loss functions, and a generative adversarial network (GAN) [35]. This approach not only learns an end-to-end mapping between distorted and undistorted EM images, but also a loss function to train this mapping. Since we only have access to distorted data experimentally, we generate pairs of undistorted and distorted EM images by applying all distortions that can be corrected in single-shot EM images. By training the neural network to produce an undistorted output regardless of the level and combination of distortions in the input, it implicitly learns to detect and repair the distortions. This approach demonstrates impressive results for restoring both periodic and non-periodic specimens with different combinations of severe distortions. Importantly, the results show that both peak positions and intensities in atomic-resolution images can be reliably determined. In addition, the restoration time is only of the order of seconds for a 2k\(\times\)2k image.
## Results and Discussion
Electron microscopy techniques, namely SEM, STEM, and TEM, exhibit distinct sources of noise and variations in their features at both low and high resolution. Hence, we have trained our network architecture on six diverse datasets comprising low-resolution (LR) and high-resolution (HR) images for each microscopy modality. Our findings indicate that the best performance is achieved by training separate networks for LR and HR features, particularly at low doses, where the network can utilize the specific feature distribution acquired during the training phase. Detailed implementation and training information is provided in the supplementary material. Our study mainly focuses on HR-STEM, a widely used technique for the analysis and quantification of atomic structures.
### Ablation study and comparison to state-of-the-art algorithms
To improve the performance of a neural network, it is important to choose the right values for the hyperparameters. These values can affect the network's ability to minimize errors, run quickly, and fit within certain hardware constraints. In our case, we want the network to be able to process images of size \(1024\times 1024\) in less than one second, and we want to be able to run it on an Nvidia Volta GPU card with 12GB of memory. To find the best hyperparameters for our needs, we perform an ablation study. This involves varying the network architecture and some of its hyperparameters and measuring their effect on the \(\mathcal{L}_{1}\) error (see "Loss function" section). Since our hardware constraints limit the maximum number of residual dense blocks (RDB), grouped residual dense blocks (GRDB), and batch size to 4, we will keep these values constant at their maximum value. All other parameters of our generator are defined in the "Network architecture" section and will be kept constant unless otherwise specified. A grid search is used to find the optimal values for the learning rate and loss weighting parameters.
In the first part of this ablation study, we focus on the performance of the network when the number of convolutional layers \(n_{lay}\) within the RDB increases. Figure 1 shows the reduction of the \(\mathcal{L}_{1}\) error when the number of layers and network parameters
increases. This is expected since a deeper network can improve the performance of the model by increasing the number of parameters and allowing the model to learn more complex features.
We would like to highlight that our hardware constraints only allow us to use a maximum of 9 layers for \(n_{lay}\). Nonetheless, we observed that the \(\mathcal{L}_{1}\) error starts to plateau at \(n_{lay}=9\), indicating that further increasing the number of layers may not lead to substantial performance improvements.
Furthermore, we compared the performance of three different image denoising architectures: the Grouped Residual Dense Network (GRDN) [23], the Multi-resolution U-Net (MR-UNET) [31], and our proposed architecture, CGRDN. We assessed the performance of these architectures using the well-known peak signal-to-noise ratio (PSNR), which is defined as:
\[PSNR=10\log_{10}\left(\frac{MAX^{2}}{MSE}\right), \tag{1}\]
where \(MAX\) denotes the maximum possible pixel value of the images, and \(MSE\) represents the mean squared error between the distorted and undistorted images. However, it is important to note that PSNR only measures the pixel-wise differences between the original and reconstructed images and does not account for other crucial factors such as visual perception and structural similarity. The GRDN architecture was previously ranked first in terms of PSNR and structural similarity index in the NTIRE2019 Image Denoising Challenge. The MR-UNET extends the functionality of the decoder in a U-Net [36] by adding additional convolutional layers to the hidden layers in order to produce coarse outputs that match low-frequency components. The results of our comparison are summarized in Table 1, which lists the number of parameters and the resulting PSNR for each architecture and shows that GRDN and CGRDN are more efficient architectures: they require approximately 7 times fewer parameters than MR-UNET while still achieving a higher PSNR. It is interesting to note that our CGRDN architecture achieved a higher PSNR than GRDN while requiring only an additional 20,000 parameters.
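For reference, Eq. (1) amounts to the following few lines of NumPy; by default `max_val` is taken as the peak of the reference image, an assumption of this sketch.

```python
import numpy as np

def psnr(ref, test, max_val=None):
    # PSNR as in Eq. (1); max_val defaults to the peak of the reference.
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    max_val = ref.max() if max_val is None else max_val
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```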
We also compared the performance of our image restoration network to the Block-matching and 3D filtering (BM3D) [18] algorithm in terms of PSNR. BM3D is a widely used technique for removing noise from images through a process called denoising. It segments the image into overlapping blocks and identifies similar patterns among them to estimate the original image and reduce noise. BM3D has demonstrated effectiveness in denoising images with high levels of noise and serves as a benchmark for image denoising algorithms in image processing. The average PSNR of BM3D and our network on the validation dataset was 30.45 dB and 36.96 dB, respectively. These results demonstrate that our network outperforms BM3D by
| Method | # parameters | PSNR |
|---|---|---|
| MR-UNET [31] | 51.7M | 36.70 dB |
| GRDN [23] | 7.02M | 36.90 dB |
| CGRDN (this work) | 7.04M | 36.96 dB |

Table 1: PSNR denoising performance comparison of different network architectures.
Figure 1: Ablation study of the CGRDN architecture based on \(\mathcal{L}_{1}\) metric as a function of the size of the model. The number of layers \(n_{lay}\) is indicated next to each data point.
a significant margin of 6.51 dB. Figure 2 illustrates the performance of our network and BM3D on two randomly generated, high-resolution STEM images with standard experimental noise values. These images were simulated using the procedure outlined in the "Data generation" section. The figure displays the original distorted images (a)&(e) and undistorted images (d)&(h), as well as the denoised output from BM3D (b)&(f) and the restored output from our network (c)&(g).
These results demonstrate that our image restoration network significantly enhances image quality, as measured by PSNR. However, it is noteworthy that PSNR is not always a reliable indicator of image quality since it merely measures pixel-wise differences between original and reconstructed images and overlooks other critical factors such as visual perception and structural similarity. Hence, it is crucial to employ various image quality metrics, along with PSNR, to obtain a more comprehensive evaluation of the performance of image restoration techniques.
### Atomic structure quantification
While the CNN was trained to restore images from a wide variety of imaging modes, STEM is of particular interest since it is routinely used for the quantification of atomic structures [37, 38, 39] in terms of atomic column positions and their corresponding scattering cross sections (SCS), which allows us to study the impact of the proposed image restoration method quantitatively. The probe-position-integrated scattering cross section (SCS for short) in atomic-resolution STEM images is defined as the integrated intensity of an atomic column, which is typically modelled as a 2D Gaussian function (a minimal fitting sketch is given after the list below). Since the SCS scales with the atomic number as \(\approx Z^{1.7}\) [40, 41] and mostly increases monotonically with thickness for large collection angles, it is routinely used for atom counting. The effect of image restoration on the quantitative assessment of STEM images is evaluated in three complementary approaches, using MULTEM [42, 43] to create multislice simulations and the StatSTEM software for all model fittings [39]. All evaluations are based on 100 distortion/noise realisations for each dose setting.
1. We demonstrate the effect of image denoising in an idealised setup, in analogy to the study conducted in reference [39], where the precision of the determination of the location and SCS of an atomic column was determined over a wide range of signal-to-noise ratios (SNRs) using pure Poisson noise. This setting allows a comparison to the theoretical limits of variance for unbiased estimators, the so-called Cramér-Rao lower bounds (CRLBs). The simulated STEM dataset is a bulk Pt crystal in [001] orientation and contains STEM images over 75 depth sections with unit-cell spacing in the z-direction.
2. A more practical example, which includes crystal irregularities, is chosen to determine the impact of a combination of noise, scan-line distortions, and fast-scan distortion. In this case, we evaluate the mean absolute error (MAE) of the atomic column positions and the mean absolute percentage error (MPE) of the SCSs of atomic columns, as well as the variance of these measurements. This serves in particular to show that the approach does not rely on structural periodicity in atomic-resolution STEM images.

Figure 2: CNN restoration results compared with BM3D in terms of PSNR for two random simulated STEM specimens using standard experimental noise values.
3. For a simulated Pt-nanoparticle it is demonstrated that distortion correction yields not only a more accurate localisation of atomic columns but also enables more reliable atom counting.
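As mentioned above, the SCS is obtained by fitting a 2D Gaussian to each atomic column. A minimal version of such a fit can be sketched with SciPy; it assumes an isotropic Gaussian on a constant background for a single, pre-localised column, whereas StatSTEM's actual fitting procedure is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, amp, sigma, bg):
    # Isotropic 2D Gaussian on a constant background.
    x, y = coords
    return bg + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def fit_column(patch, px_size):
    """Fit one atomic column in a small image patch.  The SCS is the
    integrated volume of the fitted Gaussian, amp * 2*pi*sigma^2 (px^2),
    converted to physical units with the pixel size."""
    y, x = np.mgrid[: patch.shape[0], : patch.shape[1]]
    p0 = (patch.shape[1] / 2, patch.shape[0] / 2,
          float(patch.max() - patch.min()), 2.0, float(patch.min()))
    (x0, y0, amp, sigma, bg), _ = curve_fit(
        gauss2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
    scs = amp * 2.0 * np.pi * sigma ** 2 * px_size ** 2
    return (x0 * px_size, y0 * px_size), scs
```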
The simulation settings for all samples are tabulated in the supplementary information. The results of the first study are shown in figure 3. Examples of the underlying STEM images are given for the extremes of SNR (i.e. smallest thickness and lowest dose, and largest thickness and highest dose) for raw and restored images in panels (e), (f), (g) and (h). Comparing figure 3(e) and (f), it can be seen visually that even at a very low dose the CNN can recover the underlying structure faithfully. This effect is measurable both in the precision with which atomic columns can be located and in the SCS measurement precision, and it is particularly pronounced in the low-dose range, as illustrated in figure 3(a) and (b). As the dose increases, the precision of the structural measurements of raw and restored data eventually converges (figure 3(c-d)). An interesting observation is that the theoretical precision limit given by the CRLB can be overcome by employing image restoration. This makes a strong point for using image restoration for quantitative studies, like atom counting or strain measurements, in general.
The restoration results in the first example arguably benefit from the underlying perfect crystal symmetry, which is why we also test the CNN on imperfect structures. The Pt-bulk model depicted in figure 4(a) is in \([112]\) zone axis orientation, six unit cells thick, and contains a unit edge dislocation of Burgers vector \(b=1/2[110]\) in the \((111)\) glide plane, a dislocation commonly observed in fcc metals [44]. The structure was created using the Atoms software, which determines atom positions
Figure 3: Precision of atomic column position and SCS-measurements over a series of Pt-bulk samples with a thickness varying from 2-75 atoms together with their 95% confidence intervals. (a) Precision of the atomic column locations for a dose of 5e2 \(e/\AA^{2}\). (b) Precision of SCS measurements for a dose of 5e2 \(e/\AA^{2}\). (c) Precision of atomic column locations for a dose of 5e4 \(e/\AA^{2}\). (d) Precision of SCS measurements for a dose of 5e4 \(e/\AA^{2}\). (e) Example of a raw STEM image at z=2 and dose=5e2 \(e/\AA^{2}\). (f) Example of a restored STEM image at z=2 and dose=5e2 \(e/\AA^{2}\). (g) Example of a raw STEM image at z=75 and dose=5e4 \(e/\AA^{2}\). (h) Example of a restored STEM image at z=75 and dose=5e4\(e/\AA^{2}\).
corresponding to the displacement fields predicted by the elastic theory of dislocations [45]. The simulated HAADF STEM images were subjected to varying noise levels from \(5e2~{}e/\AA^{2}\) to \(5e4~{}e/\AA^{2}\), and further corrupted by scan-line distortions as outlined in the "S(T)EM noise model" section. Example reconstructions for raw images at doses of \(5e2~{}e/\AA^{2}\) and \(5e4~{}e/\AA^{2}\) (figure 4(b) and (c)) are shown in figure 4(d) and (e), respectively. In the low-dose raw image, individual atomic columns are hardly recognisable. Without prior knowledge of the atomic column positions, any attempt at model fitting would first have to overcome the challenge of reliable peak finding, a factor not considered here. The reconstruction of this image (figure 4(d)), on the other hand, shows very clear peaks. A Burgers circuit is superimposed on the image to highlight that, despite the poor separation of columns in the raw image, the dislocation with its correct Burgers vector \(b\) is maintained. This means that the structure as a whole is retrieved correctly, although the individual column positions may not be fully accurate, as can be seen in the mean absolute position error of the columns around the center of the dislocation (columns within the red circle in figure 4(a)) for low doses, shown in figure 4(f). However, the error drops rapidly with increasing dose and shows a clear improvement over raw images. The position accuracy is therefore not only a result of denoising but also of the accurate correction of scan-line and fast-scan distortions. The comparatively high accuracy of the raw-image fitting at low doses can be attributed to the fact that correct initial column positions are given to the fitting procedure. Since the columns can hardly be located in the noisy images, the fitting algorithm on average does not move the position far from this initial guess. The CNN, on the other hand, reconstructs a clearly visible atomic column, but the available information in the underlying image is insufficient for accurate positioning. However, the proper retrieval of the dislocated atomic column at higher doses shows that the CNN is not just picking up on periodicity by default, but faithfully recovers the atomic structure even in the presence of non-periodic features in atomic-resolution STEM images.
The SCS measurements also gain accuracy through the restoration, which translates directly into improvements for atom counting studies. An example of such an atom counting scenario is presented in figure 5. These results were obtained from a simulated spherical Pt nanoparticle with a diameter of 11 unit cells in [100] zone axis orientation under the same distortion and noise parameters as given in the previous example. Atom counts were obtained by matching retrieved SCS values against simulated library values [46]. The improvement in column position measurements over all dose settings again indicates the proper correction of scan-line and fast-scan distortions. The improvement of SCS measurement accuracy, especially under low-dose conditions, greatly decreases the chance of miscounting atoms in the structure, which in turn may be very beneficial, e.g., for the reconstruction of 3D information from atom counts [47, 48].
### Experimental image restorations
One of the main advantages of our image restoration method is that the training data is generated using realistic physical models of the noise found in various microscopy modalities, as well as an appropriate range of values for the noise model parameters, as detailed in the "Methods" section. This methodology allows for the direct application of our network to experimental data, without requiring additional training for a particular specimen or microscope setting. Figure 6 illustrates the effectiveness of our approach on diverse types of random experimental microscopy images. The top row of this figure
Figure 4: (a) Schematic of the Pt structure in [112] zone axis with a unit edge dislocation of Burgers vector \(b=1/2[110]\) in the \((111)\) glide plane. (b) Corrupted raw HAADF STEM image with a dose of \(5e2e/\AA^{2}\). (c) Corrupted raw image with a dose of \(5e5e/\AA^{2}\). (d) Restored image with a dose of \(5e2e/\AA^{2}\). (e) Restored image with a dose of \(5e5e/\AA^{2}\). (f) Quantification results for the atomic column positions and scattering cross sections of the atomic columns around the center of the edge dislocation (marked with red circles in panel (a)).
shows raw experimental images for HR-STEM, LR-STEM, HR-TEM, LR-TEM, HR-SEM, and LR-SEM. The bottom row shows the corresponding restored versions of these images.
These results show that the trained networks have excellent performance on experimental data and can effectively handle a wide range of microscopy images with varying resolution and noise levels. It is important to note that in this study, "high resolution" refers to images with round and symmetrical features, while "low resolution" refers to images with a variety of different features. Additional examples of restored experimental images for each microscopy modality can be found in the github repository [https://github.com/Ivanlh20/r_em](https://github.com/Ivanlh20/r_em).
The importance of using realistic physical models of the noise to generate distorted data, along with selecting the correct range of values for the noise model parameters, is demonstrated in Figure 7. This figure illustrates how these factors can impact the accuracy of the restored image. Figures 7(a) and (b) show two experimental STEM images that were acquired using an FEI Titan\({}^{3}\)™ S/TEM microscope. The images were obtained using fast scanning with dwell times of \(0.2\mu s\) and \(0.05\mu s\), respectively. The importance of accurately modelling fast scan distortion is evident from figures 7(f) and (g). For these figures, our network architecture was trained using a fast-scan distortion model that was not sufficient to completely compensate for the spread of pixel intensities along the scanning direction (see Equation 48 in the "S(T)EM noise model" section). If the dwell time decreases, these image artifacts become more pronounced, as shown in figure 7(g). While the manufacturer recommends using dwell times larger than \(0.5\mu s\) to avoid image artifacts, correctly modelling fast scan distortion allows us to fully compensate for these artifacts, as shown in figures 7(k) and (l). The study of beam-sensitive materials and dynamic imaging will greatly benefit from the compensation of this distortion. Figure 7(c) shows a registered STEM image that contains interpolation noise. The interpolation process changes the dominant noise distribution, which can impact the restoration process, especially at
Figure 5: Quantification results for a spherical Pt nanoparticle with a diameter of 11 unit cells in [100] orientation. The values are based on all 333 atomic columns for 100 noise realisations. (a) The mean absolute error of the estimated atomic column positions. (b) The mean absolute percentage error of the fitted scattering cross sections, which are being used to estimate atom counts in each column. (c) The fraction of atomic columns with correctly estimated atom counts.
Figure 6: Experimental image restoration for various microscopy modalities. The top row illustrates the raw experimental images, while the bottom row displays the restored versions. Images (a), (b), (c), and (d) were obtained from reference [49], and images (e) and (f) were sourced from reference [50].
low doses, as shown in figure 7(h), where some atomic columns appear blurred. However, this issue can be addressed by including this type of noise in the training dataset, as explained in the "Methods" section. The effect of including this noise in the training dataset can be seen in figure 7(m), where all atomic columns become clearly visible. Figure 7(d) exhibits a STEM image with strong Y-jitter distortion. The impact of an incorrect range of values for this distortion during data generation can be seen in the restored image in figure 7(i), where some atomic columns appear split. After retraining the network with newly generated data containing the proper range of Y-jitter distortion, it correctly compensates for this image artifact, as shown in figure 7(n). In figure 7(e), an experimental STEM image of a nanoparticle taken using a gas cell holder is shown [51]. The dominant sources of noise in this image are detector noise and fast scan noise. Figure 7(j) shows a restored STEM image produced by our network architecture trained on a dataset generated with Poisson noise as the only source of STEM detector noise (as described by Equation 45 in the "S(T)EM noise model" section). This restored image exhibits strong artifacts despite using an accurate model for fast scan noise (as described by Equation 47 in the "S(T)EM noise model" section). After retraining our network architecture with a new dataset that includes the correct STEM detector noise (as described by Equation 46 in the "S(T)EM noise model" section), the restored image in figure 7(o) shows a significant reduction in artifacts. Nonetheless, it is worth mentioning that some of the remaining artifacts in the image could be attributed to other sources of distortion not accounted for in our data modelling, such as the gas holder effect, charging artifacts, and residual electronic noise.
Another example that highlights the importance of properly modelling noise and distortion sources can be seen in Figure 8. In this figure, we compare the reconstruction performance of our CNN, AtomSegNet [33], and Noise2Void-NN (N2V) [53], which was retrained on the presented experimental image itself. The sample is a \(BaHfO_{3}\) nanoparticle (region 4 in figure 8) embedded in a superconducting \(REBa_{2}Cu_{3}O_{7-\delta}\) (REBCO) matrix [54, 55] (region 2), which was grown on a \(SrTiO_{3}\) substrate (region 1). While all three networks successfully remove the noise from the image, there are notable differences in the reconstruction results. In region 1, the N2V reconstruction recovers all the weaker intensities of the \(Ti+O\) columns to some degree, which
Figure 7: Raw STEM images alongside the results of a restoration process employing inaccurate and accurate models of the noise. The top row shows the original STEM images, while the second and third rows show the restored versions of the images trained with distorted data based on inaccurate and accurate noise models, respectively. Images (a)-(c) were obtained from our experimental datasets, whereas (d) and (e) were obtained from references [52] and [51], respectively.
is not the case for the AtomSegNet reconstruction, where some of the columns blur or even disappear. Our CNN reliably recovers all atomic columns with superior contrast compared to the other two methods. Similar improvements are evident in region 2, but most notably in region 3. This region at the top of the image is also degraded, presumably by either FIB damage or carbon contamination. In both the N2V and AtomSegNet reconstructions, features tend to blur into diagonal streaks, while our CNN recovers clearly distinguishable atomic columns; given that the \(BaHfO_{3}\) nanoparticle grew epitaxially on the \(SrTiO_{3}\) substrate, this is indeed what would be expected [56]. Considering that the N2V network is a generic denoising network, its results are quite remarkable, although the additional training step is somewhat inconvenient from a user perspective.
However, this example illustrates that the CNN presented in this work benefits not only from the latest advances in deep learning, but also from the development of accurate, physically meaningful models of all distortions specific to HAADF-STEM. The CNN is shown to be accurate not only in terms of perceived contrast enhancement, but also in a quantitative way, which boosts the accuracy and precision of atomic structure determination in ADF-STEM studies.
Figure 8: Comparison of different CNN-restoration approaches on an experimental HAADF-STEM dataset of a \(BaHfO_{3}\) nanoparticle (4) embedded in a superconducting \(REBa_{2}Cu_{3}O_{7-\delta}\) (REBCO) matrix (2), which was epitaxially grown on a \(SrTiO_{3}\) substrate (1). Images were acquired on a non-probe-corrected Titan microscope at 300 keV at KIT Karlsruhe. The data is described in detail in references [54] and [55].
## Methods
In single-shot EM image restoration, the goal is to estimate an undistorted image \(y\) from a distorted image \(x\). To achieve this, we train a generator \(G\) using a deep neural network approach, which learns to estimate the corresponding undistorted image \(y\) for a given input \(x\). During the training procedure, a loss function is minimised to evaluate the quality of the results.
Traditionally, pixel-wise losses such as \(\mathcal{L}_{1}\) or \(\mathcal{L}_{2}\) have been used to obtain quantitative results for the image restoration problem [57]. However, these losses often lead to blurred images that do not look realistic. To address this, we propose a conditional generative adversarial network (cGAN) that trains both a generator and a discriminator. The generator \(G\) maps the distorted image \(x\) to the undistorted image \(y_{g}=G(x)\), and the discriminator is trained to differentiate between real and generated images [58]. We use pixel-wise losses to ensure quantitative results while restricting the GAN discriminator to model high-frequency details, resulting in sharper and more realistic restored images.
Our training is supervised, which requires input pairs of distorted and undistorted EM images. In practice, however, we only have access to distorted EM data. The problem can be partially addressed by collecting time-series EM images and using an averaging procedure based on rigid and non-rigid registration to generate an undistorted image. However, the combination of high-speed scans, jitter, and low dose leads to highly correlated distortions [13]. Furthermore, long exposure to the electron beam can result in charging, beam damage, atom hopping and rotation of the specimen under study, which can further hamper the averaging procedure. Therefore, the only solution is to train the GAN using synthetic pairs of undistorted/distorted EM images.
### Network architecture
A GAN [59] is a powerful framework that encourages predictions to be realistic and thus to be close to the undistorted data distribution. A GAN consists of a generator (G) and a discriminator (D) playing an adversarial game. The generator learns to produce output that looks realistic to the discriminator, while the discriminator learns to distinguish between real and generated data. The models are trained together in an adversarial manner such that improvements in the discriminator come at the cost of a reduced capability of the generator and vice versa. A conditional GAN additionally involves conditioning data, which is fed to the generator and/or the discriminator [35]. The generator and discriminator architectures proposed here are adapted from those described in [60] and [58], respectively. The details of these architectures are discussed in the following sections.
**Generator architecture**
Our generator architecture, called Concatenated Grouped Residual Dense Network (CGRDN), is shown in Fig. 9. This network architecture is an extension of the GRDN for image denoising [23], which was ranked first for real image denoising in terms of the PSNR and the structural similarity index measure in the NTIRE2019 Image Denoising Challenge [61]. The grouped residual dense block (GRDB) architecture is shown in Fig. 9(b). Its building module is the residual dense block (RDB) [60], shown in Fig. 9(c). The original GRDN architecture can be conceptually divided into three parts. The first part consists of a convolutional layer followed by a downsampling layer based on a convolutional stride, the middle part is built by cascading GRDBs, and the last part consists of an upsampling layer based on transposed convolution followed by a convolutional block attention module (CBAM) [62] and a convolutional layer. The GRDN also includes a global residual connection between the input and the last convolutional layer. In the original version of the GRDN [23], residual connections are applied at three different levels (a global residual connection, a semi-global residual connection in each GRDB, and a local residual connection in each RDB). In the version submitted to the NTIRE2019 Image Denoising Challenge [61], however, residual connections for every 2 GRDBs were included.
Although it has been demonstrated that an architecture developed for a certain image restoration task can also perform well for other restoration tasks [60, 63, 58, 64], the optimal architecture for a given task remains data dependent. When applied to EM data, we found that two modifications of the GRDN are necessary in order to best handle the nature of our data, which involves different types and levels of distortions with high correlation between pixels:
1. The cascading of the GRDN is replaced by feature concatenation, feature fusion, and a semiglobal residual connection. This allows us to exploit hierarchical features in a global way, which is important for highly correlated pixels that extend over a large area of the image.
2. The CBAM, which is included in [60], is removed from our network. The reason for this is the use of large image sizes (256x256) for training, which reduces its gain [23].
**Discriminator architecture**
The purpose of the discriminator network is to judge the quality of the output data resulting from the generator network. For our discriminator, we use the 70x70 convolutional patch discriminator described in [58] with some minor modifications. The zero-padding layers were removed and batch normalization layers [29] were replaced by instance normalization layers (IN) [65]. Figure 10 shows the structure of the discriminator network. The result of the network is the non-transformed output
\(C(y)\) or \(C(y_{g})\) of dimensions \(32x32\). The discriminator architecture shown in Fig. 10 has the benefits that it is fully convolutional and that it only penalizes structure at the scale of image patches. Furthermore, we enhance our discriminator based on the relativistic GAN, which has been shown to improve the data quality and stability of GANs at no computational cost [66]. Different from the standard discriminator, which estimates the probability that input data is real, a relativistic discriminator predicts the probability that real data \(y\) is relatively more realistic than generated data \(y_{g}=G(x)\). If we denote our relativistic average patch discriminator by \(D_{Rap}\), then its output can be written as:
\[D_{Rap}\left(y,y_{g}\right)= \sigma\left(C(y)-\mathbb{E}_{y_{g}}\left\{C(y_{g})\right\}\right) \tag{2}\] \[D_{Rap}\left(y_{g},y\right)= \sigma\left(C(y_{g})-\mathbb{E}_{y}\left\{C(y)\right\}\right) \tag{3}\]
where \(\sigma\) is the sigmoid function and \(\mathbb{E}_{x_{1},...,x_{n}}\left\{.\right\}\) is an operator representing the expectation value computed on the variables \(x_{1},...x_{n}\). In the next section, these functions will be used in the definition of the loss functions.
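As an illustration, the two outputs above reduce to a few tensor operations. The following PyTorch sketch assumes that the expectation operator \(\mathbb{E}\{\cdot\}\) is approximated by the batch mean, the usual practice for relativistic average GANs:

```python
import torch

def relativistic_avg(c_real: torch.Tensor, c_fake: torch.Tensor):
    """Relativistic average discriminator outputs of Eqs. (2)-(3).

    c_real, c_fake: non-transformed 32x32 patch outputs C(y) and C(y_g),
    shape (batch, 1, 32, 32); the batch mean approximates E{.}.
    """
    d_real = torch.sigmoid(c_real - c_fake.mean(dim=0, keepdim=True))  # Eq. (2)
    d_fake = torch.sigmoid(c_fake - c_real.mean(dim=0, keepdim=True))  # Eq. (3)
    return d_real, d_fake
```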
### Loss function
The loss function is the effective driver of the network's learning. Its goal is to map a set of parameter values of the network onto a scalar value, which allows candidate solutions to be ranked and compared. In our case, the discriminator and adversarial losses are based on the relativistic average GAN loss defined in [66]. We design our generator loss function as a sum of different contributions in such a manner that it keeps the quantitative information of the image at the pixel level and produces perceptually correct and realistic images. The different contributions of these loss functions are described in the following sections.
\(\mathcal{L}_{1}\) **loss**
Pixel-wise losses are advantageous for keeping quantitative information of the ground truth image. In this work, we used the \(\mathcal{L}_{1}\)
Figure 10: Patch discriminator architecture.
Figure 9: Concatenated Grouped Residual Dense Network (CGRDN) architecture for EM image restoration. (a) Overall architecture, (b) GRDB architecture used in (a), (c) RDB architecture used in (b).
loss, which, compared to the \(\mathcal{L}_{2}\) loss, yields less blurred results [57]. The \(\mathcal{L}_{1}\) loss can be written as:
\[\mathcal{L}_{1} = \mathbb{E}_{y,y_{g}}\left\{w_{y}\left\|y-y_{g}\right\|\right\}, \tag{4}\] \[w_{y} = 1/\max\left(\sigma_{\min},\text{Std}_{y}\left\{y\right\}\right) \tag{5}\]
where \(w_{y}\) is a weighting factor that gives equal importance to each example regardless of its contrast, \(\sigma_{\min}\) is a small value to limit the maximum scaling factor, and \(\text{Std}_{x_{1},...x_{n}}\left\{.\right\}\) is an operator that represents the standard deviation calculated on the variables \(x_{1},...x_{n}\).
\(\mathcal{L}_{2}\) **loss**
Due to the design of our architecture, which learns the residual difference between the distorted and undistorted image, and the fact that distorted images can contain a few outliers in the distribution of pixel intensities (i.e. X-rays hitting the EM detector, saturation of the detector, low dose and dead pixels), the output of the generator will show a strong correlation at those pixel positions. For this reason, we also use the \(\mathcal{L}_{2}\) loss, which strongly penalizes outliers:
\[\mathcal{L}_{2}=\mathbb{E}_{y,y_{g}}\left\{w_{y}\left\|y-y_{g}\right\|^{2}\right\} \tag{6}\]
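A compact sketch of Eqs. (4)-(6), assuming image tensors of shape (batch, 1, H, W) and interpreting \(\text{Std}_{y}\left\{y\right\}\) as a per-example standard deviation; the default value of \(\sigma_{\min}\) is an illustrative choice:

```python
import torch

def weighted_l1_l2(y, y_g, sigma_min=1e-3):
    """Contrast-weighted pixel losses of Eqs. (4)-(6) for tensors (B, 1, H, W)."""
    std = y.flatten(1).std(dim=1).clamp(min=sigma_min)  # Std_y{y} per example
    w = (1.0 / std).view(-1, 1, 1, 1)                   # w_y of Eq. (5)
    diff = y - y_g
    l1 = (w * diff.abs()).mean()                        # Eq. (4)
    l2 = (w * diff.pow(2)).mean()                       # Eq. (6)
    return l1, l2
```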
**Multi-local whitening transform loss**
Local contrast normalisation (LCN) is a method that normalises the image on local patches on a pixel basis [67]. A special case of this method is the whitening transform, which is obtained by subtracting the mean of a local neighbourhood from a particular pixel and dividing by its standard deviation:
\[y_{ij}^{S}=\left(y_{ij}-\mathbb{E}_{\hat{S}}\left\{y_{ij}\right\}\right)/\max\left(\sigma_{\min},\text{Std}_{\hat{S}}\left\{y_{ij}\right\}\right), \tag{7}\]
where \(\hat{S}\) is a local neighbourhood around the pixel \(i,j\) of window size \(S\). The whitening transform makes the image patches less correlated with each other and can highlight image features that were hidden in the raw image due to its low local contrast. This effect can be seen in Fig. 11(a), which shows a simulated ADF-STEM image of a random nanoparticle on a carbon support. The edge of the nanoparticle shows low contrast due to its reduced thickness, resulting in lower intensity values. Based on this observation, we introduce a multi-local whitening transform (MLWT) loss, which pays more attention to fine details independent of the intensity value. Specifically, the generated and the ground truth images are locally whitened using different window sizes of \(2x2\), \(4x4\), \(8x8\), and \(16x16\) pixels.
Using different window sizes for the calculation of the whitening transform, we ensure that the relevant features present in the image are highlighted independently of the pixel size. Figs. 11(b)-(e) show an enhancement of the edge of the nanoparticle as well as of the carbon support after applying the whitening transform to Fig. 11(a) using different window sizes.
Then, we calculate the average \(\mathcal{L}_{1}\) loss for these 4 images:
\[\mathcal{L}_{mlwt}=\frac{1}{4}\sum_{S=2,4,8,16}\mathbb{E}_{y^{S},y^{S}_{g}}\left\{\left\|y^{S}-y^{S}_{g}\right\|\right\}. \tag{8}\]
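A sketch of Eqs. (7)-(8) using average pooling to obtain the local means and standard deviations; the padding and cropping details are our own choice, and other boundary handling would be equally valid:

```python
import torch
import torch.nn.functional as F

def whiten(img, s, sigma_min=1e-3):
    """Local whitening transform of Eq. (7) with an s x s window; img: (B, 1, H, W)."""
    h, w = img.shape[-2:]
    mean = F.avg_pool2d(img, s, stride=1, padding=s // 2,
                        count_include_pad=False)[..., :h, :w]
    sq = F.avg_pool2d(img * img, s, stride=1, padding=s // 2,
                      count_include_pad=False)[..., :h, :w]
    std = (sq - mean * mean).clamp(min=0.0).sqrt().clamp(min=sigma_min)
    return (img - mean) / std

def mlwt_loss(y, y_g):
    """Multi-local whitening transform loss of Eq. (8)."""
    return sum((whiten(y, s) - whiten(y_g, s)).abs().mean()
               for s in (2, 4, 8, 16)) / 4.0
```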
**Fourier space loss**
In electron microscopy, Fourier space contains crucial information about the sample and about distortions that may be difficult to discern in real space. To exploit this, we introduce the \(\mathcal{L}_{fs\text{-}\gamma}\) loss on the 2D Fourier transform of the difference between the generated data \(y_{g}\) and the ground truth image \(y\). However, high-frequency information typically possesses
Figure 11: (a) Undistorted ADF STEM image of a nanoparticle on a carbon support. Images (b)-(e) are generated by applying the whitening transform to (a) using window sizes of (b) 2, (c) 4, (d) 8 and (e) 16 pixels.
smaller values than low-frequency information. Consequently, to accentuate the high-frequency information, we apply a power transform to the aforementioned difference and define the loss function as follows:
\[\mathcal{L}_{fs\text{-}\gamma}=\mathbb{E}_{y,y_{g}}\left\{|\mathcal{F}(y-y_{g})|^{\gamma}\right\}, \tag{9}\]
Here, \(\mathcal{F}\) symbolises the 2D Fourier transform, and \(\gamma\) is a parameter in the range \((0.0,1.0]\). In our investigation, we utilise \(\gamma=0.125\).
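Eq. (9) likewise reduces to a few lines; a minimal sketch, assuming the plain (unshifted) 2D FFT and a simple mean over all frequencies:

```python
import torch

def fourier_loss(y, y_g, gamma=0.125):
    """Fourier-space loss of Eq. (9): mean of |F(y - y_g)|^gamma.

    The power transform with gamma in (0, 1] compresses the dynamic range,
    preventing low frequencies from drowning out high-frequency errors.
    """
    spec = torch.fft.fft2(y - y_g)
    return spec.abs().pow(gamma).mean()
```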
**Constraint losses**
Some important parameters for EM quantification are the total intensity and the standard deviation of the images. The reason for this is that they carry information about physical quantities of the sample or microscope, such as the number of atoms, defocus and spatial and temporal incoherence [68, 69]. Therefore, we encourage the restored images to preserve these quantities by minimising the following two loss functions:
\[\mathcal{L}_{mean} = \left\|\mathbb{E}_{y}\left\{y\right\}-\mathbb{E}_{y_{g}}\left\{ y_{g}\right\}\right\|, \tag{10}\] \[\mathcal{L}_{std} = \left\|\text{Std}_{y}\left\{y\right\}-\text{Std}_{y_{g}}\left\{ y_{g}\right\}\right\|. \tag{11}\]
**Adversarial loss**
The job of the relativistic adversarial loss is to fool the discriminator which can be expressed as:
\[\mathcal{L}_{Adv}=-\mathbb{E}_{x,y}\left\{\log\left(1-D_{Rap}(y,y_{g})\right)\right\}-\mathbb{E}_{y_{g}}\left\{\log\left(D_{Rap}(y_{g},y)\right)\right\}, \tag{12}\]
with \(D_{Rap}(y,y_{g})\) and \(D_{Rap}(y_{g},y)\) defined in equations 2 and 3, respectively. This definition is based on the binary cross entropy between the ground truth and the generated images. Different from the conventional adversarial loss, in which \(y\) is not used, our generator benefits from \(y\) and \(y_{g}\) in the adversarial training.
**Generator loss**
Our total generator loss function can be written as:
\[\mathcal{L}_{G} = \mathcal{L}_{pixel-wise}+\lambda_{Adv}\mathcal{L}_{Adv}, \tag{13}\] \[\mathcal{L}_{pixel-wise} = \lambda_{1}\mathcal{L}_{1}+\lambda_{2}\mathcal{L}_{2}+\lambda_{mlwt}\mathcal{L}_{mlwt}+\lambda_{fs-\gamma}\mathcal{L}_{fs-\gamma}+\lambda_{mean}\mathcal{L}_{mean}+\lambda_{std}\mathcal{L}_{std}, \tag{14}\]
where \(\mathcal{L}_{pixel-wise}\) is our pixel-wise loss function, and \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{mlwt}\), \(\lambda_{fs-\gamma}\), \(\lambda_{mean}\), \(\lambda_{std}\) and \(\lambda_{Adv}\) are weighting parameters balancing the different loss terms.
**Discriminator loss**
Symmetrically to the relativistic adversarial loss, the relativistic discriminator is trying to predict the probability that real data is relatively more realistic than generated data, and it can be expressed as:
\[\mathcal{L}_{D}=-\mathbb{E}_{x,y}\left\{\log\left(D_{CRap}(x,y,y_{g})\right) \right\}-\mathbb{E}_{x,y_{g}}\left\{\log\left(1-D_{CRap}(x,y_{g},y)\right) \right\}. \tag{15}\]
### Data generation
While it is possible to fully describe the electron-specimen interaction and image formation in an electron microscope, generating realistic EM image simulations for specimens on a support with sizes of a few nanometers is too time-consuming even with the most powerful GPU implementations of the multislice method [42, 43]. However, our goal is to train a neural network to correct EM distortions without the need to know the specific specimen or microscope settings. Therefore, we only need to generate undistorted images that closely mimic the appearance of real EM data, while the EM distortions must be accurately modelled. The generated undistorted images should also include physical parameters of the specimen and microscope settings, such as atomic sizes, atomic distances, atomic vibrations, lattice parameters, and relative intensities of atomic species, as well as acceleration voltage, aberrations, magnification, detector sensitivity, detector angles, and the transfer function of the detection system.
#### Specimen generation
In order to optimise the simulation process, we generate a specimen that fully covers the extended simulation box size \(\hat{I}^{e}_{xyz}\), an expanded version of the required simulation box size \(\hat{I}_{xyz}\). The calculation of \(\hat{I}_{xyz}\) starts by randomly selecting a pixel size \(\text{d}r\) within the range \([0.025,0.90]\)Å. Using the required image size \((n_{x},n_{y})\), \(n_{z}=\max(n_{x},n_{y})\) and \(\text{d}r\), the required simulation box size can be expressed as \(\hat{I}_{xyz}=\{n_{x}\text{d}r,n_{y}\text{d}r,n_{z}\text{d}r\}\). From these values, an extended number of pixels \(n^{e}_{i}=n_{i}+\text{round}(d_{ext}/\text{d}r)\) and an extended simulation box size \(\hat{I}^{e}_{xyz}=\{n^{e}_{i}\text{d}r,n^{e}_{i}\text{d}r,n^{e}_{i}\text{d}r\}\) are obtained, where \(d_{ext}\) is the maximum correlation distance for a given value of scanning distortions. The specimen generation is divided into three steps.
The first step of specimen generation involves randomly selecting a specimen type from the following options: crystalline specimen, amorphous specimen, or individual points. If the selected specimen is crystalline, the generation process starts by randomly choosing up to 16 unique atomic types with atomic number \(Z\) in the range \([1,103]\). The crystallographic space group is randomly chosen from a range \([1,230]\). The lattice parameters and the angles of the chosen space group are selected randomly from a range \([3.1,25.0]\)A and \([45^{\circ},120^{\circ}]\), respectively. Atomic positions of the asymmetric unit cells are generated randomly within the volume that is allowed by their space-group symmetry. This specimen generation process is subject to a physical constraint: after applying the space group symmetry to the atomic positions on the asymmetric unit cells, the minimum distance between the atoms in the unit cell must be within the range \([0.95,7.0]\)A. If this requirement is not met, the generation process is restarted. The generation of amorphous specimens is based on randomly choosing only one atomic number \(Z\) from the range \([1,103]\). The atomic positions of amorphous specimens are generated by randomly placing atoms within the extended simulation box, subject to the requirement that the minimum distance between atoms is within the range \([0.95,1.6]\)A. This process continues until the desired density within the range \([2.0,7.0]g/cm^{3}\) is achieved. In contrast, the generation of individual points starts by randomly choosing a number of points within a given range of positive integers. The 3D positions of the particles are then generated randomly within the extended simulation box, subject to the requirement that the minimum distance between particles is within the range \([1,20]dr\). This option is also used to generate low-resolution images.
The second step begins by randomly choosing between a specimen orientation along the zone axis or a random orientation. The probability of choosing a zone axis orientation is 0.75. If the specimen is crystalline, the zone axis orientation is randomly chosen from the first eight main zone axes, and a small random mistilt angle is generated for the chosen orientation using a normally distributed random number with a standard deviation of \(5^{\circ}\). For non-crystalline specimens, a random 3D orientation is generated. To prevent alignment of crystalline specimens along the \(xy\) directions, an additional random rotation is applied along the \(z\) axis. For a given generated orientation, the specimen is oriented and cropped in the \(xy\) plane so that it fits within the extended simulated box. This is followed by a random generation of a wedge on the specimen with a probability of 0.75. The wedge can be generated on the top, bottom, or both surfaces of the specimen, each with a probability of occurrence of 0.33. The wedge orientation is generated randomly in the \(xy\) plane, and its angle is chosen randomly from the range \([5^{\circ},45^{\circ}]\). Shapes can be applied to the specimen with a probability of 0.5. To avoid any preference for the three different types of shapes, the probability of occurrence for each type is set to 0.33. The first type of shape is a polygon rod, for which the number of cross-section vertices sliced along its length is randomly chosen from the range \([3,15]\). The rod is also placed and oriented randomly. The radius of the polygon is chosen randomly from the range \([0.01,0.5]\max(\hat{l}_{xyz})\). The second shape is a convex polyhedron, for which the radius and the number of vertices are chosen randomly from the ranges \([0.01,0.5]\max(\hat{l}_{xyz})\) and \([4,20]\), respectively. The third shape is a hard shape, in which all atoms on one side of a randomly generated 3D plane parallel to the \(z\) axis are removed. The application of a chosen shape can be used to either remove or keep the atoms of the specimen, with a probability of keeping the atoms of 0.5. Defects are generated randomly with a probability of 0.8. The process starts by randomly selecting a number of atoms, \(n_{sel}\), within the specimen. This number is chosen randomly from the range \([0,n_{max}]\), where \(n_{max}\) is equal to the number of atoms in the specimen multiplied by 0.25 and rounded to the nearest whole number. The positions of the selected atoms are randomly changed with a probability of 0.5. This is done by adding a normally distributed random number with a standard deviation equal to the atomic radius to the position of each selected atom.
The final step of specimen generation adds a support layer with a probability of 0.95. The support layer can be either crystalline or amorphous, each with a probability of 0.5. The thickness of the support layer is chosen randomly from the range \([1,30]\)nm. The process described above for crystalline and amorphous specimen generation is used for the support layer, with the exception of shape generation. Finally, the generated atoms are added to the specimen.
_Undistorted data generation_
**High/medium resolution electron microscopy data** can be synthesized as a linear superposition of the projected signal of each atom in the specimen at a given orientation. Moreover, each projected atomic signal can be modelled as a radially symmetric two-dimensional function, \(f_{Z}^{i}(r)\), where the index \(i\) refers to an atom with atomic number \(Z\) in the specimen. Under this assumption, \(y\) can be expressed as:
\[y=\sum_{Z}\sum_{i}f_{Z}^{i}(|\mathbf{r}-\mathbf{r}_{i}|), \tag{16}\]
where \(\mathbf{r}\) is a two-dimensional vector. Additionally, we model \(f_{Z}(r)\) for each atom with atomic number \(Z\) as a weighted sum of Gaussian, Exponential, and Butterworth functions:
\[f_{Z}(r)=h_{1}e^{-\frac{r^{2}}{2(r_{Z}^{m})^{2}}}+h_{2}e^{-\frac{r}{r_{Z}^{m}}}+\frac{h_{3}}{1+(r/r_{Z}^{m})^{2n}}, \tag{17}\]
where \(h_{1}\), \(h_{2}\), \(h_{3}\), \(n\) and \(r_{Z}^{m}\) are the parameters of our model, which are restricted to positive values. This parameterization has three benefits. First, it accurately models almost any simulated/experimental incoherent EM image. Second, it allows for an easy
inclusion of physical constraints. Third, it only requires 5 parameters. To allow realistic tails of \(f_{Z}(r)\), we constrain \(n\) to be a uniform random variable between \([4.0,16.0]\). We would also like to emphasize that all numerical ranges for the data generation were fine-tuned based on analyzing around 2000 real simulations of (S)TEM images for different specimens and microscope settings.
In order to encode physical information into this model, \(r_{Z}^{m}\) is chosen proportionally to the transformed two-dimensional mean square radius, \(\hat{r}_{Z}\), of the projected atomic potential, \(V_{Z}^{p}(r)\)[70]:
\[r_{Z}^{m}=a\times\left(\hat{r}_{Z}\right)^{\alpha}+b \tag{18}\]
where
\[a = \mathrm{Std}_{Z}\left\{\hat{r}_{Z}\right\}/\mathrm{Std}_{Z}\left\{ \left(\hat{r}_{Z}\right)^{\alpha}\right\}, \tag{19}\] \[b = \mathbb{E}_{Z}\left\{\hat{r}_{Z}\right\}-a\times\mathbb{E}_{Z} \left\{\left(\hat{r}_{Z}\right)^{\alpha}\right\},\] (20) \[\hat{r}_{Z} = \left[\frac{\int_{0}^{\infty}r^{2}V_{Z}^{p}(r)r\mathrm{d}r}{\int _{0}^{\infty}V_{Z}^{p}(r)r\mathrm{d}r}\right]^{1/2} \tag{21}\]
and \(\alpha\) is a uniform random variable between \([0.75,1.25]\). On the other hand, the linear coefficients \(h_{1}\), \(h_{2}\) and \(h_{3}\) are randomly chosen within the range \([0.5,1.0]\) with the following constraint:
\[\int f_{Z_{i}}(r)dr>\int f_{Z_{j}}(r)dr,\text{ if }Z_{i}>Z_{j} \tag{22}\]
where \(Z_{i}\) and \(Z_{j}\) are the atomic numbers of two elements of the specimen. This constraint arises from the fact that the integrated intensity of quasi-incoherently scattered electrons for a given atomic number is proportional to \(Z^{\gamma}\), in which \(\gamma\) is a real number between \(1.0\) and \(2.0\) depending on the microscope settings [71].
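A minimal sketch of the image model of Eqs. (16)-(17), assuming, as written above, that the Gaussian, exponential and Butterworth terms share the width \(r_{Z}^{m}\); the per-atom rendering loop is illustrative and unoptimised (a real generator would vectorise over atoms):

```python
import numpy as np

def f_z(r, h, r_m, n):
    """Projected atomic signal of Eq. (17): Gaussian + exponential + Butterworth."""
    h1, h2, h3 = h
    return (h1 * np.exp(-r**2 / (2 * r_m**2))
            + h2 * np.exp(-r / r_m)
            + h3 / (1 + (r / r_m)**(2 * n)))

def render_image(shape, px, atoms):
    """Linear superposition of Eq. (16).

    atoms: iterable of (x0, y0, h, r_m, n), positions in the same units as px.
    """
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx] * px       # real-space coordinate grids
    img = np.zeros(shape)
    for x0, y0, h, r_m, n in atoms:
        r = np.hypot(xx - x0, yy - y0)
        img += f_z(r, h, r_m, n)
    return img
```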
The process of **generating low-resolution images** begins by randomly choosing a set of low-resolution image types from the following options: soft particles, sharp particles, grains, bands, boxes, and cracks. This stage uses the specimen type "individual points" to generate random positions where different objects will be placed. Finally, the low-resolution image is obtained by linearly superimposing these individual objects.
The generation of soft particles starts by randomly choosing a number of particles in the range \([15,85]\). Each soft particle image is generated by randomly rotating the asymmetric version of Eq. 17, where \(r_{Z}^{m}=(r_{Z}^{m_{x}},r_{Z}^{m_{y}})\) and \(r_{Z}^{m_{y}}=\alpha r_{Z}^{m_{x}}\), with \(\alpha\) a random variable in the range \([0.8,1.2]\). In the case of sharp particles, there is a sharp transition between the border and background of the particle, and the particle can be either polygonal or elliptical with equal probabilities of occurrence. The process starts by randomly choosing a number of particles in the range \([15,40]\). For the polygon option, the number of vertices is randomly chosen in the range \([3,5]\). Each sharp particle image is generated by masking a 3D random positive plane intensity with its randomly rotated shape. This masking creates an intensity gradient over the \(x-y\) plane such that the object does not appear flat.
Grain generation in \(2D\) is performed using the Voronoi tessellation method [72], which is one of the available techniques for producing random polygonal grains within a domain. This process starts by randomly selecting a number of points within the range \([15,175]\). Each grain image is created by masking a 3D random positive plane with its corresponding Voronoi cell. Additionally, the grain borderline is included with a probability of occurrence of 0.5, where its intensity value is randomly assigned within the range \([0.5,1.5]\times\mathrm{mean}(\mathrm{grain\ intensity})\).
EM images may exhibit contrast inversion related to the projected specimen, which can be easily simulated by inverting the image:
\[y\leftarrow\max(y)-y. \tag{23}\]
The probability of this mechanism occurring was set to 0.5. To introduce non-linear dependence between the generated image intensity and the projected specimen's structure, \(y\) is non-linearly transformed with a probability of occurrence of 0.5:
\[y\leftarrow|y|^{\beta} \tag{24}\]
where \(\beta\) is a uniform random number selected from the range \([0.5,1.5]\).
To further break this linearity, a random background was added to \(y\). The background is randomly chosen between a 3D plane and a Gaussian, with an occurrence probability of 0.5 for each. In the first case, a randomly orientated positive 3D plane is generated with a random height between \([0,\mathrm{max}(y)/2]\). In the second case, the Gaussian centre and its standard deviation are
randomly chosen within the range of the \(xy\) simulation box size and \([0.2,0.6]\times\min(n_{x},n_{y})\), respectively. From the analysis of the experimental and simulated data, we found that the ratio \(r_{std/mean}=\text{Std}\left\{y\right\}/\mathbb{E}\left\{y\right\}\) is between \([0.01,0.35]\). Therefore, if the EM image does not fulfill the latter constraint, then it is linearly transformed as:
\[y\gets cy+d \tag{25}\]
where \(c\) and \(d\) are chosen to bring \(r_{std/mean}\) within the range of the constraint. Finally, the EM image is normalized through dividing by its maximum value.
\[y\leftarrow\frac{y}{\max(y)} \tag{26}\]
Note that the correct parameterization of the model and the randomness of its parameters are subject to physical constraints, which allows information about atomic size, atomic vibration, relative intensities between atomic species, detector angle, acceleration voltage, aberrations and/or detector sensitivity to be encoded in the generated high/medium-resolution EM image.
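The random post-processing chain of Eqs. (23), (24) and (26) can be summarised in a few lines; this sketch omits the random plane/Gaussian background and the rescaling of Eq. (25) for brevity, and uses the occurrence probabilities quoted in the text:

```python
import numpy as np

rng = np.random.default_rng()

def postprocess(y):
    """Random contrast inversion, power transform and normalisation."""
    if rng.random() < 0.5:                  # Eq. (23): contrast inversion
        y = y.max() - y
    if rng.random() < 0.5:                  # Eq. (24): non-linear transform
        y = np.abs(y) ** rng.uniform(0.5, 1.5)
    return y / y.max()                      # Eq. (26): normalise to [0, 1]
```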
#### TEM noise model
The TEM noise model is based on the fact that TEM images are recorded using parallel illumination, and that most signal acquisitions for electrons are set up so that the detector output is directly proportional to the time-averaged flux of electrons reaching the detector. In case of TEM, the electrons are detected indirectly using a charge coupled device (CCD) sensor [73] or a complementary metal oxide semiconductor (CMOS) sensor [74], or directly using a direct electron detector [75].
For indirect detection, primary electrons are converted to photons in a scintillator, which are then directed to the CCD/CMOS sensor through a lens or fiber optic coupling. In contrast, for direct electron detectors, the CMOS sensor is directly exposed to the electron beam.
**TEM camera modulation-transfer function**
Scattering of incident electrons over the detector leads to the detection of electrons in multiple pixels, which can be quantitatively described using the modulation-transfer function (MTF). Because the effect of the MTF is to produce an isotropic smearing of features on the recorded TEM image, which in general cannot be distinguished from an undistorted TEM image recorded with other microscope settings, we embed this effect into the undistorted TEM image by convolving it with the point-spread function (PSF), which is the Fourier transform of the MTF:
\[y\gets y\otimes\text{PSF}. \tag{27}\]
The MTF itself can be separated into a rotationally symmetric part, \(\text{MTF}_{r}\), describing the spread of electrons in the detector, and a part describing the convolution over the quadratic area of a single pixel. This yields the following equation:
\[\text{MTF}=\text{MTF}_{r}\operatorname{sinc}(\pi u/2)\operatorname{sinc}(\pi v /2), \tag{28}\]
where the Fourier space coordinates \((u,v)\) are defined in units of the Nyquist frequency [76]. Furthermore, we found that the general shape of \(\text{MTF}_{r}\) can be expressed parametrically as:
\[\text{MTF}_{r}=ae^{-\frac{x^{2}}{2b^{2}}}+(1-a)e^{-\frac{x^{2}}{2c^{2}}}, \tag{29}\]
where \(a\), \(b\) and \(c\) are positive real numbers. These numbers are randomly generated until they fulfil the constraint that, on a numerical grid of 1000 points spanning 10 units of the Nyquist frequency, \(\text{MTF}_{r}\) is a positive and monotonically decreasing function.
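A sketch of how the MTF of Eqs. (28)-(29) can be embedded into an undistorted image via Eq. (27); the frequency-grid conventions are our own mapping of the formulas above:

```python
import numpy as np

def apply_mtf(y, a, b, c):
    """Embed the camera MTF (Eqs. 28-29) into the undistorted image (Eq. 27).

    Frequencies u, v are expressed in units of the Nyquist frequency.
    np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc(u/2) equals sinc(pi*u/2)
    with the unnormalised sinc of Eq. (28).
    """
    ny, nx = y.shape
    u = np.fft.fftfreq(nx, d=0.5)            # Nyquist frequency mapped to 1
    v = np.fft.fftfreq(ny, d=0.5)
    uu, vv = np.meshgrid(u, v)               # shapes (ny, nx)
    g2 = uu**2 + vv**2                       # squared radial frequency
    mtf_r = a * np.exp(-g2 / (2 * b**2)) + (1 - a) * np.exp(-g2 / (2 * c**2))
    mtf = mtf_r * np.sinc(uu / 2) * np.sinc(vv / 2)
    return np.fft.ifft2(np.fft.fft2(y) * mtf).real
```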
**TEM detector noise**
TEM detectors are subject to three main sources of noise: shot noise, dark current noise, and readout noise. These noise sources can be classified into two types: temporal and spatial noise. Temporal noise can be reduced by frame averaging, whereas spatial noise cannot. However, some spatial noise can be mitigated by using techniques such as frame subtraction or gain/offset correction. Examples of temporal noise discussed in this document include shot noise, reset noise, output amplifier noise, and dark current shot noise. Spatial noise sources include photoresponse non-uniformity and dark current non-uniformity. Each of these noise sources can lower the SNR of a sensor imaging device.
**Photon shot noise**
After the initial conversion of the incident electron to its photon counterpart, the generated photons hit the photosensor pixel area, liberating photo-electrons in proportion to the light intensity. Due to the quantum nature of light, there is an intrinsic uncertainty arising from random fluctuations when photons are collected by the photosensor. This uncertainty is described by the Poisson process \(\mathbb{P}\) with mean \(\alpha x\), where \(\alpha\) is a dose scale factor.
The distribution of \(\alpha\) is exponential, with a scale parameter of \(0.5\) and a range \([0.5,750]/\mathbb{E}\{y\}\). The use of the exponential distribution yields higher probabilities for the generation of images at lower doses which is the focus of our research. The division by \(\alpha\) in the equation below brings \(x\) back to its original range:
\[x\leftarrow\frac{\mathbb{P}(\alpha x)}{\alpha} \tag{30}\]
**Fixed-pattern noise**
Fixed-pattern noise (FPN) is a pixel gain mismatch caused by spatial variations in the thickness of the scintillator, fiber-optic coupling, substrate material, CCD bias pattern, and other artifacts that produce variations in the pixel-to-pixel sensitivity and/or distortions in the optical path to the CCD or in the CCD chip itself [77]. Since FPN is a property of the sensor, it cannot be fully eliminated. However, it can be suppressed using a flat-field correction procedure. We model the remaining distortion as a normal distribution \(\mathbb{N}\) with zero mean and standard deviation \(\sigma_{fpn}\).
\[x\gets x+x\mathbb{N}(0,\sigma_{fpn}) \tag{31}\]
**Dark-current noise**
Dark current is the result of imperfections or impurities in the depleted bulk Si or at the \(SiO_{2}/Si\) interface. These sites introduce electronic states in the forbidden gap which allows the valence electrons to jump into the conduction band and be collected in the sensor wells. This noise is independent of electron/photon-induced signal, but highly dependent on device temperature due to its thermal activation process [78].
**Dark-current nonuniformity**
Dark-current nonuniformity (DCNU) arises from the fact that pixels in a hardware photosensor cannot be manufactured exactly the same and there will always be variations in the photo detector area that are spatially uncorrelated, surface defects at the \(SiO_{2}/Si\) interface, and discrete randomly-distributed charge generation centers [79]. This means that different pixels produce different amounts of dark current. This manifests itself as a fixed-pattern exposure-dependent noise and can be modelled by superimposing two distributions. The Log-Normal distribution (\(ln\mathbb{N}\)) is used for the main body and the uniform (\(\mathbb{U}\)) distribution is used for the "hot pixels" or "outliers" [80].
\[\text{DCNU}\gets ln\mathbb{N}(\mu,\sigma)+\mathbb{U}(a,b) \tag{32}\]
with \(\mu\) the mean value, \(\sigma\) the standard deviation, \(a=\mu+5\sigma\), and \(b=\mu+8\sigma\).
**Dark-current shot noise**
Additional noise arises from the random arrival of electrons generated as part of the dark signal, which is governed by the Poisson process. To simulate a single frame, it is necessary to apply shot noise to the DCNU array.
\[x\gets x+\mathbb{P}(\text{DCNU}) \tag{33}\]
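The detector noise chain described so far (Eqs. 30-33) can be strung together as follows; the hot-pixel fraction and the application of the uniform tail of Eq. (32) to a sparse random subset of pixels are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng()

def tem_detector_noise(x, alpha, sigma_fpn, mu_dc, sigma_dc, p_hot=1e-4):
    """TEM detector noise chain of Eqs. (30)-(33); x must be non-negative."""
    x = rng.poisson(alpha * x) / alpha                  # Eq. (30): shot noise
    x = x + x * rng.normal(0.0, sigma_fpn, x.shape)     # Eq. (31): FPN
    dcnu = rng.lognormal(mu_dc, sigma_dc, x.shape)      # Eq. (32), main body
    hot = rng.random(x.shape) < p_hot                   # sparse "hot pixels"
    dcnu[hot] += rng.uniform(mu_dc + 5 * sigma_dc,
                             mu_dc + 8 * sigma_dc, hot.sum())
    return x + rng.poisson(dcnu)                        # Eq. (33): dark shot noise
```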
**Readout noise**
Readout noise is temporal noise and is generally defined as the combination of the remaining circuit noise sources between the photoreceptor and the ADC circuitry. This includes thermal noise, flicker noise and reset noise [81].
**Thermal noise**
Thermal noise arises from equilibrium fluctuations of an electric current inside an electrical conductor due to the random thermal motion of the charge carriers. It is independent of illumination and occurs regardless of any applied voltage. The noise is commonly referred to as Johnson noise, Johnson-Nyquist noise, or simply white noise. It can be modelled by the normal distribution with zero mean and an appropriate standard deviation \(\sigma\)[81].
\[x\gets x+\mathbb{N}(0,\sigma) \tag{34}\]
**Flicker noise**
Flicker noise, also known as \(1/f\) noise or pink noise, is often caused by imperfect contacts between different materials at a junction, including metal-to-metal, metal-to-semiconductor, and semiconductor-to-semiconductor. MOSFETs are used in the construction of CMOS image sensors, which tend to exhibit higher levels of \(1/f\) noise than CCD sensors [79]. The amount of flicker noise in a CCD sensor depends on the pixel sampling rate. The equation below describes the effect of flicker noise on a signal \(x\):
\[x\gets x+\mathcal{F}(\mathbb{N}(0,\sigma)/f) \tag{35}\]
Here, \(\mathcal{F}\) is the two-dimensional Fourier transform, \(\sigma\) is the appropriate standard deviation, and \(f\) is the reciprocal distance.
**Reset noise**
Before a measurement of the charge packet of each pixel is taken, the sense node capacitor of a specific row is reset to a reference voltage level. This causes all pixels in that row to be exposed to noise coming in through the reset line, transfer gate, or read transistor. As a result, images may have horizontal lines due to the fixed and temporal components of the noise. This type of noise, known as reset noise (RN), follows a normal distribution with mean zero and a standard deviation \(\sigma\). It can be simulated by adding a random intensity value, generated for each row, to the intensity values of all pixels in that row [80]:
\[x\gets x+\mathbb{N}(0,\sigma) \tag{36}\]
**Black pixel noise**
Black pixels are dots or small clusters of pixels on the sensor that have significantly lower response than their neighbors, resulting in black spots on the image. Some black pixels may be created during the production process of the CCD camera, while others may appear during its lifetime. Black pixels are time-invariant and will always appear at the same locations on the image. They can be modelled by generating a sensitivity mask (\(S_{\text{Black}}\)) with a spatially uniform distribution of a specified number of black points. Regions can be generated by applying a random walk process for a given number of random steps to the black point positions. The equation below describes the effect of black pixels on a signal \(x\):
\[x\gets xS_{\text{Black}} \tag{37}\]
**Zinger noise**
Zingers are spurious white dots or regions that can appear randomly in CCD images [82]. Electron-generated X-rays, cosmic rays, and muons can produce a burst of photons in the scintillator, resulting in white spots or streaks in the image. Radioactive elements (such as thorium) present in fiber-optic tapers can also cause zingers [77]. They can be modelled by generating a sensitivity mask (\(S_{\text{Zinger}}\)) with a spatially uniform distribution of a specified number of zinger points. Similar to the black pixel noise, regions can be generated by applying a random walk process for a given number of steps to the zinger point positions:
\[x\gets xS_{\text{Zinger}} \tag{38}\]
**Upper-clip noise**
Upper clip noise, also known as saturation noise, is a type of noise that occurs when the intensity value of a pixel exceeds the maximum value that the CCD sensor can detect. This causes the pixel to be "clipped" at the maximum value, resulting in an overly bright image with lost details. This type of noise can be modelled by setting a threshold value for the maximum intensity and clipping any pixel values above that threshold \(T_{u}\):
\[x\leftarrow\min(x,T_{u}) \tag{39}\]
**Quantisation noise**
To generate a digital image, the analog voltage signal read out during the last stage is quantized into discrete values using analog-to-digital conversion (ADC). This process introduces quantization noise, which can be modelled with respect to the ADC gain \(\alpha\):
\[x\leftarrow\text{round}(\alpha x) \tag{40}\]
Figure 12 shows simulated TEM images with different types of noise. These distortions have been randomly added to the images to mimic real TEM conditions and make it easier to identify the different types of noise.
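For completeness, a sketch of the remaining readout chain (Eqs. 34, 36, 39 and 40); flicker noise and the black-pixel/zinger sensitivity masks follow the same pattern and are omitted here:

```python
import numpy as np

rng = np.random.default_rng()

def readout_chain(x, sigma_th, sigma_rn, t_upper, gain):
    """Thermal noise, reset noise, upper clip and ADC quantisation."""
    x = x + rng.normal(0.0, sigma_th, x.shape)           # Eq. (34): thermal noise
    x = x + rng.normal(0.0, sigma_rn, (x.shape[0], 1))   # Eq. (36): one value per row
    x = np.minimum(x, t_upper)                           # Eq. (39): upper clip
    return np.round(gain * x)                            # Eq. (40): quantisation
```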
#### S(T)EM noise model
S(T)EM images are formed one pixel at a time by scanning a convergent electron beam along scan lines across the sample, with the beam remaining stationary at each position for a fixed time known as the dwell time. The dimension of each square-shaped pixel in physical space is determined by the magnification. The scanning direction is called the fast/row scan direction. For conventional scan patterns, the scanning begins at the top left corner and, after scanning one row of \(n\) pixels, the electron probe moves to the first pixel of the next row. The time required to move the beam to the beginning of the next scan line is commonly known as the fly-back time. Inaccuracies in beam positions during the scanning process give rise to characteristic scan-line/jitter distortions. Despite all technical improvements in the design of high-performance S(T)EM [3], the presence of these distortions in the recorded images still hampers the extraction of quantitative information about the sample under study [5].
**Scanning jitter distortion**
Scanning jitter is caused by beam instabilities while scanning a raster pattern across the sample during the image acquisition process. There are two distinguishable jitter effects: X-jitter causes random pixel shifts along the fast-scan direction, while
Y-jitter causes stretching or squeezing of scan lines or line interchanges along the slow-scan direction [11]. Due to the serial acquisition, these displacements are not completely random but depend on the previous scan position. Realistic modelling of scanning jitter distortion can be achieved using the Yule-Walker correlation scheme on time series [83, 84]. Furthermore, the fast and slow scanning directions can be modelled independently due to their different time scales. Here, we focus on displacement series in discrete pixels, in which each term of the series depends on the previous one. Mathematically, these displacement series can be described as:
\[\Delta_{t}^{k}=\begin{cases}\dfrac{a_{t}^{k}}{\sqrt{1-\phi_{t}^{2}}}&\text{if }k=1\\[2mm] \phi_{t}\,\Delta_{t}^{k-1}+a_{t}^{k}&\text{if }k>1\end{cases} \tag{41}\]
where \(t=x,y\) and \(k\) is the pixel index along a given \(t\) direction. \(\phi_{t}\) is the correlation coefficient, which describes the coupling between two consecutive values of the series and lies within the range \([0,1]\). \(a_{t}^{k}\) is a normally distributed random number with zero mean and standard deviation \(\sigma_{t}\). The distorted image is created by using bicubic interpolation to evaluate the image on the non-regular grid, which is built by adding the generated displacements to the positions of the regular grid.
\[x\leftarrow\text{SJ}(y) \tag{42}\]
The described effects of individual jitter distortions for \(\sigma_{x}=\sigma_{y}=0.75\) and \(\phi_{x}=\phi_{y}=0.6\) along the fast and slow scan directions can be seen in Fig. 13(a) and Fig. 13(b), respectively. Fig. 13(c) shows the undistorted ADF STEM random generated image. Based on our analysis of experimental data, we set the occurrence probability of jitter distortion to 0.9. In addition, we assign the occurrence probability of the X-jitter, Y-jitter and the XY-jitter to 0.25, 0.25 and 0.50, respectively. The values of \(\sigma_{t}\) and \(\phi_{t}\) are randomly chosen within the range \([0.0025,0.8]\)A and \([0.0,0.7]\), respectively.
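The displacement series of Eq. (41) is a first-order autoregressive process and can be generated directly; resampling the image on the displaced grid (Eq. 42) can then be done with any bicubic interpolator, e.g. scipy.ndimage.map_coordinates with order=3:

```python
import numpy as np

rng = np.random.default_rng()

def jitter_displacements(n, phi, sigma):
    """AR(1) displacement series of Eq. (41) for one scan direction t."""
    a = rng.normal(0.0, sigma, n)            # a_t^k
    d = np.empty(n)
    d[0] = a[0] / np.sqrt(1.0 - phi**2)      # stationary initial condition (k = 1)
    for k in range(1, n):
        d[k] = phi * d[k - 1] + a[k]         # k > 1
    return d
```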
**S(T)EM detector noise**
Electrons are detected by a scintillator coupled to a photomultiplier tube (PMT) via a mirror or reflective tube. The impact of the incident electrons on the scintillator causes photons to be emitted, which are directed to the PMT through a light pipe. The PMT consists of a photocathode that emits photoelectrons when illuminated by these photons, followed by a series of stages amplifying the signal. The resulting current at the anode can be measured using conventional ADC electronics [8]. Modelling the electron multiplication as a series of Poisson events, the full width at half maximum (FWHM) of the pulse at the anode
Figure 12: Simulated TEM images with random distortions showing the various types of noise.
per single incident electron is given by [85]:
\[\text{FWHM}=2\sqrt{2\log 2}\,m_{c}\eta G\sqrt{\frac{1-\eta+\frac{1}{\delta-1}}{m_{c}\eta}+\frac{\delta_{c}^{2}}{m_{c}^{2}}} \tag{43}\]
This equation assumes that the secondary gain \(\delta\) at each stage inside the PMT is the same. In this equation, \(G\) represents the PMT gain, \(\eta\) is the detective quantum efficiency, \(m_{c}\) is the number of photons collected per incident electron, and \(\delta_{c}^{2}\) is the variance of that number [85]. A good approximation for the noise spectrum of a photomultiplier is the Poisson distribution, which can be approximated by a Gaussian distribution for large means. Since around 100 photons reach the cathode of the photomultiplier for each electron reaching the scintillator, a Gaussian approximation can be used with standard deviation
\[\sigma=m_{c}\eta G\sqrt{\frac{1-\eta+\frac{1}{\delta-1}}{m_{c}\eta}+\frac{ \delta_{c}^{2}}{m_{c}^{2}}} \tag{44}\]
In addition, the number of electrons hitting the scintillator follows a Poisson process (\(\mathbb{P}\)) [86]. The signal can therefore be constructed in two steps:
\[x\leftarrow\mathbb{P}(\alpha x) \tag{45}\]
\[x\leftarrow(x+\mathbb{N}(0,\sigma))/\alpha \tag{46}\]
where \(\alpha\) is a dose scale factor. Dividing by \(\alpha\) in the latter equation brings \(x\) back to approximately its original range.
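A minimal sketch of this two-step construction, assuming an image `x` scaled to [0, 1]; the default parameter values are illustrative assumptions only:

```python
import numpy as np

def detector_noise(x, alpha=100.0, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.poisson(alpha * x)                      # Eq. (45): electron counting
    counts = counts + rng.normal(0.0, sigma, x.shape)    # Eq. (46): Gaussian PMT spread
    return counts / alpha                                # back to the original range
```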
**Fast scan noise**
Fast scan noise arises due to the use of short dwell times during data acquisition and appears as horizontal blur in the recorded images. This effect can also be seen in the Fourier domain as a damping effect on the high frequencies in the horizontal direction. This blurring is caused by the finite decay time of the detection system, which consists of a scintillator, a photomultiplier, and additional readout electronics [86, 87]. In addition to blurring in the horizontal direction, fast scans may introduce other artifacts due to the limited response time of the scan coils. In particular, strong distortions may appear on the left-hand side of the images due to the discontinuity in the scan pattern between consecutive lines. This can be avoided by using a small delay (flyback time) between scanning lines. The optimal value of this delay is hardware-specific, but results in additional dose to the sample, which will be localized on the left-hand side of each image [88]. In general, the effect of fast scan distortion can be modelled by convolution in one dimension along the fast-scan direction between \(x\) and the point spread function (PSF) of the system. After careful analysis of the experimental data, we find that the PSF of the system can be decomposed into contributions from the detector and the readout system.
\[\text{Im}_{fsd}(x,y)=\text{Im}\ast\text{psf}_{detector}\ast\text{psf}_{readout} \tag{47}\]
Figure 13: Images (a) and (b) are jitter-distorted along the fast and slow scan directions, respectively. (c) Undistorted ADF STEM image of a random sample.
with
\[\text{psf}_{detector}=\left\{\begin{array}{cc}\frac{\alpha}{4\pi^{2}x^{2}+\alpha^{2}}&:x\geq 0\\ 0&:x<0\end{array}\right. \tag{48}\]
\[\text{psf}_{readout}=\left\{\begin{array}{cc}ae^{-x/\beta}\sin(2\pi x/\gamma+\theta)&:x\geq 0\\ 0&:x<0\end{array}\right. \tag{49}\]
where
\[a=\frac{\beta\gamma(\gamma\sin(\theta)+4\pi\beta\cos(\theta))}{\gamma^{2}+16\pi^{2}\beta^{2}} \tag{50}\]
is the normalization factor which ensures that the total integral of \(\text{psf}_{readout}\) is equal to 1, \(x\) is the position along the fast-scan direction in real space, and \(\alpha\) is the parameter of the Lorentzian function that describes the PSF of the detector. The parameters \(\beta\), \(\gamma\), and \(\theta\) are the parameters of the damped harmonic oscillator used to describe the PSF of the readout system. The model parameters were obtained by fitting to experimental images; random variations of the fitted parameters are applied when generating training data.
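The following sketch applies Eqs. (47)-(49) as a causal 1-D convolution along each scan line. The discrete sampling of the PSFs, the kernel length, and the parameter values are our own assumptions for illustration:

```python
import numpy as np

def sampled_psf(fn, n_taps=64):
    x = np.arange(n_taps, dtype=float)   # causal support x >= 0, in pixels
    p = fn(x)
    return p / p.sum()                   # discrete normalisation

def fast_scan_blur(img, alpha=2.0, beta=3.0, gamma=8.0, theta=0.3):
    lorentz = sampled_psf(lambda x: alpha / (4 * np.pi**2 * x**2 + alpha**2))
    readout = sampled_psf(lambda x: np.exp(-x / beta) * np.sin(2 * np.pi * x / gamma + theta))
    kernel = np.convolve(lorentz, readout)                 # combined PSF of Eq. (47)
    pad = len(kernel) - 1
    padded = np.pad(img, ((0, 0), (pad, 0)), mode='edge')  # left padding keeps it causal
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):                          # blur each scan line
        out[i] = np.convolve(padded[i], kernel, mode='valid')
    return out
```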
**Row-line noise**
Row-line (RL) noise arises due to the non-response of the detector over some pixels during the scanning process along the fast-scan direction. This noise can be modelled by generating a random number of row-line segments of random length. The pixel intensities within each segment are replaced by their average intensity multiplied by a random factor within the range \([0.5,1.5]\). This can be represented as:
\[x\leftarrow\mathbb{RL}(x) \tag{51}\]
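A possible implementation, where the number and placement of the affected line segments are illustrative assumptions:

```python
import numpy as np

def row_line_noise(x, max_lines=5, seed=0):
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(rng.integers(1, max_lines + 1)):
        r = rng.integers(0, x.shape[0])               # affected scan line
        c0 = rng.integers(0, x.shape[1])              # random start ...
        c1 = rng.integers(c0 + 1, x.shape[1] + 1)     # ... and random length
        x[r, c0:c1] = x[r, c0:c1].mean() * rng.uniform(0.5, 1.5)
    return x
```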
**Black pixel noise**
Black pixels are randomly occurring pixels that have significantly lower values than their neighbouring pixels, causing black spots to appear in the image. These black pixels may result from information loss during data transmission, cosmic rays, or the detector's non-response. As black pixels are time-dependent, they can be modelled by generating a sensitivity mask (\(S_{\text{Black noise}}\)) with a spatially uniform distribution of a specified number of black points. This can be represented mathematically as:
\[x\gets xS_{\text{Black noise}} \tag{52}\]
However, in the case of SEM images, black spots in the images may be attributed to pores present in the sample, and hence, this type of distortion is not generated.
**Zinger noise**
Zingers are random white dots that appear in an image. They are caused by bursts of photons produced by electron-generated X-rays, cosmic rays, and muons in the scintillator [77]. Zinger noise can be simulated by creating a sensitivity mask (\(S_{\text{Zinger noise}}\)) with a spatially uniform distribution of a specified number of Zinger points.
\[x\gets xS_{\text{Zinger noise}} \tag{53}\]
**Upper-clip noise**
Upper clip noise, also known as saturation noise, occurs when the intensity value of a pixel exceeds the maximum value that the analog-to-digital converter can detect. This causes the pixel to be "clipped" at the maximum value, resulting in an overly bright image with lost details. This type of noise can be modelled by setting a threshold value for the maximum intensity and clipping any pixel values above that threshold \(T_{u}\).
\[x\leftarrow\min(x,T_{u}) \tag{54}\]
**Quantisation noise**
To generate an image in digital form, the analog voltage signal read out during the last stage is quantized into discrete values using an ADC with a gain \(\alpha\). This process introduces quantisation noise.
\[x\gets round(\alpha x) \tag{55}\]
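The four pointwise distortions above (Eqs. (52)-(55)) reduce to one-line array operations; the pixel counts, the Zinger gain, and the ADC gain below are illustrative assumptions:

```python
import numpy as np
rng = np.random.default_rng(0)

def black_pixel(x, n=20):                                  # Eq. (52)
    s = np.ones(x.shape)
    s[rng.integers(0, x.shape[0], n), rng.integers(0, x.shape[1], n)] = 0.0
    return x * s

def zinger(x, n=20, gain=10.0):                            # Eq. (53)
    s = np.ones(x.shape)
    s[rng.integers(0, x.shape[0], n), rng.integers(0, x.shape[1], n)] = gain
    return x * s

def upper_clip(x, t_u):                                    # Eq. (54)
    return np.minimum(x, t_u)

def quantise(x, alpha=255.0):                              # Eq. (55)
    return np.round(alpha * x)
```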
Figure 14 shows simulated STEM images with the different types of noise described above. These distortions were randomly added to the images to simulate real STEM conditions and to make the different types of noise easier to identify.
#### Post-processing distortions
Post-processing distortions are typically added after the image is recorded. These distortions, such as interpolation and blurring, can affect the noise in the image in a non-linear way. Post-processing distortions can also include annotations and cropping, which replace part of the original image. Ideally, these distortions should be preserved by the restoration process.

**Interpolation distortions** may happen when a user applies a transformation function to the image before it is restored. This might be done to make the image suitable for further post-processing or to better visualise an area of interest. Interpolation distortion can be modelled by applying a random transformation, such as a random linear transformation matrix, to the training image pair.

**Gaussian blurring** is a way of distorting an image to reduce noise and improve the SNR. This is done by applying a 2D Gaussian function to the image with a given standard deviation \(\sigma\). Although this type of blurring can improve the quality of an image, it can also alter the distribution of noise in the image. Therefore, when restoring an image, the blurring must be removed along with the distortion. In our training set, we only applied random \(\sigma\) values between 0 and 1 pixel to the distorted images.

**Annotations** are added to an image to provide additional information or to highlight specific areas of the image. These can include text, shapes, and arrows, and may be added by the software or by the user. When creating training image pairs, we model the annotations by adding the same random annotations at the same pixel location in both the ground-truth and distorted images.

**Cropping** is a type of post-processing distortion that involves removing one or more areas of an image. This can be done manually by the user or automatically in a processing workflow, such as after the image has been shifted, rotated or aligned. The removed areas are usually filled in with a constant value or the median of the image's value range. When creating training image pairs, we model this process by randomly replacing the intensity values in a randomly selected area in both images. The selected area is typically outside a central square or rectangle, such as 50% of the total image area, to mimic the fact that cropping is typically not applied to the central region, which may already be adjusted to show the main feature of interest.
## Code and Data Availability
All of the trained models, alongside example scripts for using them, are available on the github repository [https://github.com/Ivanlh20/r_em](https://github.com/Ivanlh20/r_em). Additional material may be provided by the authors upon reasonable request.
Figure 14: Random distorted simulated STEM images showing the various types of noise.
## Acknowledgements
This work was supported by the European Research Council (Grant 770887 PICOMETRICS to S.V.A. and Grant 823717 ESTEEM3). The authors acknowledge financial support from the Research Foundation Flanders (FWO, Belgium) through project funding (G.0346.21N and EOS 40007495). S.V.A. acknowledges funding from the University of Antwerp Research fund (BOF). The authors thank Lukas Grunewald for data acquisition and support for figure 8.
## Author Contributions
I.L. and S.V.A. designed the study. I.L. created the mathematical models for the undistorted and distorted EM images, implemented, trained, and evaluated the NN models. T.F. conducted quantitative analysis of STEM images for the models. All authors contributed to the planning and execution of the research, discussed the results, and helped write the manuscript.
## Competing Interests
The authors declare no competing interests.
## Additional Information
**Supplementary information** is available for this article.
**Correspondence** and requests for materials should be addressed to I.L.([email protected]) or S.V.A. ([email protected]).
|
2303.04878 | DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep
Neural Networks | Deep neural networks (DNNs) are widely used in various application domains
such as image processing, speech recognition, and natural language processing.
However, testing DNN models may be challenging due to the complexity and size
of their input domain. Particularly, testing DNN models often requires
generating or exploring large unlabeled datasets. In practice, DNN test
oracles, which identify the correct outputs for inputs, often require expensive
manual effort to label test data, possibly involving multiple experts to ensure
labeling correctness. In this paper, we propose DeepGD, a black-box
multi-objective test selection approach for DNN models. It reduces the cost of
labeling by prioritizing the selection of test inputs with high fault revealing
power from large unlabeled datasets. DeepGD not only selects test inputs with
high uncertainty scores to trigger as many mispredicted inputs as possible but
also maximizes the probability of revealing distinct faults in the DNN model by
selecting diverse mispredicted inputs. The experimental results conducted on
four widely used datasets and five DNN models show that in terms of
fault-revealing ability: (1) White-box, coverage-based approaches fare poorly,
(2) DeepGD outperforms existing black-box test selection approaches in terms of
fault detection, and (3) DeepGD also leads to better guidance for DNN model
retraining when using selected inputs to augment the training set. | Zohreh Aghababaeyan, Manel Abdellatif, Mahboubeh Dadkhah, Lionel Briand | 2023-03-08T20:33:09Z | http://arxiv.org/abs/2303.04878v5 | # DeepGD: A Multi-Objective Black-Box Test Selection Approach for Deep Neural Networks
###### Abstract.
Deep neural networks (DNNs) are widely used in various application domains such as image processing, speech recognition, and natural language processing. However, testing DNN models may be challenging due to the complexity and size of their input domain. Particularly, testing DNN models often requires generating or exploring large unlabeled datasets. In practice, DNN test oracles, which identify the correct outputs for inputs, often require expensive manual effort to label test data, possibly involving multiple experts to ensure labeling correctness.
In this paper, we propose _DeepGD_, a black-box multi-objective test selection approach for DNN models. It reduces the cost of labeling by prioritizing the selection of test inputs with high fault-revealing power from large unlabeled datasets. _DeepGD_ not only selects test inputs with high uncertainty scores to trigger as many mispredicted inputs as possible but also maximizes the probability of revealing distinct faults in the DNN model by selecting diverse mispredicted inputs.
The experimental results conducted on four widely used datasets and five DNN models show that in terms of fault-revealing ability: (1) White-box, coverage-based approaches fare poorly, (2) _DeepGD_ outperforms existing black-box test selection approaches in terms of fault detection, and (3) _DeepGD_ also leads to better guidance for DNN model retraining when using selected inputs to augment the training set.
Deep Neural Network, Test Selection, Diversity, Uncertainty, Faults.
## 1. Introduction
Deep Neural Networks (DNNs) have become widely used in a variety of application areas, including image processing (Abadi et al., 2016), medical diagnostics (Bahdan et al., 2017), and autonomous driving (Bahdan et al., 2017). However, DNNs may produce unexpected or incorrect results that could lead to significant negative consequences or losses. Therefore, effective testing of such models is crucial to ensure their reliability (Bahdan et al., 2017). For testing and enhancing the performance of DNN-driven applications, a significant amount of labeled data is required. However, obtaining the correct output labels for a large amount of unlabeled data, referred to as oracle information, can be a challenging and resource-intensive task (Bahdan et al., 2017). This process often requires extensive manual efforts by domain experts and can be a significant challenge when resources are limited. This makes it difficult to effectively test and improve the performance of DNN models. In order to address these challenges and make DNN testing feasible and cost-effective in practice, it is essential to select a small subset of test inputs that possess a high fault-revealing power. This approach reduces the number of inputs required for DNN testing, making it more cost-effective.
Several test selection approaches for DNN models have been proposed in recent years (Bahdan et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al.
terms of fault-revealing power and computational time. However, they did not study how such a metric can be used for DNN test selection.
In this paper, we propose _DeepGD_, a black-box multi-objective search-based test selection approach. It relies on the Non-dominated Sorting Genetic Algorithm (NSGA-II) (Shi et al., 2017) to select test inputs that aim at revealing a maximal number of diverse faults in DNN models within a limited testing budget. _DeepGD_ prioritizes selecting a subset of test inputs with high fault-revealing power from a large unlabeled dataset. The main objectives of _DeepGD_ are twofold: (1) trigger as many mispredicted inputs as possible by selecting inputs with high uncertainty scores, and (2) maximize the probability of revealing distinct faults in the DNN model by selecting diverse mispredicted inputs. Such an approach alleviates the lack of effectiveness of existing techniques relying exclusively on model output uncertainty scores when used on insufficiently trained DNNs.
Similar to failures in traditional software systems, many mispredictions result from the same faults in the DNN model and are therefore redundant (Shi et al., 2017; Shi et al., 2017). This is why testing studies typically focus on faults, not failures (Shi et al., 2017; Shi et al., 2017). Detecting faults in DNNs is however not as straightforward as in traditional software and it is a challenging task to find the root cause of mispredictions. To validate the effectiveness of _DeepGD_, we use a fault estimation approach proposed by Aghababaeyan _et al._ (Aghababaeyan et al., 2017) which is based on clustering mispredicted inputs (Shi et al., 2017). Each cluster represents a distinct fault in the DNN model since it embeds similar inputs that are mispredicted due to the same root causes.
In this paper, we present an empirical evaluation of the effectiveness of _DeepGD_ for selecting test input sets with high fault-revealing power using five widely-used DNN models and four image recognition datasets. The results are compared with nine SOTA baseline approaches, both white-box and black-box, that have been recently published. The experimental results demonstrate that _DeepGD_ shows a statistically significant and consistent improvement compared to SOTA approaches in terms of its ability to reveal DNNs faults.
Specifically, results show that _DeepGD_ is consistently the best approach and up to 6 percentage points (pp) better than the second-best alternative, and 13 pp better than the worst black-box alternative, when excluding random selection (RS) since RS showed far inferior results in general. It is important to note that the ranking of alternatives varies across datasets, models, and test set sizes. Consequently, selecting any approach other than DeepGD may end up being the worst choice and lead to significant differences in performance. Further, we also investigate the effectiveness of _DeepGD_ in guiding the retraining of DNNs and demonstrate that it consistently provides better results with that respect. _DeepGD_ is therefore the only technique we can confidently recommend. To summarize, the key contributions of this paper are as follows:
* We propose a black-box test selection approach (_DeepGD_) for DNNs that relies on a customized multi-objective genetic search and uses both diversity and uncertainty scores to guide the search toward finding test input sets with high fault-revealing power.
* Unlike existing test selection approaches, we consider in our validation a clustering-based approach to estimate faults in DNN models since test input sets typically contain many similar mispredicted inputs caused by the same problems in the model (Shi et al., 2017; Shi et al., 2017). We explain why this is important to evaluate any test set selection approach based on faults.
* We conduct a large-scale empirical evaluation to validate _DeepGD_ by considering five DNN models, four different datasets, and nine SOTA test selection approaches for DNNs as baselines of comparison. We show that _DeepGD_ provides better guidance than baselines for (1) selecting inputs with high fault-revealing power, and (2) improving the performance of the model through retraining based on an augmented training set.
## 2. Approach: Reformulation as an NSGA-II Search Problem
A central problem in testing DNNs, especially when the labeling of test data is costly, is the selection of a small set of test inputs with high fault revealing power. In this paper, we aim to support the testing of DNN models by relying on _DeepGD_, a black-box search-based test selection approach using a genetic algorithm to select small sets of test inputs with high fault-revealing power in DNN models. Intuitively, testers should select a set of diverse test inputs with high failure probabilities in order to be more likely to detect as many diverse faults as possible (Shi et al., 2017). Due to the combined high labeling cost of test inputs and large input space, we rely on NSGA-II to select test inputs with high fault-revealing capability. Such inputs are then selected for labeling and will be used to effectively test the DNN model. We choose NSGA-II since it is widely used in the literature and showed its performance to solve many search-based test selection problems (Shi et al., 2017; Shi et al., 2017). We also rely on NSGA-II since it is specifically adapted to our multi-objective search problem. It tries to find solutions with diverse trade-offs between fitness functions instead of covering all fitness functions separately (which is the case of MOSA, the many objective sorting algorithm (Shi et al., 2017) for example). Specifically, the search is driven by two objectives: (1) maximizing the uncertainty score of the test inputs to trigger a maximum number of mispredictions, and (2) maximizing the diversity of the test input set to trigger diverse mispredictions caused by distinct faults. To properly translate the process into a search problem using NSGA-II, we need to define the following elements.
### Individuals and Initial Population
In genetic algorithms, individuals consist of a set of elements called genes. These genes are connected and form an individual that is also called a solution. In our approach, we assign a unique identifier \(id_{i}\) to each input \(i\) in the test dataset where \(id_{1\leq i\leq n}\in[1,2,..,n]\) and \(n\) is the size of the test dataset. Our test selection problem has a fixed budget \(\beta<n\) which corresponds to the total number of inputs selected from the original test dataset to test the DNN model. In our search problem, an individual, therefore, corresponds to a subset of inputs of size \(\beta\). Each gene forming an individual corresponds to an \(id\) of a test input in the test dataset. In our context, each individual contains \(\beta\) distinct test inputs. We use random selection to build our initial population of individuals.
### Fitness Functions
Our search is guided by two fitness functions: (1) maximizing the uncertainty score of the selected inputs to trigger as many mispredicted inputs as possible, and (2) maximizing the diversity of the selected inputs to maximize the probability of revealing distinct faults in the DNN model. We will describe next the two fitness functions and detail how to compute them.
#### 2.2.1. **Gini Score**
We consider the _Gini_ score to estimate the likelihood of a test input being mispredicted. Feng _et al._[20] proposed this metric to measure the classification uncertainty of DNN models and therefore identify potential failing test inputs. Intuitively, a test input is likely to be misclassified by a DNN if the model is uncertain about the classification and outputs similar probabilities for each class [20]. We choose this metric since it has been widely used in the literature and showed good performance in prioritizing test inputs for DNN models [20; 34]. It is also a black-box metric that only requires the output probabilities of DNN models as we will describe in the following. Given a test input \(x\) and a DNN model that outputs \(DNN(x)=<P_{x_{1}},P_{x_{2}},...,P_{x_{m}}>\), where \(P_{x_{1\leq i\leq m}}\) is the probability that input \(x\) belongs to class \(C_{i}\) and \(m\) is the total number of classes, the _Gini_ score of the test input \(x\) is defined as:
\[\zeta(x)=1-\sum_{i=1}^{m}P_{x_{i}}^{2} \tag{1}\]
The higher the _Gini_ score, the higher the DNN's uncertainty. We compute the _Gini_ score of a subset \(S=\{s_{1},s_{2},\ldots,s_{\beta}\}\) of size \(\beta\) as the average _Gini_ score of all inputs in the subset:
\[Gini(S)=\frac{\sum_{i=1}^{\beta}\zeta(s_{i})}{\beta} \tag{2}\]
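Both equations translate directly into a few lines of Python; `probs` is assumed to hold one softmax output vector per test input:

```python
import numpy as np

def gini(probs):                    # Eq. (1), one score per input
    return 1.0 - np.sum(probs**2, axis=1)

def subset_gini(probs, subset):     # Eq. (2), averaged over a candidate subset
    return gini(probs[subset]).mean()
```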
#### 2.2.2. **Geometric Diversity**
We consider geometric diversity (GD), one of the widely used metrics to measure the diversity of test input sets [16; 25; 35]. In a recent study, Aghababaeyan _et al._[16] investigated several coverage and diversity metrics and found that the GD metric is positively correlated to DNN faults and outperforms SOTA coverage metrics in terms of fault-revealing capabilities. In other words, when the geometric diversity of a test input set increases, its fault-revealing power increases as well since the diverse test input set will cover more faults in the DNN model. Furthermore, GD is a black-box diversity metric that requires neither knowledge about the model under test nor access to the training set. Consequently, we have relied on this metric as a second fitness function to guide the search towards finding a diverse test input set with high fault-revealing capability.
**Feature Extraction**. In order for diversity to account for the content of images, we need to first extract features from each input image and then compute the diversity based on those extracted features. In our work, we use VGG-16 [36], one of the SOTA feature extraction models. VGG-16 is a pretrained convolutional neural network model which was already trained on ImageNet [37], a huge dataset including over 14 million labeled images. As a result, it has learned rich feature representations for a wide range of images and datasets [36]. We extract features of each image in the test input set \(S\) using VGG-16 and generate the corresponding feature matrix \(F=(f_{ij})\in R^{n\times m}\) where \(n\) is the number of input images in \(S\), and \(m\) is the number of features. Each row of this matrix represents the feature vector of an image, and each column \((F_{j})\) represents a feature. Next, we normalize the matrix as a pre-processing step to eliminate the dominance effect of features with large value ranges and to make the computation of the selected diversity metrics more scalable [16]. We apply the _Min-Max normalization_ per feature, and transform the maximum and minimum values of that feature to one and zero, respectively. For every feature \(F_{j}\) in the feature matrix \(F\) where \(F_{j}(i)\) is the value of feature number \(j\) for the \(i^{th}\) input image in \(S\), the normalized feature \(F_{j}^{\prime}\) is calculated as follows:
\[F_{j}^{\prime}(i)=\frac{F_{j}(i)-min(F_{j})}{max(F_{j})-min(F_{j})} \tag{3}\]
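A hedged sketch of this step using the Keras implementation of VGG-16 (images are assumed resized to 224x224 RGB); the guard against constant features is our own addition:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')

def feature_matrix(images):
    """images: float array of shape (n, 224, 224, 3)."""
    feats = extractor.predict(preprocess_input(images.astype('float32')))
    lo, hi = feats.min(axis=0), feats.max(axis=0)
    return (feats - lo) / np.where(hi > lo, hi - lo, 1.0)  # Eq. (3), guarding /0
```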
**Computation of geometric diversity scores.** After extracting the feature vectors, we calculate the geometric diversity of the test inputs as our second fitness function, with the goal of selecting test input sets that are as diverse as possible. Given a dataset \(\mathbb{D}\), the normalized feature matrix \(F^{\prime}\) where \(F_{S}^{\prime}\) represents the feature vectors of a subset \(S\subseteq\mathbb{D}\), the geometric diversity of \(S\) is defined as:
\[GD(S)=det(F_{S}^{\prime}*F_{S}^{\prime T}) \tag{4}\]
which corresponds to the squared volume of the parallelepiped spanned by the rows of \(F_{S}^{\prime}\), since they correspond to vectors in the feature space. The larger the volume, the more diverse \(S\) is in the feature space, as depicted in Figure 1.
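A sketch of Eq. (4), building on `feature_matrix` above. Since the determinant of the Gram matrix easily under- or overflows for large subsets, we compute its logarithm instead, a monotone transformation that preserves the ranking of subsets; the small ridge term is our own numerical safeguard, not part of the paper:

```python
import numpy as np

def geometric_diversity(feats, subset, eps=1e-8):
    f = np.asarray(feats)[list(subset)]           # normalised feature rows of S
    gram = f @ f.T + eps * np.eye(len(f))         # F'_S F'_S^T of Eq. (4)
    sign, logdet = np.linalg.slogdet(gram)
    return logdet                                 # log det: same ranking, stable
```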
#### 2.2.3. **Multi-Objective Search**
We need to maximize in our search the two aforementioned fitness functions to increase the fault-revealing capability of the selected test input set. This is therefore a multi-objective search problem that can be formalized as follows:
\[\underset{S\subset\mathbb{D}}{max}\ Fitness(S)=(Gini(S),GD(S)) \tag{5}\]
where \(\mathbb{D}\) is the test dataset, \(S\) is a subset of test inputs of size \(\beta\), function \(\mathit{Fitness}:S\rightarrow\mathbf{R}^{2}\) consists of two real-value objective functions \((Gini(S),GD(S))\), and \(\mathbf{R}^{2}\) is the objective space of our optimization problem.
### Genetic Operators
We describe below the two genetic operators in our test selection approach. The first operator is crossover, which generates new offspring by cutting and joining high-fitness parents. The second operator is mutation which introduces small changes in individuals by mutating specific genes to (1) make the search more exploratory and (2) thus attempt to increase the uncertainty score and the diversity among individuals in the population. We provide below a
Figure 1. Illustration of the geometric diversity metric [16]
detailed description of how we customized these genetic operators to fit our test selection problem.
#### 2.3.1. **Crossover**
The crossover operator takes as input two parents (i.e., two subsets) and generates new offspring by slicing and joining the different parts of the selected parents. Let \(S_{1}\) and \(S_{2}\) be the two selected parents for crossover. We give in the following an example of selected parents:
\[S_{1}=\{\mathit{Input}_{1},\mathit{Input}_{2},\mathit{Input}_{3},\cdots,\mathit{Input}_{i}^{S_{1}},\cdots,\mathit{Input}_{4},\mathit{Input}_{5}\}\]
\[S_{2}=\{\mathit{Input}_{6},\mathit{Input}_{7},\mathit{Input}_{8},\cdots,\mathit{Input}_{i}^{S_{2}},\cdots,\mathit{Input}_{9},\mathit{Input}_{10}\}\]
where \(\mathit{Input}_{i}^{S_{1}}\) and \(\mathit{Input}_{i}^{S_{2}}\) correspond to the \(i^{th}\) input in \(S_{1}\) and \(S_{2}\), respectively. Before applying the crossover operator, we start by sorting the inputs forming each parent according to their _Gini_ scores. Inputs with higher _Gini_ scores are placed at the beginning of each corresponding parent, as in the example below.
\[S_{1}=\{\mathit{Input}_{3},\mathit{Input}_{1},\mathit{Input}_{5},\cdots,\mathit{Input}_{i}^{S_{1}},\cdots,\mathit{Input}_{4},\mathit{Input}_{2}\}\]
\[S_{2}=\{\mathit{Input}_{7},\mathit{Input}_{10},\mathit{Input}_{6},\cdots,\mathit{Input}_{i}^{S_{2}},\cdots,\mathit{Input}_{9},\mathit{Input}_{8}\}\]
Such reordering will help with the creation of potential high-fitness offspring with high uncertainty scores as we will explain in the following. After such sorting, we randomly select a crossover point using the uniform distribution. To form the first offspring, we slice and join the first parts of each parent based on the crossover point. Such offspring includes inputs with the highest _Gini_ scores thanks to sorting. Finally, the second offspring is formed by joining the remaining parts of the selected parents. Since this offspring has inputs with the lowest _Gini_ scores from both parents, it will be potentially discarded later by the selection operator or improved by the mutation operator as we will detail in the next section. Assuming that the crossover point is at position \(i\) in the example above, the generated offspring would be:
\[\mathit{Offspring}_{1}=\{\mathit{Input}_{3},\mathit{Input}_{1},\cdots,\mathit{Input}_{i}^{S_{1}},\mathit{Input}_{7},\mathit{Input}_{10},\cdots,\mathit{Input}_{\beta-i}^{S_{2}}\}\]
\[\mathit{Offspring}_{2}=\{\mathit{Input}_{i+1}^{S_{1}},\cdots,\mathit{Input}_{4},\mathit{Input}_{2},\mathit{Input}_{\beta-i+1}^{S_{2}},\cdots,\mathit{Input}_{9},\mathit{Input}_{8}\}\]
However, applying the crossover on the selected parents may lead to redundant inputs in the created offspring. Since one of our search goals is to maximize the diversity of the selected subsets, we remove redundant inputs (and therefore increase the diversity of the offspring) by replacing them with random inputs that are not present in the created offspring.
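A sketch of this customised crossover, assuming `gini_scores` is indexable by input id, `pool` is the set of all candidate ids, and `rng` is a `numpy` random generator; the variable names and the duplicate-repair details are our own:

```python
import numpy as np

def crossover(s1, s2, gini_scores, pool, rng):
    beta = len(s1)
    s1 = sorted(s1, key=lambda i: -gini_scores[i])   # highest Gini first
    s2 = sorted(s2, key=lambda i: -gini_scores[i])
    cut = int(rng.integers(1, beta))                 # uniform crossover point
    o1 = s1[:cut] + s2[:beta - cut]                  # joins the high-Gini heads
    o2 = s1[cut:] + s2[beta - cut:]                  # collects the low-Gini tails
    return replace_duplicates(o1, pool, rng), replace_duplicates(o2, pool, rng)

def replace_duplicates(offspring, pool, rng):
    """Swap duplicate genes for random inputs not yet in the offspring."""
    fresh = rng.permutation(list(set(pool) - set(offspring))).tolist()
    seen, out = set(), []
    for g in offspring:
        out.append(g if g not in seen else fresh.pop())
        seen.add(out[-1])
    return out
```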
#### 2.3.2. **Mutation**
After completing the crossover, the offspring are selected for mutation to randomly change some genes that have (1) a low _Gini_ score, and (2) a low contribution to the diversity of the selected offspring. Therefore, the mutation operator is considered in our search approach as a corrective operator with the goal of improving the offspring in terms of both fitness functions. As an example, let us assume that \(S_{m}\) is the test input set to mutate. We select 2% of the inputs from \(S_{m}\) that have the lowest _Gini_ score. We mutate half of these inputs that have the least contribution to the diversity of \(S_{m}\). Consequently, only 1% of its genes are mutated. To measure the contribution of an \(\mathit{input}_{i}\) to increasing the diversity in \(S_{m}\), we measure the difference \(\mathit{GD}_{\mathit{diff}}(S_{m},i)\) between \(\mathit{GD}(S_{m})\) and \(\mathit{GD}(S_{m}\setminus\{\mathit{input}_{i}\})\). The lower the difference, the more similar the input compared to the other ones in \(S_{m}\). Inputs with low differences should therefore be mutated to accelerate our search process.
An example of offspring generated through mutation is provided below. \(S_{m}\) is the original offspring and \(S_{m}^{\prime}\) is the mutated one. Suppose that \(S_{m}\) has 10 genes. We mutate \(\mathit{Input}_{1}\) since it has a low _Gini_ score and a low contribution to the diversity of the selected offspring.
\[S_{m}:\quad\mathit{Input}_{3}\quad\mathit{Input}_{1}\quad\mathit{Input}_{5}\quad\cdots\quad\mathit{Input}_{7}\quad\mathit{Input}_{10}\]

In the mutated offspring \(S_{m}^{\prime}\), \(\mathit{Input}_{1}\), which has both a low _Gini_ score and the lowest diversity contribution \(\mathit{GD}_{\mathit{diff}}(S_{m},1)\), is replaced by a randomly selected input that is not already present in \(S_{m}\).
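A sketch of the mutation operator, reusing `geometric_diversity` from above; the leave-one-out recomputation of GD follows the \(\mathit{GD}_{\mathit{diff}}\) definition, while the replacement strategy for the mutated genes is our own reading of the text:

```python
def mutate(s, gini_scores, feats, pool, rng, frac=0.02):
    k = max(2, int(frac * len(s)))
    low_gini = sorted(s, key=lambda i: gini_scores[i])[:k]       # 2% lowest Gini
    base = geometric_diversity(feats, s)
    gd_diff = {i: base - geometric_diversity(feats, [j for j in s if j != i])
               for i in low_gini}                                # contribution to GD
    victims = set(sorted(gd_diff, key=gd_diff.get)[:k // 2])     # least diverse half
    fresh = rng.permutation(list(set(pool) - set(s))).tolist()
    return [g if g not in victims else fresh.pop() for g in s]
```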
Although the recommended mutation rate in the literature is proportional to the length of the individual (Srivastava et al., 2016; Wang et al., 2017; Wang et al., 2018), we have not used such a rate since we used mutation for a different purpose than just exploration, i.e., to help increase both uncertainty and diversity scores. According to our preliminary experiments (which we do not include in this paper), smaller recommended mutation rates led to poorer search results. The population size is set to 700 individuals. Finally, the stopping criterion is set to 300 generations. We should note that the number of generations was determined through empirical evaluation. This was done by monitoring the evolution of fitness functions over multiple generations using various datasets and models. The maximum number of generations was carefully selected to ensure the convergence of the search process. In this paper, we also modified the traditional NSGA-II algorithm by customizing its fundamental operators, as described above. Specifically, we introduced a novel crossover function and mutation operator. Preliminary empirical results showed that the modified NSGA-II outperformed the traditional version, as evidenced by higher Gini and diversity scores, and a faster convergence rate. Finally, we used Google Colab and the Pymoo library (Pymoo, 2017) to implement our genetic search.
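The wiring of the two objectives into pymoo's NSGA-II can be sketched as follows, assuming `probs` and `feats` and the fitness functions from the sketches above. pymoo minimises, so both fitness values are negated; the customised sampling, crossover, and mutation operators of Section 2.3 would be plugged in through pymoo's operator classes and are omitted here, as is a proper uniqueness constraint on the genes:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class TestSelection(ElementwiseProblem):
    def __init__(self, probs, feats, beta):
        super().__init__(n_var=beta, n_obj=2, xl=0, xu=len(probs) - 1)
        self.probs, self.feats = probs, feats

    def _evaluate(self, x, out, *args, **kwargs):
        s = np.unique(np.round(x).astype(int))            # crude repair of duplicates
        out["F"] = [-subset_gini(self.probs, s),          # maximise Eq. (2)
                    -geometric_diversity(self.feats, s)]  # maximise Eq. (4)

res = minimize(TestSelection(probs, feats, beta=100),
               NSGA2(pop_size=700), ('n_gen', 300), seed=1, verbose=False)
```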
## 3. Empirical Evaluation
This section describes the empirical evaluation of _DeepGD_, including the research questions we address, the datasets and DNN models on which we perform our assessments, our experiments, and results.
### Research Questions
Our empirical evaluation is designed to answer the following research questions.
**RQ1. Do we find more faults than existing test selection approaches with the same testing budget?** Similar to traditional software testing, selecting a subset of test inputs with high fault-revealing ability is highly important in DNN testing, as it should increase the effectiveness of the testing process while reducing input labeling costs. Consequently, we aim in this research question to compare the effectiveness of our approach (_DeepGD_) with existing baselines in terms of their ability to reveal faults in DNNs, while considering the same testing budget. Identifying the source of failures in traditional software systems is relatively straightforward due to the clear and explicit decision logic in the code. However, in DNNs, this task is more challenging as the complexity and non-linearity of the decision-making process make it difficult to determine the cause of the failure. Hence, many papers rely on mispredictions (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018) for test selection evaluation. However, similar to failures in traditional software systems, many mispredicted inputs can be due to the same faults in the DNN model and are therefore redundant (Wang et al., 2018; Wang et al., 2018). When selecting inputs on a limited budget, we should therefore avoid similar or redundant mispredictions as they do not help reveal additional root causes or faults in DNN models. To accurately answer this research question, we thus rely on a clustering-based fault estimation approach (Wang et al., 2018) that we describe in section 3.2, to investigate the effectiveness of the test selection approaches in detecting faults for a fixed testing budget.
**RQ2. Do we more effectively guide the retraining of DNN models with our selected inputs than with baselines?** Retraining DNN models with unseen inputs carefully selected based on DNN testing results is expected to enhance the model's performance compared to training on the original training set alone. More specifically, it is highly recommended to retrain DNNs with inputs that have the potential to lead to mispredictions (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Consequently, we aim in this research question to investigate the effectiveness of _DeepGD_ in guiding the retraining of DNN models by selecting inputs that will be labeled and used to improve the model through retraining. We will not only measure the accuracy improvement resulting from retraining but also analyze it considering the maximum improvement possible based on the available data for retraining.
### Estimating Faults in DNNs
Before addressing our research questions, one essential issue is how to count faults in DNN models. The motivation for counting faults rather than mispredictions, when evaluating selection techniques, stems from our objective of detecting different root causes for the latter. According to Chan et al. (Chan et al., 2017) for traditional software testing, failure-causing inputs tend to be dense and close together. The same insight applies to DNN model testing, since similar mispredicted inputs tend to be due to the same fault (Wang et al., 2018; Wang et al., 2018). Our goal is to devise a test selection method that is not only capable of detecting more mispredicted inputs but also diverse ones in terms of root causes.
We estimate faults in a DNN by clustering mispredicted inputs using the approach of Aghababaeyan _et al._ (Aghababaeyan et al., 2018). They proposed a clustering-based fault estimation approach consisting of four main steps: feature extraction, dimensionality reduction, clustering, and evaluation. The first step relies on VGG-16 to extract the feature matrix of the mispredicted inputs (Wang et al., 2018). Then, two extra features, the actual and the mispredicted class of each input, are added to the feature matrix. This information about the misprediction behavior of the model under test helps build better clusters that reflect common causes of mispredictions.
In the next step, dimensionality reduction techniques boost the performance of clustering given the high dimensional feature space. HDBSCAN (Dong et al., 2016) is then applied to cluster mispredicted inputs based on the resulting features. Final clusters are investigated with SOTA clustering evaluation metrics and through manual analysis. Empirical results show that (1) inputs in the same cluster tend to be mispredicted due to the same fault, and (2) inputs belonging to different clusters are mispredicted because of distinct faults (Wang et al., 2018).
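A hedged sketch of this pipeline; the choice of UMAP for the dimensionality-reduction step and all parameter values are our assumptions, not necessarily those of the approach above:

```python
import numpy as np
import umap      # umap-learn
import hdbscan

def estimate_faults(misp_feats, y_true, y_pred):
    """Cluster mispredicted inputs; each non-noise cluster ~ one distinct fault."""
    x = np.column_stack([misp_feats, y_true, y_pred])    # add misprediction behaviour
    emb = umap.UMAP(n_components=10).fit_transform(x)    # reduce dimensionality
    return hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(emb)
```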
### Subject Datasets and Models
Table 1 shows the combinations of datasets and models that we have used in our experiments. We should note that we have considered the most used combinations of datasets and models in the literature. We consider four widely used image recognition datasets which are Cifar-10 (Cifar-10), MNIST (Cifar-10), Fashion-MNIST (Cifar-10) and SVHN (Dong et al., 2016). We use these datasets with five SOTA DNN models: LeNet1, LeNet4, LeNet5, 12 layers Convolutional Neural Network (12-layer ConvNet) and ResNet20.
We have selected datasets and models that are widely used by SOTA test selection approaches (Beng et al., 2016; Wang et al., 2018; Wang et al., 2018). Moreover, each input
in the provided datasets has proper labeling. Finally, because they provide a variety of distinct inputs (in terms of classes and domain concepts) and different models (in terms of internal architecture), these datasets and models are regarded as a good benchmark for observing key trends in DNN test selection.
### Baseline Approaches
We compare _DeepGD_ with two categories of baseline approaches: (1) four white-box test selection approaches including three well-known coverage-based approaches (Beng et al., 2015), along with Likelihood-based Surprise Adequacy (LSA) and Density-based Surprise Adequacy (DSA) (Dai et al., 2017), and (2) four SOTA black-box prioritization approaches including Maximum Probability (MaxP) (Krause et al., 2017), DeepGini (Li et al., 2017), Adaptive Test Selection (ATS) (Krause et al., 2017) and Random Selection (RS). We have selected the most prominent and recent baselines that can be applied and replicated on our models and datasets.
**A- White-box test selection baselines Neuron Coverage (NC).** It is the first coverage metric that has been proposed in the literature to test DNN models (Beng et al., 2015). It is defined as the ratio of neurons activated by a test input to the total number of neurons in the DNN model. A neuron is activated when its activation value is greater than a predefined threshold.
**Neuron Boundary Coverage (NBC).** This coverage metric measures the ratio of corner case regions that have been covered by test input(s). Corner case regions are defined as the activation values that are below or higher than the activation ranges observed during the training phase of the DNN model under test.
**Strong Neuron Activation Coverage (SNAC).** Similar to the NBC metric, SNAC measures how many upper corner cases have been covered by test input(s). Upper corner cases are defined as neuron activation values that are above the activation ranges observed during the training phase of the DNN model under test.
**ISA and DSA.** These two metrics have been proposed by Kim _et al._(Kim et al., 2017) and are based on the analysis of how surprising test inputs are with respect to the training dataset. LSA uses Kernel Density Estimation (KDE) (KDE, 2017) to estimate the likelihood of seeing a test input during the training phase. According to Kim _et al._(Kim et al., 2017), test inputs with higher LSA scores are preferred since they are closer to the classification boundaries. Thus it could be considered a priority score for DNN test selection. DSA is an alternative to LSA that uses the distance between the activation traces (Kim et al., 2017) of new test inputs and the activation traces observed during training.
**B- Black-box test selection baselines DeepGini.** It is a test selection approach that prioritizes test inputs with higher uncertainty scores (Li et al., 2017). It relies on the _Gini_ metric (Section 2.2.1) to estimate the probability of misclassifying a test input.
**MaxP.** It relies on the maximal prediction probability of the classification task to estimate the prediction confidence of the DNN model for a given input. Such method prioritizes inputs with lower confidence. The maximum probability score of a test input \(x\) is defined as \(MaxP(x)=1-max_{i=1}^{m}P_{x_{i}}\) where \(P_{x_{1}\leq i\leq m}\) is the probability that input \(x\) belongs to class \(C_{i}\) and \(m\) is the total number of classes (Krause et al., 2017). Intuitively, higher MaxP scores are more likely to lead to mispredictions (Krause et al., 2017).
**ATS.** It is a recent test selection method proposed by Gao _et al._(Krause et al., 2017). The selection is guided by a fitness function based on which test inputs are incrementally added to the final test set. The fitness function measures the difference between a test input and the currently selected test set based on computing a fault pattern coverage score. This score is obtained through analyzing the diversity of the output probability vectors of the DNN under test. They select test inputs with different fault patterns and higher uncertainty scores.
**Random Selection (RS)** It is the most basic and simplest test selection method in the literature. RS consists in randomly selecting (without replacement) \(\beta\) inputs from the test dataset. Each test input has the same probability of being selected.
### Evaluation and Results
We will describe in this section our experimental evaluation and present in detail the obtained results.
#### 3.5.1. **RQ1. Do we find more faults than existing test selection approaches with the same testing budget?**
To investigate the effectiveness of test selection approaches in a DNN, existing baselines usually compare the misprediction detection rate (Krause et al., 2017; Li et al., 2017). Although the number of mispredicted inputs is a useful metric in some contexts, it is misleading for test selection in DNN-based systems since many mispredicted inputs may be redundant and caused by the same fault or root cause in the DNN model (Krause et al., 2017). Therefore, we compare the effectiveness of _DeepGD_ with the existing baselines based on the fault detection rate (_FDR_) (Krause et al., 2017). For each subject and for different sizes of the test input set \(\beta\in\{100,300,500\}\), we report the fault detection rate for each approach, calculated as follows:
\[FDR(S)=\frac{|F_{s}|}{min(|S|,|F|)} \tag{6}\]
where \(S\) is the selected test input set, \(|F_{s}|\) is the number of faults in \(S\), \(|S|\) is the test input set size, and \(|F|\) is the total number of faults in the dataset.
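The metric is a one-liner; the worked example in the comment uses the Cifar-10/12-layer ConvNet numbers from Tables 1 and 2:

```python
def fault_detection_rate(n_faults_in_subset, subset_size, total_faults):
    return n_faults_in_subset / min(subset_size, total_faults)   # Eq. (6)

# Cifar-10 / 12-layer ConvNet: 59 distinct faults revealed by 100 selected inputs,
# out of 187 faults in total (Table 1): 59 / min(100, 187) = 0.59, i.e. 59%.
```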
Detailed results of the fault detection rate for six subjects, resulting from the combination of datasets and DNN models, for two subset sizes \(\beta=100\) and \(\beta=300\) are shown in Table 2 and Table 3 respectively. Due to space limitations, we only present results for two subset sizes. The results follow the same trend and the final conclusion is consistent for larger test subset sizes. Because of randomness in _DeepGD_, ATS, and random selection, we re-executed each of them five times and reported the corresponding average fault detection rates. Our results show that all of the test selection approaches guided by coverage metrics are ineffective in detecting faults. Even after repeating the experiment with various parameters as suggested in their original papers, the results are similar. The
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Dataset** & **DNN Model** & **Accuracy** & **\# Faults** \\ \hline \hline MNIST & LeNet5 & 87.85\% & 85 \\ & LeNet1 & 84.5\% & 137 \\ \hline Fashion-MNIST & LeNet4 & 88\% & 141 \\ \hline Cifar-10 & 12-layer ConvNet & 82.93\% & 187 \\ & ResNet20 & 86\% & 177 \\ \hline SVHN & LeNet5 & 88\% & 147 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Datasets and models used for evaluation
fault detection rate of NC, for example, was between 4% and 25% for subset sizes of 100. SNAC and NBC also showed poor fault detection rates, which vary between 11% and 19% for the same subset size.
Compared with LSA and DSA, both _DeepGD_ and other black-box prioritization selection methods showed a much higher fault detection rate for all subjects. Overall, it can be concluded that black-box approaches are more effective in detecting faults than white-box approaches. _DeepGini_ and MaxP fare the same as _DeepGD_ for two different subjects with a test subset size of 100. However, for other subjects, _DeepGD_ achieves a higher fault detection rate than these two approaches. For larger test subset sizes (\(\beta=300\)), _DeepGD_ performs better than other black-box baselines in all subjects, with a fault detection rate between 53% and 64%.
Note that the second best approach is not consistently the same across subjects and sizes, therefore strengthening our conclusion that _DeepGD_ is the only solution we can confidently recommend regardless of the model, dataset, and test set size. For example, with \(\beta=100\), compared to black-box SOTA baselines, _DeepGD_ is, on average, 2 pp better (with a maximum of 6 pp across all models and datasets) than the second-best black-box alternative and 7 pp better (with a maximum of 13 pp across all models and datasets) than the fourth-best black-box alternative (excluding random selection which consistently showed poor performance). Since we cannot predict beforehand how a given alternative will fare compared to the others for a dataset and model, this implies that the effect of selecting another technique than _DeepGD_ is potentially significant as we may end up using the worst alternative. For example, if we rely on the inputs selected by _DeepGini_ to test LeNet1 on the MNIST dataset (\(\beta=300\)), we will only reach a fault detection rate of 44% instead of 55% with _DeepGD_. Further, we also report that _DeepGD_ is on average 15 pp better than the best white-box test selection approach and 40 pp better than the worst white-box alternative. We performed a statistical analysis using Wilcoxon signed-rank tests (Srivastava et al., 2014), with a significance level of 0.05, to investigate whether _DeepGD_ significantly outperforms each SOTA baseline in terms of fault-revealing power across all subjects and subset sizes. We found that all p-values are less than 0.05, indicating that _DeepGD_ significantly surpasses SOTA baselines in finding more faults in DNNs.
**Answer to RQ1:**_DeepGD_ outperforms both white-box and black-box test selection approaches for DNNs, in terms of detecting distinct faults given the same testing budget. Further, the second-best black-box approach after _DeepGD_ is not consistently the same.
#### 3.5.2. **RQ2. Do we more effectively guide the retraining of DNN models with our selected inputs than with baselines?**
Having examined the effectiveness of _DeepGD_ and other baselines in selecting test input sets with high fault-revealing power, we next focus on the extent to which the test selection approaches can help select data to effectively retrain the DNN models under test. We only consider black-box test selection approaches in this research question, as they showed much better performance than white-box test selection baselines. We consider the same models and datasets as in the previous experiment. We augment the original training dataset with the test inputs selected in RQ1 by _DeepGD_, ATS, DeepGini, and MaxP, respectively, to retrain the DNN model. We measure the accuracy of the retrained model on both the whole test dataset and a newly generated dataset that is obtained by applying five realistic image transformations to the original test dataset. The latter allows for a fairer and more complete comparison between our proposed method and the other baseline approaches since none of the inputs in the generated dataset were used for retraining the model. We used the generated datasets of Gao _et al._(Gao et al., 2017), where various
\begin{table}
\begin{tabular}{|p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt}|} \hline \multirow{2}{*}{Data} & \multirow{2}{*}{Model} & \multicolumn{8}{c|}{White-box} & \multicolumn{8}{c|}{Black-box} \\ \cline{3-14} & & NC & & \multicolumn{2}{c|}{NBC} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{SNAC} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ & & 0 & 0.75 & 0 & 0.5 & 1 & 0 & 0.5 & 1 & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ \hline \hline \multirow{2}{*}{Cifar-10} & 12 ConvNet & 13\% & 21\% & 13\% & 12\% & 12\% & 12\% & 11\% & 13\% & 31\% & 35\% & 14\% & 55\% & 54\% & 53\% & **59\%** \\ & ResNet20 & 10\% & 10\% & 11\% & 17\% & 16\% & 11\% & 17\% & 16\% & 42\% & 37\% & 11\% & 52\% & **56\%** & 50\% & **56\%** \\ \hline \multirow{2}{*}{MNIST} & LeNet1 & 25\% & 15\% & 16\% & 19\% & 12\% & 16\% & 18\% & 12\% & 31\% & 23\% & 12\% & **41\%** & 28\% & 40\% & **41\%** \\ & LeNet5 & 16\% & 13\% & 13\% & 17\% & 16\% & 14\% & 19\% & 16\% & 29\% & 33\% & 13\% & 35\% & 34\% & 36\% & **37\%** \\ \hline \multirow{2}{*}{Fashion} & LeNet4 & 5\% & 18\% & 14\% & 15\% & 14\% & 14\% & 15\% & 14\% & 17\% & 34\% & 10\% & **39\%** & **39\%** & 32\% & **39\%** \\ \hline \multirow{2}{*}{SVHN} & LeNet5 & 9\% & 4\% & 11\% & 12\% & 11\% & 11\% & 12\% & 11\% & 13\% & 19\% & 11\% & 43\% & 43\% & **44\%** & **50\%** \\ \hline \end{tabular}
\end{table}
Table 2. The fault detection rate for each subject with test subset size \(\beta=100\)
\begin{table}
\begin{tabular}{|p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt} p{11.4pt}|p{11.4pt} p{11.4pt} p{11.4pt}|} \hline \multirow{2}{*}{Data} & \multirow{2}{*}{Model} & \multicolumn{8}{c|}{White-box} & \multicolumn{8}{c|}{Black-box} \\ \cline{3-14} & & NC & & \multicolumn{2}{c|}{NBC} & \multicolumn{2}{c|}{SNAC} & \multicolumn{2}{c|}{} \\ & & 0 & 0.75 & 0 & 0.5 & 1 & 0 & 0.5 & 1 & \multicolumn{2}{c|}{} \\ \hline \hline \multirow{2}{*}{Cifar-10} & 12 ConvNet & 19\% & 28\% & 21\% & 22\% & 23\% & 22\% & 22\% & 22\% & 35\% & 37\% & 22\% & 54\% & 53\% & 50\% & **56\%** \\ & ResNet20 & 15\% & 17\% & 17\% & 17\% & 19\% & 17\% & 17\% & 19\% & 49\% & 45\% & 19\% & 56\% & 58\% & 52\% & **59\%** \\ \hline \multirow{2}{*}{MNIST} & LeNet1 & 25\% & 25\% & 33\% & 28\% & 27\% & 36\% & 29\% & 27\% & 40\% & 40\% & 25\% & 53\% & 44\% & 53\% & **55\%** \\ & LeNet5 & 34\% & 33\% & 32\% & 33\% & 30\% & 32\% & 30\% & 30\% & 54\% & 52\% & 24\% & 63\% & 59\% & 62\% & **64\%** \\ \hline \multirow{2}{*}{Fashion} & LeNet4 & 8\% & 25\% & 19\% & 23\% & 23\% & 19\% & 23\% & 23\% & 35\% & 45\% & 17\% & 51\% & 47\% & 49\% & **53\%** \\ \hline \multirow{2}{*}{SVHN} & LeNet5 & 11\% & 13\% & 17\% & 16\% & 18\% & 17\% & 16\% & 18\% & 19\% & 36\% & 17\% & 58\% & 61\% & 58\% & **62\%** \\ \hline \end{tabular}
\end{table}
Table 3. The fault detection rate for each subject with test subset size \(\beta=300\)
transformations were applied to the MNIST, Fashion-MNIST and Cifar-10 test datasets. However, since no generated inputs were included for the SVHN dataset, we generated new test inputs by applying the same transformations. These transformations include sheering, rotating, zooming, and changing the brightness and the blurriness of all images in each dataset. We should note that the size of the newly generated test dataset is five times larger than the original test dataset since we apply five image transformations on each original test input. For each black-box test selection approach, we measure the accuracy improvement of the retrained models on both the original test dataset and the generated one. We report the results in Table 4. We acknowledge that studying accuracy improvement of the retrained models on the generated datasets is more suitable in our context since it does not contain any of the inputs used for retraining. But we choose to also report improvements with the original test dataset to verify that the retraining process did improve model accuracy. Note that we only select a small part of the original test dataset to retrain the model (only 300 out of 10,000 to 26,000 inputs).
As described in Table 4, we found that _DeepGD_ provides better guidance for retraining DNN models than the other black-box test selection approaches. It consistently outperforms ATS, DeepGini, and MaxP in terms of accuracy improvements across all models and datasets. Similar to the previously reported results in RQ1, we found that the second-best approach for guiding retraining is not consistently the same across all subjects. For example, ATS was the second-best approach for retraining ResNet20 with Cifar-10, while it was the third-best approach for retraining LeNet5 with SVHN.
To further investigate the effectiveness of _DeepGD_ and the other black-box baselines in retraining DNN models, we also computed their _optimization effectiveness_ by accounting for the maximum possible accuracy improvement. We therefore retrain the model with the entire original test dataset and report the best accuracy that can be achieved by retraining. Then, for each test selection method, we calculate its optimization effectiveness, defined as the ratio of (1) the accuracy improvement when retraining the model with only 300 selected inputs from the original test dataset (Table 4) to (2) the accuracy improvement when retraining the model with the entire original test dataset. We also repeat the above experiment for the generated test dataset to once again get a fairer comparison of the test selection approaches and to obtain better generalizability for our results. More specifically, we retrain the original model by adding all generated inputs to the training dataset to achieve the highest accuracy possible. Then, we calculate the corresponding optimization effectiveness, which is the ratio of (1) the accuracy improvement when retraining the model with only 300 inputs from the original test dataset (Table 4) to (2) the accuracy improvement when retraining the model with the entire generated test dataset. We report the results in Table 5.
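Since optimization effectiveness is simply a ratio of accuracy gains, it can be computed as follows (the numbers in the example are illustrative, not taken from our tables):

```python
def optimization_effectiveness(acc_original, acc_subset, acc_full):
    """Ratio of the accuracy gain obtained by retraining with the selected
    subset to the gain obtained by retraining with the entire dataset."""
    return (acc_subset - acc_original) / (acc_full - acc_original)

# Illustrative values: 70% accuracy originally, 74% after retraining with
# 300 selected inputs, 80% after retraining with the full dataset -> 40%.
print(f"{optimization_effectiveness(0.70, 0.74, 0.80):.1%}")
```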
We found that _DeepGD_ consistently outperforms other black-box test selection approaches in terms of optimization effectiveness. Compared to black-box SOTA baselines, _DeepGD_ is, on average, 8.93 pp better (with a maximum of 20.11 pp) than the second-best black-box alternative and 11.49 pp better (with a maximum of 25.06 pp) than the worst black-box alternative. As for RQ1, it is worth noting that since the performance of alternatives is not consistent across datasets and models, selecting one of them may yield the worst results. We also report the average optimization effectiveness on the generated test datasets. Gini, MaxP, and ATS yield 30.35%, 31.07%, and 32.25% respectively, while _DeepGD_ achieves 38.03%. In other words, with only 300 test inputs from the original test dataset, selected by _DeepGD_ for retraining, we were able to reach 38.03% of the maximum achievable accuracy obtained by retraining with the entire generated test dataset. We should note that these results are expected since _DeepGD_ showed better performance than the other baselines in retraining the DNN models under test in our previous experiment. Similar to RQ1, we performed a statistical analysis using Wilcoxon signed-rank tests, with a significance level
\begin{table}
\begin{tabular}{|c|c|c c c c||c c c c|} \cline{3-10} \multicolumn{1}{c}{} & & \multicolumn{4}{c||}{Accuracy imp. on orig. test dataset} & \multicolumn{4}{c|}{Accuracy imp. on generated test dataset} \\ \hline Data & Model & MaxP & Gini & ATS & **DeepGD** & MaxP & Gini & ATS & **DeepGD** \\ \hline \hline \multirow{2}{*}{Cifar-10} & 12 ConvNet & 3.52\% & 2.09\% & 2.12\% & **3.53\%** & 5.35\% & 5.12\% & 4.62\% & **6.85\%** \\ \cline{2-10} & ResNet20 & 1.66\% & 0.74\% & 2.03\% & **2.50\%** & 4.12\% & 3.45\% & 5.97\% & **6.18\%** \\ \hline \multirow{2}{*}{MNIST} & LeNet1 & 10.39\% & 10.40\% & 10.40\% & **11.10\%** & 13.18\% & 13.24\% & 13.19\% & **14.76\%** \\ \cline{2-10} & LeNet5 & 7.94\% & 7.91\% & 7.92\% & **9.26\%** & 13.02\% & 12.92\% & 13.08\% & **15.82\%** \\ \hline Fashion & LeNet4 & 3.94\% & 3.94\% & 3.91\% & **4.17\%** & 7.33\% & 7.41\% & 7.62\% & **7.69\%** \\ \hline SVHN & LeNet5 & 0.96\% & 0.99\% & 0.93\% & **1.24\%** & 0.61\% & 0.54\% & 0.63\% & **1.26\%** \\ \hline \end{tabular}
\end{table}
Table 4. DNNs accuracy improvements after retraining with the selected test input sets.
\begin{table}
\begin{tabular}{|c|c|c|c c c c||c|c c c c|} \cline{3-12} \multicolumn{1}{c}{} & & \multicolumn{5}{c||}{Opt effectiveness on orig. test dataset (T)} & \multicolumn{5}{c|}{Opt effectiveness on generated dataset (G)} \\ \hline \multirow{2}{*}{Data} & \multirow{2}{*}{Model} & Max imp. & \multirow{2}{*}{MaxP} & Gini & ATS & \multirow{2}{*}{**DeepGD**} & Max imp. & \multirow{2}{*}{MaxP} & Gini & ATS & **DeepGD** \\ & & (100\% T) & & & & & (100\% G) & & & & \\ \hline \hline \multirow{2}{*}{Cifar-10} & 12 ConvNet & 16.36\% & 21.51\% & 12.78\% & 12.96\% & **21.58\%** & 28.98\% & 18.47\% & 17.65\% & 15.94\% & **23.65\%** \\ \cline{2-12} & ResNet20 & 9.99\% & 16.62\% & 7.41\% & 20.32\% & **25.03\%** & 23.33\% & 17.65\% & 14.80\% & 25.59\% & **26.47\%** \\ \hline \multirow{2}{*}{MNIST} & LeNet1 & 11.01\% & 93.60\% & 93.72\% & 93.69\% & **100\%** & 23.80\% & 55.39\% & 55.62\% & 55.43\% & **62.04\%** \\ \cline{2-12} & LeNet5 & 9.52\% & 83.40\% & 83.08\% & 83.19\% & **97.27\%** & 22.78\% & 57.15\% & 56.72\% & 57.41\% & **69.46\%** \\ \hline Fashion & LeNet4 & 12.00\% & 32.83\% & 32.83\% & 32.58\% & **34.75\%** & 23.72\% & 30.91\% & 31.23\% & 32.10\% & **32.41\%** \\ \hline SVHN & LeNet5 & 1.25\% & 77.01\% & 79.79\% & 74.84\% & **99.90\%** & 8.92\% & 6.87\% & 6.06\% & 7.04\% & **14.15\%** \\ \hline \hline \multicolumn{2}{|c|}{Average} & 10.04\% & 54.16\% & 51.60\% & 52.93\% & **63.09\%** & 21.92\% & 31.07\% & 30.35\% & 32.25\% & **38.03\%** \\ \hline \end{tabular}
\end{table}
Table 5. Optimization effectiveness compared to retraining with all candidate tests
of 0.05, to investigate whether _DeepGD_ significantly outperforms the selected black-box test selection approaches in terms of optimization effectiveness across all subjects. We found all p-values to be less than 0.05, indicating that _DeepGD_ is significantly better than the selected baselines in retraining DNN models. In other words, results show that _DeepGD_ can select a small, more informative subset from a large unlabeled dataset to effectively retrain DNN models and minimize the labeling costs. Selecting diverse inputs with high uncertainty scores not only helps detect more faults in the DNN model, but also significantly improves the accuracy of the model through retraining.
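As a sketch of this statistical analysis, the following compares the per-subject optimization-effectiveness scores of _DeepGD_ and MaxP on the original test dataset (Table 5) with scipy's Wilcoxon signed-rank test; the one-sided alternative is an assumption matching the direction of the hypothesis.

```python
from scipy.stats import wilcoxon

# Optimization effectiveness (%) per subject, original test dataset (Table 5).
deepgd = [21.58, 25.03, 100.00, 97.27, 34.75, 99.90]
maxp = [21.51, 16.62, 93.60, 83.40, 32.83, 77.01]

# One-sided test: is DeepGD's effectiveness greater than the baseline's?
stat, p_value = wilcoxon(deepgd, maxp, alternative="greater")
print(f"statistic={stat}, p-value={p_value:.4f}")  # significant if p < 0.05
```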
**Answer to RQ2:**_DeepGD_ provides better guidance than black-box alternatives for retraining DNN models. It consistently and statistically outperforms other black-box test selection approaches in terms of accuracy improvement, in absolute terms and relatively to the maximum achievable improvement.
#### 3.5.3. **Discussions**
Based on our experimental results, we show that _DeepGD_ provides better guidance than existing baselines for selecting test inputs with high-fault revealing power. The second-best approaches for test selection and model retraining are not consistently the same across all models and datasets. This reinforces our conclusion that _DeepGD_ is the only solution that we can confidently recommend regardless of the model, dataset, and test set size. Indeed, selecting diverse test inputs with high uncertainty scores not only enables higher fault detection, but also provides more effective guidance for retraining DNN models. Although results are encouraging, and though this remains to be further investigated, we conjecture that _DeepGD_ is particularly useful when datasets are more redundant, a situation that is often observed in real world DNN testing scenarios, especially with massive generated datasets. Selecting diverse test input sets with _DeepGD_ is then expected to help reduce redundancy by selecting more informative inputs to be labeled in order to test and retrain DNN models. We should also note that the mean execution time of our test selection approach on Google Colab was five hours. This is practically acceptable since, in DNN testing, (1) the labeling cost of test inputs is far more expensive than the computation cost of the search, and (2) test selection is neither frequent nor a real-time task.
## 4. Threats to Validity
We discuss in this section the different threats to the validity of our study and describe how we mitigated them.
**Internal threats to validity** concern the causal relationship between the treatment and the outcome. Since _DeepGD_ is black-box and relies on extracting feature matrices to measure the diversity of the selected test input sets, an internal threat to validity might be caused by the poor quality representation of inputs. To mitigate this threat, we have relied on VGG-16, one of the most accurate feature extraction models in the literature. This model has been pre-trained on the very large ImageNet dataset that contains more than 14 million labeled images belonging to 22,000 categories. Moreover, _DeepGD_ relies on the specification of a few hyperparameters related to NSGA-II. This also applies to several white-box and black-box test selection approaches that we considered in our study. The configuration of the different hyperparameters in our work may induce additional threats. To mitigate them, we have relied on the NSGA-II hyperparameters recommended in the literature (Shi et al., 2017; Li et al., 2018; Li et al., 2019) except for the mutation rate, which was intentionally set higher and experimentally tuned since we customized the mutation operator to take into account both fitness functions, as described in section 2.3.2. We have also considered different configurations of the baselines-related hyperparameters according to their original papers. A last internal threat to validity would be related to randomness when selecting test input sets with _DeepGD_, ATS, and random selection. We addressed this issue by repeating such selection multiple times while considering different input set sizes and different datasets and models.
**Construct threats to validity** concern the relation between the theory and the observations made. A construct threat to validity might be due to inaccuracies in estimating DNN faults since detecting faults in DNNs is not as straightforward as in regular software. To mitigate this threat, we have relied on a SOTA fault estimation approach (Li et al., 2018) that has been thoroughly validated on several models and datasets. We have reused their publicly available approach to obtain accurate fault estimates. Nonetheless, relying on a such fault estimation approach is still far better than just considering mispredicted inputs that are redundant and due to the same root cause in the DNN model.
**External threats to validity** concern the generalizability of our study. We mitigate this threat by considering six different combinations of widely used and architecturally distinct models and datasets. We also considered many testing budgets in our experiments and compared our results with nine SOTA baselines for DNN test selection.
## 5. Related Work
In this section, we introduce existing work related to our proposed approach from two aspects: test selection and test diversity in the context of DNN models.
**Test Selection for DNNs.** Test selection approaches proposed for DNN models can be characterized as black-box or white-box, depending on their access requirements to the internals of the DNN model.
A few black-box test selection approaches for DNNs have been introduced in the literature. They generally rely on the uncertainty of model classifications. For example, Feng _et al._(Feng et al., 2019) proposed DeepGini, a test selection approach that prioritizes the selection of inputs with higher _Gini_ scores (Feng et al., 2019). They conjecture that if a DNN model is unsure about a classification and outputs similar probabilities for each class, it is more likely to mispredict the test input. Compared to random and coverage-based selection methods, DeepGini was shown to be more effective in uncovering mispredictions (Feng et al., 2019; Li et al., 2019). Similar to their work, we rely on _Gini_ scores as one of the fitness functions in our approach. However, we also consider the diversity of the selected test set and rely on a multi-objective genetic search to guide the search toward finding test inputs with high fault-revealing power. Li _et al._(Li et al., 2019) introduced Cross Entropy-based Sampling (CES) and Confidence-based Stratified Sampling
(CSS), for black-box DNN test selection. These metrics are used to select a small subset of test inputs that accurately reflect the accuracy of the whole test dataset. They show that compared to random sampling, their approach could achieve the same level of precision with about half of the labeled data. Their goal is clearly different from ours. While our focus is to prioritize the selection of test inputs with high fault-revealing capability, their goal is to minimize test sets. Also, though Arrieta (Arrieta et al., 2018) relied on NSGA-II and uncertainty scores for selecting metamorphic follow-up test cases, our goal with _DeepGD_ is also different. We use both diversity and DNN uncertainty to guide the search toward test inputs with high fault-revealing power, for a fixed budget.
Several white-box test selection approaches have been proposed as well. Such approaches generally rely on coverage (Beng et al., 2017; Chen et al., 2018; Li et al., 2019) or surprise metrics (Li et al., 2019) to select inputs that will be labeled and used for testing DNN models. For example, Pei _et al._(Pei et al., 2018) proposed Neuron Coverage (NC), which measures neurons activation rates in DNN models. Ma _et al._(Ma et al., 2019) proposed _DeepGauge_, a set of coverage metrics for DNN models that consider neurons activation ranges. Kim _et al._(Kim et al., 2019) proposed surprise coverage metrics which measure how surprising test sets are given the training set. The selection of test inputs for all these approaches is based on maximizing coverage or surprise scores. However, several studies (Liu et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), have shown that these white-box metrics are not always effective for guiding DNNs test selection. For example, Ma _et al._(Ma et al., 2019) performed an empirical study and compared the effectiveness of white-box metrics (coverage and surprise metrics) with black-box uncertainty in guiding test input selection. Results showed that the former have a weak or no correlation with classification accuracy while the latter had a medium to strong correlation. Uncertainty-based metrics not only outperform coverage-based metrics but also lead to faster improvements in retraining. In our work, as mentioned above, we also consider maximizing the uncertainty score of the test inputs as one of our two fitness objectives.
**Diversity in DNN Testing.** Many works have studied diversity-based test selection and generation for traditional software (Liu et al., 2019; Li et al., 2019; Li et al., 2019). The underlying assumption is that there is a strong correlation between test case diversity and fault-revealing power (Li et al., 2019). Their results confirmed this assumption and showed that diversity-based metrics are effective in revealing faults. Inspired by these encouraging results, researchers devised diversity-based approaches for DNN testing (Liu et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Zhao _et al._(Zhao et al., 2019) conducted an empirical study of SOTA test input selection approaches for DNNs (Li et al., 2019; Li et al., 2019; Li et al., 2019) and concluded that they have a negative impact on test diversity and suggest that more research is warranted on designing more effective test selection approaches that guarantee test diversity.
In a recent study, Gao _et al._(Gao et al., 2019) proposed the adaptive test selection method (ATS) for DNN models that use the differences between model outputs as a behavior diversity metric. Although ATS aims to cover more diverse faults, its test selection is guided only by model outputs. _DeepGD_, however, considers both the uncertainty of model output probabilities and input features' diversity. Moreover, our results show that _DeepGD_ outperforms ATS in test input selection for different combinations of models and datasets and for different test subset sizes. Aghababaeyan _et al._(Aghabaeyan et al., 2019) studied three black-box input diversity metrics for testing DNNs, including geometric diversity (Zhao et al., 2019), normalized compression (Li et al., 2019), and standard deviation. They investigated the capacity of these metrics to measure actual diversity in input sets and analyzed their fault-revealing power. Their experiments on image datasets showed that geometric diversity outperforms SOTA white-box coverage criteria in terms of fault detection and computational time. However, they did not study how the GD metric can be used in practice to guide the selection of test inputs for DNN models.
## 6. Conclusion
In this paper, we propose _DeepGD_, a multi-objective search-based test input selection approach for DNN models. Our motivation is to provide a black-box test selection mechanism that reduces labeling costs by effectively selecting unlabeled test inputs with high fault-revealing power. We rely on both diversity and uncertainty scores to guide the search toward test inputs that reveal diverse faults in DNNs. We conduct an extensive empirical study on six different subjects and compare _DeepGD_ with nine state-of-the-art test selection approaches. Our results show that _DeepGD_ statistically and consistently outperforms baseline approaches in terms of its ability to reveal faults in DNN models. Selecting diverse inputs with high uncertainty scores with _DeepGD_ not only helps detect more faults in the DNN model for a given test budget, but also significantly improves the accuracy of the model through retraining with an augmented training set. Our results also indicate that the second-best approach for testing or retraining is not consistent across all models and datasets, further supporting the choice of _DeepGD_. We aim to extend our work by studying the application of diversity and uncertainty metrics for other DNN testing purposes such as test minimization and generation. Finally, we need to investigate alternative ways to estimate faults in DNNs.
###### Acknowledgements.
This work was supported by a research grant from General Motors as well as the Canada Research Chair and Discovery Grant programs of the Natural Sciences and Engineering Research Council of Canada (NSERC).
|
2307.06540 | Convolutional Neural Networks for Sentiment Analysis on Weibo Data: A
Natural Language Processing Approach | This study addressed the complex task of sentiment analysis on a dataset of
119,988 original tweets from Weibo using a Convolutional Neural Network (CNN),
offering a new approach to Natural Language Processing (NLP). The data, sourced
from Baidu's PaddlePaddle AI platform, were meticulously preprocessed,
tokenized, and categorized based on sentiment labels. A CNN-based model was
utilized, leveraging word embeddings for feature extraction, and trained to
perform sentiment classification. The model achieved a macro-average F1-score
of approximately 0.73 on the test set, showing balanced performance across
positive, neutral, and negative sentiments. The findings underscore the
effectiveness of CNNs for sentiment analysis tasks, with implications for
practical applications in social media analysis, market research, and policy
studies. The complete experimental content and code have been made publicly
available on the Kaggle data platform for further research and development.
Future work may involve exploring different architectures, such as Recurrent
Neural Networks (RNN) or transformers, or using more complex pre-trained models
like BERT, to further improve the model's ability to understand linguistic
nuances and context. | Yufei Xie, Rodolfo C. Raga Jr | 2023-07-13T03:02:56Z | http://arxiv.org/abs/2307.06540v1 | Convolutional Neural Networks for Sentiment Analysis on Weibo Data: A Natural Language Processing Approach
###### Abstract
This study addressed the complex task of sentiment analysis on a dataset of 119,988 original tweets from Weibo using a Convolutional Neural Network (CNN), offering a new approach to Natural Language Processing (NLP). The data, sourced from Baidu's PaddlePaddle AI platform, were meticulously preprocessed, tokenized, and categorized based on sentiment labels. A CNN-based model was utilized, leveraging word embeddings for feature extraction, and trained to perform sentiment classification. The model achieved a macro-average F1-score of approximately 0.73 on the test set, showing balanced performance across positive, neutral, and negative sentiments. The findings underscore the effectiveness of CNNs for sentiment analysis tasks, with implications for practical applications in social media analysis, market research, and policy studies. The complete experimental content and code have been made publicly available on the Kaggle data platform for further research and development. Future work may involve exploring different architectures, such as Recurrent Neural Networks (RNN) or transformers, or using more complex pre-trained models like BERT, to further improve the model's ability to understand linguistic nuances and context.
Sentiment Analysis, Weibo Data, Natural Language Processing, Convolutional Neural Networks, Word Embeddings
## I Introduction
Sentiment analysis, also known as opinion mining, has become an indispensable tool in the age of digital media where public opinion can be swiftly gauged from a plethora of social media platforms. This computational study of people's emotions, attitudes, and opinions has far-reaching applications ranging from brand management and market research to policy-making and political forecasting.
One such fertile ground for sentiment analysis is Weibo, a popular microblogging platform in China that boasts over 500 million active users. This platform offers a wealth of user-generated content, encompassing a myriad of topics and, consequently, a spectrum of sentiments, making it an ideal data source for sentiment analysis.
However, the manual evaluation of sentiment is arduous and practically unscalable given the immense volume of data. This has necessitated the application of machine learning (ML) techniques for automatic sentiment analysis. Among various ML approaches, Convolutional Neural Networks (CNNs) have proven to be particularly effective. CNNs, originally designed for image processing, have demonstrated their prowess in Natural Language Processing (NLP) tasks such as sentiment analysis due to their ability to capture local dependencies in the input data, a key factor when working with text.
CNNs employ a hierarchical layer structure to learn increasingly complex features from raw input, enabling them to capture intricate linguistic structures that could be crucial for sentiment analysis. CNNs' ability to efficiently deal with high-dimensional data, coupled with their lower need for pre-processing compared to traditional NLP techniques, makes them an attractive choice for sentiment analysis on Weibo data.
In this paper, we delve into the application of CNNs for sentiment analysis on Weibo data, aiming to explore the efficacy of this approach in the realm of NLP tasks and assess its potential implications. The complete experimental content and code have been made publicly available on the Kaggle data platform for further research and development [1].
## II Literature Review
Sentiment analysis has been widely used to understand public attitudes and opinions in various domains, such as public opinion analysis, child abuse attitudes, policy sentiment analysis, and more. With the advent of machine learning and deep learning techniques, sentiment analysis has gained more precision and efficiency. This review aims to encapsulate the evolution of sentiment analysis approaches with a particular focus on those applied to Weibo data and highlight the potential of Convolutional Neural Networks (CNNs) in this field.
Li et al. (2022) proposed a sentiment analysis method that combines the bidirectional gated recurrent unit neural network (BiGRU) with the attention mechanism for Weibo topic sentiment analysis[2]. Their method achieved promising results, outperforming traditional neural network models in terms of accuracy. However, the authors acknowledged limitations in handling complex semantics like irony and lack of fine-grained sentiment analysis, indicating areas for future research.
Further, Lyu et al. (2020) explored public attitudes towards child abuse in mainland China by applying sentiment analysis to Weibo comments[3]. This study shed light on the emotional vocabulary and keywords related to resentment and vengefulness associated with child abuse. Nevertheless, this sentiment analysis was context-specific and did not extend to a broader range of topics.
Yang et al. (2019) demonstrated the use of extended vocabulary and CNNs for sentiment analysis of Weibo comment texts[4]. This research highlighted the efficiency of CNNs in handling large data sets and their high accuracy in sentiment classification.
Another work by Jia and Peng (2022) utilized a BiLSTM model with an attention mechanism for sentiment analysis regarding the Double Reduction Policy on Weibo[5]. Their method unveiled key themes related to the policy and successfully traced the online public sentiment trends.
Li et al. (2021) proposed a novel model combining BERT and deep learning for Weibo text sentiment analysis, demonstrating substantial improvements over similar models[6]. This research exemplified the integration of advanced NLP models with deep learning for sentiment analysis tasks.
Lastly, Chen (2015) emphasized the efficacy of CNNs for sentence classification, thus substantiating their potential for sentiment analysis[7]. The study underscored the superior performance of deep CNNs over traditional methods, highlighting their capability to capture intricate semantic features.
In summary, while different machine learning and deep learning techniques have shown success in Weibo sentiment analysis, CNNs have shown significant promise due to their hierarchical feature learning and efficient handling of high-dimensional data. However, the effectiveness of CNNs for sentiment analysis on a broader range of Weibo data still requires extensive exploration. This present study aims to address this gap by investigating the application of CNNs for sentiment analysis on Weibo data.
## III Problem Definition
Despite the substantial progress in sentiment analysis using machine learning and deep learning techniques, there remains a significant challenge in understanding and interpreting the sentiments in Weibo data. This challenge primarily stems from the nature of the Weibo platform and the Chinese language. As one of the most popular microblogging sites in China, Weibo contains a massive volume of user-generated content, which is predominantly in Chinese. The Chinese language, characterized by its rich set of homonyms, extensive use of idioms, and lack of space-separated words, poses unique obstacles to the task of sentiment analysis.
The primary problem that this study aims to address is the enhancement of sentiment analysis performance on Weibo data by leveraging the power of Convolutional Neural Networks (CNNs). Previous studies have highlighted the capability of CNNs in processing high-dimensional data and extracting intricate hierarchical features, which are especially relevant for sentiment analysis on Weibo data. The application of CNNs allows us to capture local dependencies in the data and construct meaningful, higher-level representations of input sentences, which could improve the accuracy of sentiment prediction.
However, the application of CNNs for sentiment analysis on Weibo data remains largely unexplored, and its effectiveness across a broad range of topics on Weibo is still uncertain. Therefore, this study is motivated by the following research question: Can CNNs effectively enhance the performance of sentiment analysis on Weibo data across various topics?
Our study intends to investigate the suitability and efficiency of CNNs for sentiment analysis on Weibo data. We hypothesize that, given their demonstrated strength in other areas of natural language processing, CNNs could significantly boost the performance of sentiment analysis on Weibo, thus providing more accurate and granular insights into public opinion as reflected on this platform.
## IV Methodology
This section details the methodology used in our experiment, which aimed to classify sentiments of Weibo posts into three categories: positive, neutral, and negative.
### _Data Collection_
The dataset for this study was obtained from Baidu's PaddlePaddle Artificial Intelligence platform, specifically the dataset titled "weibo_senti_100k". This dataset comprises 119,988 Weibo posts, each labeled as either positive (1) or negative (0) according to the sentiment conveyed.
The distribution of sentiment categories in the original dataset was analyzed and visualized using a bar plot. The analysis revealed an almost balanced distribution of positive and negative sentiments. Specifically, there were 59,995 instances of negative sentiments and 59,993 instances of positive sentiments.
By plotting these sentiment counts, we were able to visually confirm the equal distribution of sentiment categories. This equal distribution is crucial as it can potentially reduce bias in the machine learning model we intend to train.
Fig. 1: Number of sentiment categories in original dataset
### _Data Preprocessing_
The data preprocessing stage was crucial to ensure that the raw Weibo post data was converted into a format suitable for feeding into the Convolutional Neural Network (CNN) model.
The data preprocessing started with the removal of user mentions from the Weibo posts. This was done by removing any text that started with '@' and ended with a whitespace character. Following this, all punctuation was stripped from the Weibo posts.
Next, the data was tokenized using Jieba, a popular Chinese text segmentation tool. This converted the continuous text into discrete tokens, or words, which were later used as input for the CNN model.
Subsequently, a list of Chinese stop words was loaded, and these stop words were removed from the tokenized Weibo posts. Stop words are high-frequency words that often carry little meaningful information, such as 'the', 'and', 'is', etc., and are commonly removed in text preprocessing to decrease the dimensionality of the data and to focus on important words.
This preprocessing resulted in some empty reviews, which were then identified and removed from the dataset, resulting in a final count of 117,282 reviews.
The sentiment of the cleaned and tokenized Weibo posts was then determined using the SnowNLP library. A sentiment score between 0 (negative) and 1 (positive) was assigned to each post. These sentiment scores were then classified into three categories: negative (sentiment score less than 0.3), neutral (sentiment score between 0.3 and 0.7), and positive (sentiment score higher than 0.7).
The data was visualized again using a bar plot to understand the distribution of the sentiment categories after preprocessing. The plot revealed an imbalance in sentiment categories, with positive sentiments being the most common (72,909 instances), followed by negative sentiments (26,368 instances) and neutral sentiments (18,005 instances).
The processed data was then saved into a new CSV file for further use in the study.
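A minimal sketch of this preprocessing pipeline is given below; the regular expressions, the stop-word file name, and the joining of tokens before scoring are assumptions that approximate the steps described above.

```python
import re
import jieba
from snownlp import SnowNLP

with open("chinese_stopwords.txt", encoding="utf-8") as f:  # assumed file name
    stopwords = {line.strip() for line in f}

def preprocess(text):
    text = re.sub(r"@\S+\s?", "", text)   # remove user mentions
    text = re.sub(r"[^\w\s]", "", text)   # strip punctuation
    return [t for t in jieba.lcut(text) if t.strip() and t not in stopwords]

def sentiment_label(tokens):
    score = SnowNLP(" ".join(tokens)).sentiments  # score in [0, 1]
    if score < 0.3:
        return -1  # negative
    if score > 0.7:
        return 1   # positive
    return 0       # neutral
```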
### _Dataset Balancing_
Balancing the dataset is an essential step in training a machine learning model, particularly when dealing with multi-class classification tasks. In the case of this study, the original binary classification dataset was transformed into a three-class dataset during preprocessing, where positive sentiment was significantly more common than either neutral or negative sentiment. This imbalance could have resulted in a model biased towards predicting positive sentiment, leading to a decrease in performance.
To address this, the study employed an oversampling technique using the RandomOverSampler function from the imblearn library. This function works by randomly duplicating instances from the minority classes until all classes have the same number of instances. Prior to oversampling, the data was split into a training set (80% of the data) and a test set (20% of the data), using the train_test_split function from the sklearn library, to prevent data leakage and ensure a fair evaluation of the model's performance.
The oversampling was applied only to the training set, resulting in a balanced distribution of 14,434 instances for each sentiment category (negative, neutral, positive). The balanced training set was then ready to be used to train the CNN model.
A bar plot of the sentiment category distribution after oversampling confirmed the success of the balancing process, with all categories represented equally. Ensuring a balanced training set is crucial as it allows the model to learn from an equal number of instances from each class, avoiding bias towards the majority class and improving the model's ability to generalize to unseen data.
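The split and oversampling steps can be sketched as follows, assuming the cleaned reviews and their labels in {-1, 0, 1} are held in Python lists; the random seeds are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler

# reviews: list of cleaned posts; labels: sentiment labels in {-1, 0, 1}
X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.2, random_state=42)

# Oversample the training set only, so the held-out test set stays untouched.
ros = RandomOverSampler(random_state=42)
X_res, y_res = ros.fit_resample(
    np.array(X_train, dtype=object).reshape(-1, 1), y_train)
X_train_bal, y_train_bal = X_res.ravel().tolist(), list(y_res)
```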
### _Feature Extraction_
Feature extraction is a crucial step in machine learning and deep learning pipelines. It involves transforming raw data into a format that is compatible with the learning algorithms. In this study, we use a Convolutional Neural Network (CNN) model, which requires a specific format for its input.
The study employs the Tokenizer utility from the Keras library to tokenize the preprocessed text data, converting each word into an integer. The parameter 'num_words' is set to 5000, meaning the Tokenizer will only consider the top 5000 most frequent words in the corpus, ignoring rare words. This approach can help reduce the dimensionality of the input data and speed up computation without sacrificing too
much information, as rare words typically contribute less to the overall understanding of a text.
Fig. 2: Number of sentiment categories
Fig. 3: Number of sentiment categories after oversampling balance
The sequences of integers obtained after tokenization then need to be padded to ensure that they all have the same length, which is a requirement for feeding data into a CNN. This is done using the pad_sequences utility from the Keras library, with the maximum sequence length ('maxlen') set to 400.
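These two steps map directly onto the Keras utilities named above; fitting the tokenizer on the training texts only is an assumption, and each review is assumed to be a whitespace-joined string of tokens:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=5000)   # keep the 5000 most frequent words
tokenizer.fit_on_texts(X_train_bal)

X_train_seq = pad_sequences(tokenizer.texts_to_sequences(X_train_bal), maxlen=400)
X_test_seq = pad_sequences(tokenizer.texts_to_sequences(X_test), maxlen=400)
```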
### _Model Architecture_
In this study, we adopt a Convolutional Neural Network (CNN) model to perform sentiment classification. The choice of CNN is made based on a series of experimental iterations. Initially, simpler models were tried but resulted in relatively low evaluation scores, indicating inadequate learning capacity. More complex models, like transformers, were considered; however, they required extensive computational resources and took over 60 hours to train, which was infeasible given our computational constraints. Therefore, CNNs strike a balance by providing enough complexity to capture the intricate patterns in text data while remaining relatively lightweight and efficient to train.
The constructed CNN model consists of several layers:
#### IV-E1 Embedding Layer
This layer transforms the integer-encoded vocabulary into a dense vector representation. It takes 5000 (the size of the vocabulary) as the input dimension and outputs 50-dimensional vectors.
#### IV-E2 Dropout Layer
A dropout layer is used right after the embedding layer to prevent overfitting. It randomly sets 20% of the input units to 0 at each update during training time.
#### IV-E3 Convolutional Layer (Conv1D)
This layer applies 250 filters, each of size 3 (kernel size), to the input data. It uses the 'relu' activation function and a stride of 1.
#### IV-E4 GlobalMaxPooling1D Layer
This layer reduces the output of the convolutional layer by taking the maximum value over the time dimension for each feature map.
#### IV-E5 Dense (Fully Connected) Layer
This layer has 250 neurons and uses the 'relu' activation function. It connects each input to each output within its layer.
#### IV-E6 Dropout Layer
Another dropout layer is used to prevent overfitting, randomly setting 20% of the input units to 0 at each update during training time.
#### IV-E7 Output Layer
The final dense layer uses the 'softmax' activation function and has 3 neurons, corresponding to the three sentiment categories.
The model is compiled using the 'categorical_crossentropy' loss function and the 'adam' optimizer, and it is evaluated based on accuracy. The training is carried out for five epochs, with an early stopping strategy that monitors the validation loss with a patience of 2.
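A minimal Keras sketch of this architecture and training setup follows; the validation split and the shift of labels from {-1, 0, 1} to {0, 1, 2} before one-hot encoding are our assumptions:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Dropout, Conv1D,
                                     GlobalMaxPooling1D, Dense)
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.utils import to_categorical

model = Sequential([
    Embedding(input_dim=5000, output_dim=50, input_length=400),
    Dropout(0.2),
    Conv1D(filters=250, kernel_size=3, strides=1, activation="relu"),
    GlobalMaxPooling1D(),
    Dense(250, activation="relu"),
    Dropout(0.2),
    Dense(3, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"])

early_stop = EarlyStopping(monitor="val_loss", patience=2)
y_train_cat = to_categorical(np.array(y_train_bal) + 1, num_classes=3)
model.fit(X_train_seq, y_train_cat, epochs=5,
          validation_split=0.1, callbacks=[early_stop])
```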
After training, the model predicts sentiment labels for the test set. The model performance is evaluated by precision, recall, and F1-score, with results indicating reasonable performance across all three sentiment classes.
## V Results
Our sentiment analysis model, built on a Convolutional Neural Network (CNN), was trained and tested on the preprocessed and balanced dataset. We used accuracy as a training metric, with the model showing a steady increase in accuracy over the epochs. After the 3rd epoch, the validation loss started to increase, indicating the model may start overfitting the training data. Thanks to the early stopping callback, our training halted, preventing overfitting.
Upon evaluating the model on the test dataset, we achieved a macro-average F1-score of approximately 0.73. This metric provides a better measure of the incorrectly classified cases than the accuracy metric, as it takes both false positives and false negatives into account. The weighted average F1-score, precision, and recall are all approximately 0.73, showing balanced performance across the three sentiment classes (-1, 0, 1).
The precision, recall, and F1-score were also computed separately for each sentiment class on the test set.
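A sketch of this per-class evaluation, assuming scikit-learn's classification report was used to produce the metrics:

```python
from sklearn.metrics import classification_report

y_pred = model.predict(X_test_seq).argmax(axis=1) - 1   # back to {-1, 0, 1}
print(classification_report(y_test, y_pred,
                            target_names=["negative", "neutral", "positive"]))
```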
## VI Discussion
Our Convolutional Neural Network (CNN) model delivered stable and reliable performance for the sentiment analysis task across multiple experiments. The success of this model can be attributed to several factors:
Firstly, the CNN architecture is inherently suitable for text data. While traditionally used for image analysis, CNNs can also be used for text classification by treating text as a form of one-dimensional sequence data. The ability of CNNs to automatically learn and extract valuable features from the text helped significantly in the task of understanding and classifying sentiment.
Secondly, we leveraged word embeddings as a form of feature extraction, transforming text data into numeric vectors that machines can understand. This step effectively captures the semantic relationships between words, providing rich representation for our model to learn from.
Lastly, our preprocessing steps, such as tokenizing, padding, and transforming labels into categorical format, prepared our text data in a way that made it easier for our model to learn and make accurate predictions.
Our results are comparable to many recent studies in sentiment analysis, demonstrating the efficacy of CNNs for this task. However, as compared to complex architectures like transformers, our model offered a balance between computational efficiency and predictive performance, making it a viable option when computational resources or time are limited.
Although the model's performance was reasonably good, it was noticed that the performance for neutral sentiment was slightly lower compared to positive and negative sentiments. This could be due to the inherent complexity in classifying neutral sentiments which often lack clear sentiment indicators.
The outcomes of this research could have a significant impact on various real-world applications, particularly those involving the analysis of public opinion, such as in social media analysis, market research, and political studies. Improved sentiment analysis can help businesses understand their customer feedback better, provide personalized recommendations, or help policymakers understand public opinion on certain issues. Future work can include exploring different architectures, like Recurrent Neural Networks (RNN) or transformers, or utilizing more complex pre-trained models like BERT, to further improve the model's understanding of the nuances in sentiment.
## VII Conclusion
Our study aimed to address the challenging task of sentiment analysis on a dataset derived from Weibo. Through the application of a Convolutional Neural Network (CNN), we aimed to build a model that could effectively understand and categorize sentiments expressed in tweets.
Our methodology involved preprocessing the text data, transforming it into numeric vectors using word embeddings, and feeding this into our CNN model. This model was then trained and evaluated using various metrics such as accuracy, precision, recall, and F1-score. The results demonstrated that our CNN model provided a balanced performance across the three sentiment categories - positive, neutral, and negative.
This research underlines the viability of using CNNs for sentiment analysis tasks and has implications for various real-world applications. The capacity to understand and categorize public sentiment from social media can provide valuable insights for businesses, marketers, and policymakers.
Looking ahead, there are several exciting directions for future research. The model's performance could potentially be improved by experimenting with different architectures, such as Recurrent Neural Networks or transformers. Another promising direction could involve leveraging more complex pre-trained models like BERT, which could improve the model's understanding of context and nuances in language. Lastly, while our study focused on Weibo Chinese data, it would be interesting to test the model on different social media platforms, which may present unique linguistic features and challenges for sentiment analysis.
|
2305.03527 | ResQNets: A Residual Approach for Mitigating Barren Plateaus in Quantum
Neural Networks | The barren plateau problem in quantum neural networks (QNNs) is a significant
challenge that hinders the practical success of QNNs. In this paper, we
introduce residual quantum neural networks (ResQNets) as a solution to address
this problem. ResQNets are inspired by classical residual neural networks and
involve splitting the conventional QNN architecture into multiple quantum
nodes, each containing its own parameterized quantum circuit, and introducing
residual connections between these nodes. Our study demonstrates the efficacy
of ResQNets by comparing their performance with that of conventional QNNs and
plain quantum neural networks (PlainQNets) through multiple training
experiments and analyzing the cost function landscapes. Our results show that
the incorporation of residual connections results in improved training
performance. Therefore, we conclude that ResQNets offer a promising solution to
overcome the barren plateau problem in QNNs and provide a potential direction
for future research in the field of quantum machine learning. | Muhammad Kashif, Saif Al-kuwari | 2023-05-05T13:33:43Z | http://arxiv.org/abs/2305.03527v1 | # ResQNets: A Residual Approach for Mitigating Barren Plateaus in Quantum Neural Networks
###### Abstract
The barren plateau problem in quantum neural networks (QNNs) is a significant challenge that hinders the practical success of QNNs. In this paper, we introduce residual quantum neural networks (ResQNets) as a solution to address this problem. ResQNets are inspired by classical residual neural networks and involve splitting the conventional QNN architecture into multiple quantum nodes, each containing its own parameterized quantum circuit, and introducing residual connections between these nodes. Our study demonstrates the efficacy of ResQNets by comparing their performance with that of conventional QNNs and plain quantum neural networks (PlainQNets) through multiple training experiments and analyzing the cost function landscapes. Our results show that the incorporation of residual connections results in improved training performance. Therefore, we conclude that ResQNets offer a promising solution to overcome the barren plateau problem in QNNs and provide a potential direction for future research in the field of quantum machine learning.
## 1 Introduction
The Noisy Intermediate-Scale Quantum (NISQ) devices are a new generation of quantum computers capable of executing quantum algorithms. However, NISQ devices still suffer from significant errors and limitations in terms of the number of qubits and coherence time [1]. Despite these limitations, NISQ devices are an important stepping stone towards developing fault-tolerant quantum computers, as they provide a platform for exploring and evaluating basic quantum algorithms and applications [2]. Research in the NISQ era is focused on developing algorithms and techniques that are resilient to noise and errors, and can run effectively on NISQ devices [3]. This includes algorithms for quantum error correction [4], quantum optimization [5], and quantum machine learning (QML)[6].
QML is an interdisciplinary field that combines the concepts and techniques from quantum computing and machine learning (ML). It aims to leverage the unique properties of quantum systems, such as superposition, entanglement, and interference, to develop new algorithms and approaches for solving complex machine learning problems [7]. QML is increasingly becoming an exciting application in the NISQ era [2]. The anticipation here is that the quantum models (by exploiting the exponentially large Hilbert space) would achieve a computational advantage over their classical counterparts [8, 9], particularly for quantum datasets [10, 11, 12]. With continued advancements in quantum hardware [13], development of new quantum algorithms [14], quantum error correction and fault tolerance [15], the future of QML is bright, and it is likely to play a significant role in the field of machine learning. A wide range of ML algorithms are being explored in the quantum realm, including
quantum neural networks (QNNs) [16], quantum support vector machines [17, 18], quantum principal component analysis [19], and quantum reinforcement learning [20]. These approaches were shown to be effective in various domains, such as image classification [21], natural language processing [22], and recommendation systems [23].
QNNs is a promising area of research that aims to combine the power of quantum computing and neural networks to solve complex computational problems [24, 25]. Unlike classical neural networks, QNNs use quantum-inspired representations and operations for encoding and processing data [26, 27, 28, 29]. This allows for the exploration of exponential solution space and the exploitation of quantum parallelism, potentially leading to faster and more accurate results [7]. QNNs can be considered as a subclass of variational quantum algorithms, which aim to optimize parameters (\(\theta\)) of a parameterized quantum circuit (PQC) 1\(U(\theta)\) to minimize the cost function \(\mathcal{C}\). PQC utilizes tunable parameters to optimize quantum algorithms through classical computation. One example of a QNN architecture is the quantum Boltzmann machine [30, 31], which uses quantum circuits to model complex probability distributions and perform unsupervised learning tasks. In addition to unsupervised learning, QNNs have shown potential in various applications such as quantum feature detection [18], quantum data compression and denoising[32, 33], and quantum reinforcement learning [34]. QNNs can also be used for quantum-enhanced image recognition [6, 35] and quantum molecular simulations [36].
Footnote 1: we will use the terms “PQC” and “quantum layers” interchangeably
However, despite their potential, QNNs are still in the early stages of development and face several technical and practical challenges. In particular, training and optimizing the parameters in QNNs pose significant challenges. To address these challenges, the research community has been developing the quantum landscape theory [37] that explores the properties of cost function landscapes in QML systems. Consequently, interesting results have been obtained in the study of QNN's training landscapes, including the occurrence of barren plateaus (BP) [38], the presence of sub-optimal local minima [39], and the impact of noise on cost function landscapes [40, 41, 42, 43]. These findings provide important insights into the properties of QNNs and their training dynamics, and can inform the development of new algorithms and strategies for training and optimizing QNNs.
In particular, the BP problem refers to a phenomenon in which the circuit's expressiveness, as measured by its ability to approximate a target unitary operation, is severely limited as the number of qubits in the circuit increases [38], which is mainly due to vanishing gradients in the parameter space. The phenomenon of BP in QNNs is a significant challenge that impedes the advancement and widespread implementation of QNNs. To mitigate the BP, various strategies have been proposed, including the use of clever parameter initialization techniques [44], pre-training [45], examination of the dependence on the cost function [46, 47], implementation of layer-wise training of QNNs [48], and initialization with parameters drawn from the beta distribution [49]. These solutions aim to overcome the limitations posed by the BP in QNNs and facilitate the full realization of their potential. However, it is important to note that the solution that works best for one QNN architecture may not work for another, as the BP problem can be highly dependent on the specific problem being solved and the quantum architecture being used.
Contribution. In this paper, we propose a novel solution for mitigating the issue of barren plateaus (BP) in quantum neural networks (QNNs). Our approach is based on the concept of residual neural networks, which were previously introduced as a means to overcome the vanishing gradient problem in classical neural networks. For this, we present residual quantum neural networks (ResQNets) by incorporating residual connections between two quantum layers of varying depths. Our findings indicate that ResQNets greatly enhance the training of QNNs compared to plain QNNs (PlainQNets). To validate our proposed ResQNets, we perform comparisons of their cost function landscapes and training performance with that of PlainQNets. Our experimental results demonstrate that the residual connections in ResQNets effectively mitigate the adverse effects of BP and result in improved overall training performance.
Organization. The rest of the paper is organized as follows: Section 2 provides an overview of both classical and quantum residual neural networks and motivates their application. Section 3 discusses parameterized quantum circuits and elaborates on how multiple PQCs can be cascaded. This section also introduces the residual approach in cascaded PQCs. The methodology we adopt in this paper while conducting the various experiments is provided in Section 4. Section 5 presents the results we obtained on both the simulation environment and real quantum devices. Finally, the paper concludes in Section 6 with a few concluding remarks and pointers to possible extensions to this work.
## 2 Residual Neural Networks
Residual Neural Networks (ResNets) are a type of deep neural network architecture that aims to improve the training process by addressing the vanishing gradient problem. The basic idea behind ResNets is to introduce residual connections between layers in the network, allowing for easier optimization as the network gets deeper. The residual connections allow the network to learn residual mapping rather than trying to fit the target function directly. This helps prevent the vanishing gradient problem, where the gradients in the backpropagation process become very small, making it difficult to update the parameters effectively. ResNets were first introduced in [50], where the authors showed that ResNets outperformed traditional deep neural networks on benchmark image recognition tasks and demonstrated that ResNets could accommodate significantly deeper architectures than previous networks without sacrificing accuracy.
The residual connections in ResNets have been shown to be effective for training very deep neural networks, with hundreds or even thousands of layers. This has drastically improved the performance in several computer vision and natural language processing tasks. A typical structure of a residual block in ResNets is depicted in Figure 1(a).
Given an input feature map \(x\), the basic building block of a ResNet can be defined as:
\[H(x)=F(x,W_{i})+x\]
where \(H(x)\) is the output of the block, \(F\) is a non-linear function represented by a series of neuron and activation layers with parameters \(W_{i}\), and \(x\) is the input feature map that is added back to the output (the residual connection). The model is trained to learn the function \(F\) such that it approximates the residual mapping \(y-x\), where \(y\) is the desired output. By introducing residual connections, ResNets can address the vanishing gradient problem in deep neural networks, allowing for deeper architectures to be trained effectively.
In this paper, we introduce the quantum counterpart of ResNets, namely the residual quantum neural network (ResQNet), a QNN architecture combining the principles of classical ResNets with QNNs. The basic idea is to add a residual connection between the output of one layer of quantum operations and the input of the next layer. This helps to mitigate the vanishing gradient problem, a.k.a. BP, which is a major challenge in QNNs and arises as the number of qubits in the system increases. Figure 1(b) illustrates how ResQNets work compared to ResNets.
Figure 1: Residual block structure
In ResQNets, the residual connection is represented mathematically as:
\[\psi_{\rm out}(\theta)=\psi(\theta)+U(\theta)\psi(\theta)\]
where \(\psi(\theta)\) is the input to the quantum circuit, \(U(\theta)\) is the unitary operation defined by the PQC, and \(\psi_{\rm out}(\theta)\) is the output.
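The following NumPy sketch illustrates this residual combination on a single-qubit statevector. The renormalization of the combined state and the choice of an RY rotation for \(U(\theta)\) are our assumptions; the sketch is purely conceptual and does not address how the combination is realized on quantum hardware.

```python
import numpy as np

def residual_output(psi, U):
    """Conceptual residual combination |psi_out> = |psi> + U|psi>,
    renormalized so it remains a valid quantum state."""
    psi_out = psi + U @ psi
    return psi_out / np.linalg.norm(psi_out)

# Single-qubit example: U is an RY rotation by angle theta.
theta = 0.3
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
psi = np.array([1.0, 0.0])          # |0>
print(residual_output(psi, U))
```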
## 3 Parameterized Quantum Circuits
QNN is a type of Parameterized Quantum Circuit (PQC), which is a quantum circuit that has tunable parameters that can be optimized to perform specific tasks. In a QNN, the parameters are typically optimized using classical optimization algorithms to learn a target function or perform a specific task. The PQC architecture of a QNN allows for the representation and manipulation of quantum data in a manner that can be used for various applications, such as QML and quantum control. The mathematical derivation of PQC involves the representation of quantum states and gates as matrices and the composition of these matrices to form the overall unitary operator for the circuit.
A quantum state can be represented by a column vector in a Hilbert space, where the elements of the vector are complex numbers that satisfy the normalization constraint:
\[\left|\psi\right\rangle=\begin{bmatrix}\alpha\\ \beta\end{bmatrix},\quad\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\]
A quantum gate is represented by a unitary matrix, which preserves the norm of the vector, i.e., the inner product of the transformed vector with itself is equal to the inner product of the original vector with itself:
\[U^{\dagger}U=UU^{\dagger}=I\]
where \(U^{\dagger}\) is the conjugate transpose of \(U\) and \(I\) is the identity matrix. A PQC can be modeled as a sequence of gates, each represented by a unitary matrix based on classical parameters. The overall unitary operator of the circuit can be obtained by composing the matrices of the individual gates in the correct order:
\[U_{\rm circuit}=U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}(\theta_{1})\]
where \(U_{i}(\theta_{i})\) is the unitary matrix representing the \(i\)-th gate and \(\theta_{i}\) is a classical parameter.
The final quantum state after applying the PQC to an initial state can be obtained by matrix-vector multiplication:
\[\left|\psi_{\rm final}\right\rangle=U_{\rm circuit}\left|\psi_{\rm initial}\right\rangle\]
The parameters \(\theta_{1},\ldots,\theta_{n}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[\theta^{*}=\arg\max_{\theta}\left|\left\langle\psi_{\rm desired}\right|U_{ \rm circuit}(\theta)\left|\psi_{\rm initial}\right\rangle\right|^{2}\]
Solving this optimization problem provides the optimal set of parameters \(\theta^{*}\) that produce the desired outcome.
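As a concrete illustration, the following numpy sketch builds \(U_{\rm circuit}\) from parameterized rotations and evaluates the objective above. This is our own minimal single-qubit example; the gate sequence and the target state are arbitrary choices, not specified in the text.

```python
import numpy as np

def rx(theta):
    """Single-qubit RX rotation as a 2x2 unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    """Single-qubit RY rotation as a 2x2 unitary."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def circuit_unitary(thetas):
    """U_circuit = U_n(theta_n) ... U_2(theta_2) U_1(theta_1),
    here alternating RX and RY gates."""
    U = np.eye(2, dtype=complex)
    for i, t in enumerate(thetas):
        gate = rx(t) if i % 2 == 0 else ry(t)
        U = gate @ U  # later gates multiply on the left
    return U

def objective(thetas, psi_initial, psi_desired):
    """|<psi_desired| U_circuit(theta) |psi_initial>|^2, to be maximized."""
    amp = np.vdot(psi_desired, circuit_unitary(thetas) @ psi_initial)
    return np.abs(amp) ** 2

psi0 = np.array([1, 0], dtype=complex)               # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>
print(objective(np.array([0.3, 1.2]), psi0, plus))
```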
### Cascading PQCs
In the proposed ResQNets, we encapsulate PQC/QNNs into a quantum node (QN), and arrange multiple QNs in a series, such that the output from one QN serves as the input for the next. This structure enables us to introduce the residual learning approach in a manner that allows the PQCs to work together to achieve the desired outcome. The process of cascading PQCs involves feeding the
output of each PQC into the input of the next, creating a layered structure where each layer represents a single PQC. In this case, each PQC can build upon the outputs of the previous ones, leading to a more complex and sophisticated computation. To ensure that the overall computation remains stable, the residual learning approach is employed, where the output of each PQC is combined with the input of the next in a specified manner.
We now present the mathematical formulation for connecting multiple PQCs in sequence. We will refer to each PQC as \(U_{i}\) where \(i\) denotes the QN it is encapsulated in.
#### 3.1.1 2-Cascaded PQC
Consider two PQCs denoted as \(U_{1}(\theta_{1})\) and \(U_{2}(\theta_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) is applied to an initial quantum state \(\ket{\psi_{\text{initial}}}\) to obtain an intermediate quantum state \(\ket{\psi_{\text{intermediate}}}\):
\[\ket{\psi_{\text{intermediate}}}=U_{1}(\theta_{1})\ket{\psi_{\text{initial}}}\]
The second PQC \(U_{2}(\theta_{2})\) is then applied to the intermediate state \(\ket{\psi_{\text{intermediate}}}\) to obtain the final quantum state \(\ket{\psi_{\text{final}}}\):
\[\ket{\psi_{\text{final}}}=U_{2}(\theta_{2})\ket{\psi_{\text{intermediate}}}\]
The overall unitary operator of the two cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state after applying the two cascaded PQCs to an initial state can be obtained by matrix-vector multiplication:
\[\ket{\psi_{\text{final}}}=U_{\text{circuit}}\ket{\psi_{\text{initial}}}\]
The parameters \(\theta_{1}\) and \(\theta_{2}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[(\theta_{1}^{*},\theta_{2}^{*})=\arg\max_{\theta_{1},\theta_{2}}\left|\left\langle\psi_{\text{desired}}\right|U_{\text{circuit}}(\theta_{1},\theta_{2})\left|\psi_{\text{initial}}\right\rangle\right|^{2}=\arg\max_{\theta_{1},\theta_{2}}\left|\left\langle\psi_{\text{desired}}\right|U_{2}(\theta_{2})U_{1}(\theta_{1})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\]
Solving this optimization problem returns the optimal set of parameters \((\theta_{1}^{*},\theta_{2}^{*})\) that produces the desired outcome.
#### 3.1.2 \(n\)-Cascaded PQCs
Similarly, for \(n\) cascaded PQCs, where each PQC takes the output of the previous one as its input, the intermediate states can be described as follows:
\[\ket{\psi_{\text{intermediate},i}}=U_{i}(\theta_{i})\ket{\psi_{\text{intermediate },i-1}}\]
where \(i=1,2,\cdots,n\) and \(\ket{\psi_{\text{intermediate},0}}=\ket{\psi_{\text{initial}}}\). The overall unitary operator of the \(n\) cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state after applying the \(n\) cascaded PQCs to an initial state can be obtained by matrix-vector multiplication:
\[\left|\psi_{\text{final}}\right\rangle=U_{\text{circuit}}\left|\psi_{\text{ initial}}\right\rangle\]
The parameters \(\theta_{1},\theta_{2},\cdots,\theta_{n}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[(\theta_{1}^{*},\theta_{2}^{*},\cdots,\theta_{n}^{*}) =\arg\max_{\theta_{1},\theta_{2},\cdots,\theta_{n}}\left|\left\langle \psi_{\text{desired}}\right|U_{\text{circuit}}(\theta_{1},\theta_{2},\cdots, \theta_{n})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\] \[=\arg\max_{\theta_{1},\theta_{2},\cdots,\theta_{n}}\left|\left\langle \psi_{\text{desired}}\right|U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}( \theta_{1})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\]
Solving this optimization problem returns the optimal set of parameters \((\theta_{1}^{*},\theta_{2}^{*},\cdots,\theta_{n}^{*})\) that produces the desired outcome.
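The cascade translates directly into a loop over the unitaries. A minimal numpy sketch of ours, applying the recursion above:

```python
import numpy as np

def cascade(psi_initial, unitaries):
    """Plain cascade: |psi_final> = U_n ... U_2 U_1 |psi_initial>,
    where unitaries = [U_1, U_2, ..., U_n]."""
    psi = np.asarray(psi_initial, dtype=complex)
    for U in unitaries:  # U_1 is applied first, U_n last
        psi = U @ psi
    return psi
```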
### Residual PQCs
We now introduce residual blocks in the cascaded PQCs encapsulated in QNs which we call ResQNets. In ResQNets, the output of the previous PQC is added to its input and fed as an input to the next PQC. The residual block is inserted to facilitate efficient information flow and improved performance. The primary objective of incorporating residual blocks in QNNs here is to overcome the difficulties associated with BP and thereby improve the learning process. Furthermore, the proposed method aims to harness the strengths of both residual learning and quantum computing to tackle complex problems more effectively.
To mathematically formulate our proposed ResQNets, we start by considering the case of two PQCs, and extend the approach to the general case of cascading \(n\) PQCs with \(n\) residual blocks. We will refer to each PQC as \(U_{i}\) where \(i\) denotes the QN it is encapsulated in.
#### 3.2.1 1-Residual Block
ResQNet with a single residual block contains a maximum of two PQCs of arbitrary depth enclosed in two separate QNs. The first QN serves as a residual block whose input is added to its output before passing it as input to the PQC in the next QN. For the mathematical formulation of such a setting, consider two PQCs, denoted as \(U_{1}(\theta_{1})\) and \(U_{2}(\theta_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) is applied to an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\) to obtain an intermediate quantum state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{intermediate}}\right\rangle=U_{1}(\theta_{1})\left|\psi_{ \text{initial}}\right\rangle\]
In this case, the input of the second PQC \(U_{2}(\theta_{2})\) is not just the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\), but the sum of the initial state \(\left|\psi_{\text{initial}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{input}}\right\rangle=\left|\psi_{\text{initial}}\right\rangle +\left|\psi_{\text{intermediate}}\right\rangle\]
The second PQC \(U_{2}(\theta_{2})\) is then applied to the input state \(\left|\psi_{\text{input}}\right\rangle\) to obtain the final quantum state \(\left|\psi_{\text{final}}\right\rangle\):
\[\left|\psi_{\text{final}}\right\rangle=U_{2}(\theta_{2})\left|\psi_{\text{ input}}\right\rangle\]
Note that, unlike the plain cascade of Section 3.1, the residual addition means that the overall map is no longer the simple product \(U_{2}(\theta_{2})U_{1}(\theta_{1})\). Substituting the expressions above, the final quantum state is

\[\left|\psi_{\text{final}}\right\rangle=U_{2}(\theta_{2})\left(\left|\psi_{\text{initial}}\right\rangle+\left|\psi_{\text{intermediate}}\right\rangle\right)=U_{2}(\theta_{2})\left(I+U_{1}(\theta_{1})\right)\left|\psi_{\text{initial}}\right\rangle\]
The parameters \(\theta_{1}\) and \(\theta_{2}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome.
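A minimal numpy sketch of this single-residual-block forward pass follows. Two caveats are our own assumptions, not stated in the text: the random unitaries merely stand in for trained PQCs, and since the sum of two states is generally not normalized, we renormalize before applying the next PQC.

```python
import numpy as np

def random_unitary(dim, rng):
    """Haar-random unitary via QR decomposition (a stand-in for a PQC U(theta))."""
    z = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q @ np.diag(d / np.abs(d))

def residual_block_forward(psi_initial, U1, U2, renormalize=True):
    """|psi_final> = U2 (|psi_initial> + U1 |psi_initial>)."""
    psi_input = psi_initial + U1 @ psi_initial
    if renormalize:  # our assumption: keep a valid (unit-norm) state
        psi_input = psi_input / np.linalg.norm(psi_input)
    return U2 @ psi_input

rng = np.random.default_rng(0)
U1, U2 = random_unitary(4, rng), random_unitary(4, rng)
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0  # |00>
print(np.linalg.norm(residual_block_forward(psi0, U1, U2)))  # 1.0
```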
#### 3.2.2 2-Residual blocks
In ResQNets with two residual blocks, up to three PQCs can be incorporated within three QNs. There are three potential configurations for the residual blocks in this setup:
1. utilizing only the first QN as a residual block,
2. combining the first two QNs to form a single residual block,
3. utilizing both the first and second QNs individually as separate residual blocks.
For our mathematical formulation, only the third configuration will be considered, since it is the most general setting for the case of two residual blocks; the other configurations effectively contain a single residual block, which has already been derived in Section 3.2.1. However, we will conduct experiments examining all three configurations to determine which one performs best.
Let \(U_{1}(\theta_{1})\), \(U_{2}(\theta_{2})\), and \(U_{3}(\theta_{3})\) be PQCs enclosed in three QNs, where \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) takes an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\) as its input and produces an intermediate quantum state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{intermediate}}\right\rangle=U_{1}(\theta_{1})\left|\psi_{ \text{initial}}\right\rangle\]
The second PQC \(U_{2}(\theta_{2})\) takes the sum of the initial state \(\left|\psi_{\text{initial}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\) as its input and produces another intermediate quantum state \(\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\):
\[\left|\psi_{\text{input}}\right\rangle=\left|\psi_{\text{initial}}\right\rangle +\left|\psi_{\text{intermediate}}\right\rangle\]
\[\left|\psi_{\text{intermediate}}^{\prime}\right\rangle=U_{2}(\theta_{2}) \left|\psi_{\text{input}}\right\rangle\]
Finally, the third PQC \(U_{3}(\theta_{3})\) takes the sum of the input \(\left|\psi_{\text{input}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\) as its input and produces the final quantum state \(\left|\psi_{\text{final}}\right\rangle\):
\[\left|\psi_{\text{input}}^{\prime}\right\rangle=\left|\psi_{\text{input}} \right\rangle+\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\]
\[\left|\psi_{\text{final}}\right\rangle=U_{3}(\theta_{3})\left|\psi_{\text{ input}}^{\prime}\right\rangle\]
Composing the three stages, and again accounting for the residual additions, the overall map applied to the initial state is

\[\left|\psi_{\text{final}}\right\rangle=U_{3}(\theta_{3})\left(I+U_{2}(\theta_{2})\right)\left(I+U_{1}(\theta_{1})\right)\left|\psi_{\text{initial}}\right\rangle\]
#### 3.2.3 \(n\) Residual Blocks
In the case of \(n\) PQCs enclosed within \(n\) QNs, there are multiple potential configurations for the residual blocks. The mathematical formulation considered here assumes that each of the \(n\) QNs is used as a separate residual block. However, the formulation can be adapted to account for alternative configurations of residual blocks, as needed. For \(n\) PQCs, the ResQNet can be represented as:
\[\left|\psi_{\text{intermediate}}^{(1)}\right\rangle=U_{1}(\theta_{1}) \left|\psi_{\text{initial}}\right\rangle\] \[\left|\psi_{\text{input}}^{(1)}\right\rangle=\left|\psi_{\text{ initial}}\right\rangle+\left|\psi_{\text{intermediate}}^{(1)}\right\rangle\] \[\left|\psi_{\text{intermediate}}^{(2)}\right\rangle=U_{2}(\theta_ {2})\left|\psi_{\text{input}}^{(1)}\right\rangle\] \[\left|\psi_{\text{input}}^{(2)}\right\rangle=\left|\psi_{\text{ input}}^{(1)}\right\rangle+\left|\psi_{\text{intermediate}}^{(2)}\right\rangle\] \[\vdots\]
\[\left|\psi_{\text{intermediate}}^{(n-1)}\right\rangle=U_{n-1}(\theta_{n-1}) \left|\psi_{\text{input}}^{(n-2)}\right\rangle\] \[\left|\psi_{\text{input}}^{(n-1)}\right\rangle=\left|\psi_{\text {input}}^{(n-2)}\right\rangle+\left|\psi_{\text{intermediate}}^{(n-1)}\right\rangle\] \[\left|\psi_{\text{final}}\right\rangle=U_{n}(\theta_{n})\left| \psi_{\text{input}}^{(n-1)}\right\rangle\]
Composing all stages, the overall map of the \(n\)-block ResQNet applied to the initial state is:

\[\left|\psi_{\text{final}}\right\rangle=U_{n}(\theta_{n})\left(I+U_{n-1}(\theta_{n-1})\right)\cdots\left(I+U_{1}(\theta_{1})\right)\left|\psi_{\text{initial}}\right\rangle\]
The equation can be written in a summation form as follows:
\[\left|\psi_{\text{final}}\right\rangle=U_{n}(\theta_{n})\left(\left|\psi_{ \text{initial}}\right\rangle+\sum_{k=1}^{n-1}U_{k}(\theta_{k})\left|\psi_{ \text{input}}^{(k-1)}\right\rangle\right)\]
where \(\left|\psi_{\text{input}}^{(k-1)}\right\rangle=\left|\psi_{\text{input}}^{( k-2)}\right\rangle+\left|\psi_{\text{intermediate}}^{(k-1)}\right\rangle\) and \(\left|\psi_{\text{intermediate}}^{(k)}\right\rangle=U_{k}(\theta_{k})\left| \psi_{\text{input}}^{(k-1)}\right\rangle\).
Given a set of \(n\) PQCs, \(U_{1}(\theta_{1}),U_{2}(\theta_{2}),\ldots,U_{n}(\theta_{n})\) and an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\), the objective is to find the set of classical parameters \(\boldsymbol{\theta}=\theta_{1},\theta_{2},\ldots,\theta_{n}\) that maximizes (or minimizes) some cost function \(C(\boldsymbol{\theta})\) associated with the final quantum state \(\left|\psi_{\text{final}}\right\rangle\) produced by the cascaded PQCs. The optimization problem can be formulated as:
\[\boldsymbol{\theta}^{\star}=\arg\max_{\boldsymbol{\theta}}C(\boldsymbol{ \theta})\]
or
\[\boldsymbol{\theta}^{\star}=\arg\min_{\boldsymbol{\theta}}C(\boldsymbol{ \theta})\]
where \(\boldsymbol{\theta}^{\star}\) represents the optimal set of classical parameters that maximizes (or minimizes) the cost function. Note that the cost function \(C(\boldsymbol{\theta})\) can be defined based on the desired behavior of the quantum circuit and can be calculated from the final quantum state \(\left|\psi_{\text{final}}\right\rangle\).
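Extending the single-block sketch above, the \(n\)-block recursion is again a short loop. A minimal numpy sketch of ours, with the same renormalization assumption as before:

```python
import numpy as np

def resqnet_forward(psi_initial, unitaries, renormalize=True):
    """ResQNet forward pass with every QN as a residual block:
    psi^(k) = psi^(k-1) + U_k psi^(k-1) for k = 1..n-1, then
    psi_final = U_n psi^(n-1)."""
    psi = np.asarray(psi_initial, dtype=complex)
    for U in unitaries[:-1]:
        psi = psi + U @ psi  # residual connection around QN k
        if renormalize:  # our assumption: keep a unit-norm state
            psi = psi / np.linalg.norm(psi)
    return unitaries[-1] @ psi
```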
## 4 Methodology
In classical NNs, residual neural networks (ResNets) were proposed to overcome the problem of vanishing gradients and proved very useful for enabling deep learning in classical machine learning. In this paper, we propose Residual Quantum Neural Networks (ResQNets) to enable deep learning in QNNs by mitigating the effect of BP as the number of layers grows.
The conventional approach to constructing QNNs contains an arbitrarily deep PQC, which takes some input and yields some output. Such an architecture typically has a single QN, as depicted in Figure 2(a). In this paper, we refer to this traditional QNN architecture as "Simple PlainQNet".
To construct our proposed ResQNets, we further split the traditional QNN architecture into two QNs, where every QN contains arbitrarily deep quantum layers. Since our proposed ResQNets contain at least two QNs and the traditional way of constructing QNNs contains a single QN, we construct a slightly modified version of simple PlainQNet, which we call "PlainQNet"; it includes two or more QNs, with each QN containing PQCs of arbitrary depth, as shown in Figure 2(b). In PlainQNets, the output of the previous QN is fed to the next QN. The purpose of constructing PlainQNet is to have a fair comparison with our proposed ResQNets, because ResQNets need two or more QNs to work. An example of the ResQNet architecture with two QNs is shown in Figure 2(c). The PlainQNet architecture is simply a general QNN split into two QNs, whereas in ResQNet, the first QN serves as the residual block, i.e., the input of the first QN is added to its output and then passed as input to the second QN.
It should be noted that ResQNets can comprise multiple QNs with various arrangements of residual blocks. For instance, the ResQNet from Figure 2(c) can be extended to have three QNs, in which case three potential configurations can be employed: the first and second QNs acting as individual residual blocks, the first and second QNs combined into a single residual block, or only the first QN functioning as the residual block. We consider all three configurations in our experiments with three QNs.
Figure 2: QNN architecture used in this paper (a) Simple PlainQNet (b) PlainQNet and (c) ResQNet
### Quantum Layers Design
For the design of quantum layers, we use a periodic structure containing two single-qubit unitaries (\(RX\) and \(RY\)) per qubit. These unitaries are randomly initialized in the range \([0,\pi]\). Furthermore, a two-qubit gate, i.e., the \(CNOT\) gate, is used to entangle qubits, with every qubit entangled with its neighboring qubit. Figure 3 shows an example of the quantum layer design we use (5 qubits). All the QNs in our experiments have the same quantum layer design.
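The text does not tie the implementation to a particular framework; the following PennyLane sketch is our rendering of one such layer, with the framework choice and the 5-qubit width being our assumptions.

```python
import numpy as np
import pennylane as qml

n_qubits = 5

def quantum_layer(params):
    """One layer: RX and RY on every qubit, then nearest-neighbour CNOTs.
    params has shape (n_qubits, 2)."""
    for i in range(n_qubits):
        qml.RX(params[i, 0], wires=i)
        qml.RY(params[i, 1], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])

# Random initialization in [0, pi], as described above.
init_params = np.random.uniform(0, np.pi, size=(n_qubits, 2))
```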
### Depth of Quantum Layers
The depth of the quantum layers plays a significant role in the emergence of BP in the cost function landscape of a QNN. The effective depth (the longest path within the quantum circuit up to the measurement) is crucial in this regard. For convenience, we introduce two depth parameters: layer depth (\(D_{L}\)) and effective depth (\(D_{E}\)). The layer depth \(D_{L}\) refers to the combined number of repetitions of the quantum layer illustrated in Figure 3 across both QNs, while the effective depth \(D_{E}\) represents the overall depth. For our quantum layer design, the following equation can be used to calculate the effective depth.
\[\textit{Total Effective Depth}=D_{E}=4\times D_{L}+k \tag{1}\]
where \(k=2,3,4,5\ldots\) for \(5,6,7,8\ldots\) qubits, respectively. Since the quantum layers are split into two separate QNs, and the depth per QN can be crucial to achieving better performance, it is important to calculate \(D_{E}\) of each QN individually and then add them to obtain the final \(D_{E}\). The sum of the per-QN effective depths generally differs from the effective depth computed from the total layer depth, i.e., \(D_{E}/QN1+D_{E}/QN2\neq 4\times D_{L}+k\). For example, with \(D_{L}=2\), the total effective depth would be 10 without considering the splitting into two QNs. However, if \(D_{L}\) is split into two QNs with \(D_{L}/QN=1\), the effective depth would be 12. A modified version of Equation 1 should be used to calculate the \(D_{E}\) per QN, as described below.
\[\textit{Effective Depth per QN}=D_{E}/QN=4\times D_{L}/QN+k \tag{2}\]
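A small helper of ours implementing Equations 1 and 2, assuming the stated pattern \(k=n_{\text{qubits}}-3\), reproduces the worked example above.

```python
def effective_depth(d_l: int, n_qubits: int) -> int:
    """Effective depth of one QN (Equation 2): D_E = 4 * D_L + k,
    with k = 2, 3, 4, 5, ... for 5, 6, 7, 8, ... qubits."""
    k = n_qubits - 3
    return 4 * d_l + k

# Single QN, D_L = 2, 5 qubits: D_E = 10.
print(effective_depth(2, 5))
# Split as D_L/QN = 1 across two QNs: 6 + 6 = 12 != 10.
print(effective_depth(1, 5) + effective_depth(1, 5))
```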
### Depth Distribution per QN
As previously discussed, ResQNets and PlainQNets consist of multiple QNs, which results in different depth splits for a given depth of quantum layers. According to the definition of BP, the gradient vanishes as a function of the number of qubits; hence, we fix the depth of quantum layers to \(D_{L}=6\), and only vary the number of qubits. Table 1 summarizes the different depth per QN combinations
Figure 3: Quantum Layers Design
for \(D_{L}=6\), and all these depth combinations are tested for different numbers of qubits. Column 3 of Table 1 represents the depth split in the form of ordered pairs (we refer to this form in the rest of the paper whenever we discuss depth split per QN). For instance, \((1,5)\) denotes \(D_{L}=1\) in the first QN and \(D_{L}=5\) in the second QN. The depth per QN combination can be extended to more than two QNs in a similar manner.
### Cost Function Definition
For training our proposed ResQNet, we consider the simple example of learning the identity gate. In this scenario, a natural cost function is 1 minus the probability of measuring the all-zero state, as described by the following equation.
\[C=\left\langle\psi(\theta)\right|(I-\left|0\right\rangle\left\langle 0\right|) \left|\psi(\theta)\right\rangle=1-p_{\left|0\right\rangle}\]
We consider the global cost function setting, i.e., we measure all the qubits in the network. Therefore, the above cost function definition will be applied across all the qubits according to the following equation.
\[C=\left\langle\psi(\theta)\right|(I-\left|00\ldots 0\right\rangle\left\langle 0 0\ldots 0\right|)\left|\psi(\theta)\right\rangle=1-p_{\left|00\ldots 0\right\rangle} \tag{3}\]
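The following PennyLane sketch of ours evaluates this global cost; a single layer is shown for brevity, whereas in practice the layer is repeated \(D_{L}\) times.

```python
import numpy as np
import pennylane as qml

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def all_probs(params):
    # One quantum layer, as in Figure 3.
    for i in range(n_qubits):
        qml.RX(params[i, 0], wires=i)
        qml.RY(params[i, 1], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.probs(wires=range(n_qubits))

def cost(params):
    """Global cost (Equation 3): 1 - p(|00...0>)."""
    return 1.0 - all_probs(params)[0]

print(cost(np.random.uniform(0, np.pi, size=(n_qubits, 2))))
```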
For cost function optimization, we use the Adam optimizer (with a stepsize of 0.1), a gradient-based optimization method. Adam updates the parameters of a model iteratively based on the gradient of the loss function with respect to the parameters, using an exponentially decaying average of the first and second moments of the gradients to adapt the learning rate for each parameter. Let \(g_{t}\) be the gradient of the loss function with respect to the parameters at iteration \(t\). The first moment, \(m_{t}\), and the second moment, \(v_{t}\), are computed as follows:
\[m_{t} =\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\] \[v_{t} =\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2}\]
where \(\beta_{1}\) and \(\beta_{2}\) are the decay rates for the first and second moments, respectively. The bias-corrected first moment and second moment are then computed as:
\[\hat{m}_{t} =\frac{m_{t}}{1-\beta_{1}^{t}}\] \[\hat{v}_{t} =\frac{v_{t}}{1-\beta_{2}^{t}}\]
Finally, the parameters are updated using the following equation:
\[\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}\]
where \(\alpha\) is the learning rate and \(\epsilon\) is a small constant to prevent division by zero.
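Putting the update equations together, a single Adam step in numpy (our sketch; the stepsize \(\alpha=0.1\) follows the text, while \(\beta_{1}\), \(\beta_{2}\), and \(\epsilon\) are the usual defaults):

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient g at iteration t."""
    m = beta1 * m + (1 - beta1) * g       # first-moment estimate
    v = beta2 * v + (1 - beta2) * g**2    # second-moment estimate
    m_hat = m / (1 - beta1**t)            # bias correction
    v_hat = v / (1 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```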
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(D_{L}\) in QN-1 & \(D_{L}\) in QN-2 & in-text representation \\ \hline
1 & 5 & (1,5) \\ \hline
5 & 1 & (5,1) \\ \hline
2 & 4 & (2,4) \\ \hline
4 & 2 & (4,2) \\ \hline
3 & 3 & (3,3) \\ \hline \end{tabular}
\end{table}
Table 1: Depth combinations per QN
## 5 Results and Discussion
In order to investigate the issue of BP in both PlainQNets and ResQNets, we maintain a constant depth of quantum layers, \(D_{L}=6\), which comprises \(100\) quantum gates and \(60\) parameters. The quantum layer depth distribution is varied among different combinations, as discussed in table 1. The \(D_{E}\) per QN can then be calculated using Equation 2. The performance of both networks is evaluated by comparing their cost function landscapes and training results for the problem specified in Equation 3.
### PlainQNet and Simple PlainQNet
In this paper, we use a minimum of two QNs in constructing ResQNets, while the traditional approach in developing QNNs utilizes a single QN (referred to as "simple PlainQNets" in this paper). To ensure a fair comparison between the performance of QNNs with no residual and ResQNets, we also modify simple PlainQNets with two QNs (referred to as "PlainQNets" in this paper). A preliminary comparison between the performance of these two types of PlainQNets is conducted to verify that the use of two QNs in PlainQNets leads to better or equivalent performance compared to the use of a single QN in simple PlainQNets.
The simple PlainQNets and PlainQNets are compared for \(6\)-qubit and \(7\)-qubit quantum layers with a constant depth of \(D_{L}=6\). In the case of PlainQNets, the depth distribution per QN can vary, but we use the depth combinations of \((5,1)\) and \((4,2)\), where the first entry represents the depth of the first QN and the second entry represents the depth of the second QN, as shown in Table 1. We choose deeper quantum layers on the first QN and relatively shallow depth on the second QN primarily because such a configuration of depths per QN leads to a better performance, which will be discussed in more detail in the subsequent sections. For \(6\)-qubit quantum layers, the effective depth (\(D_{E}\)) for PlainQNets for both depth combinations mentioned above is \(30\) (as defined in Equation 2). The closest possible \(D_{E}\) for simple PlainQNets using the quantum layers considered in this paper (shown in Figure 3) is \(31\) with an overall \(D_{L}\) of \(7\) (as defined in Equation 1), which was used in the comparison. Similarly, for \(7\)-qubit quantum layers, the \(D_{E}\) for PlainQNets is \(32\) for both depth combinations per QN. The closest \(D_{E}\) in the case of simple PlainQNets is obtained for \(D_{L}=7\).
Both PlainQNets and simple PlainQNets are then trained for the problem defined in Equation 3.
Figure 4: Cost vs. iterations of PlainQNets and simple PlainQNets (a) for \(6\) qubits (b) for \(7\) qubits. The parentheses denote the \(D_{L}\) per QN.
The training results are displayed in Figure 4. It can be observed that for 6-qubit layers, both PlainQNets and simple PlainQNets exhibit comparable performance. However, when the number of qubits increases to 7, the performance of simple PlainQNets degrades significantly due to BP, while that of PlainQNets improves. Based on these observations, we infer that it is appropriate to compare the performance of PlainQNets with that of our proposed ResQNets. Hence, for the remainder of the paper, we compare the performance of PlainQNets, i.e., QNNs containing two (or more) QNs, with that of ResQNets.
### ResQNet with shallow width quantum layers
In this section, we perform a comparative analysis of the incidence of BP in both PlainQNets and ResQNets. Both PlainQNets and ResQNets consist of two QNs, with a maximum of one residual block in the case of ResQNets. To facilitate a fair comparison, we consider shallow depth quantum layers with \(D_{L}=6\) and incrementally vary the number of qubits from 6 to 10.
#### 5.2.1 6-Qubit Circuit
In this setting, we experiment with a total of 6 qubits. The cost function landscapes for both PlainQNet and ResQNet were analyzed and compared, as shown in Figure 5. The results demonstrate that a significant portion of the cost function landscapes of the PlainQNet for almost all the depth combinations are flat and have a narrow region containing the global minimum. On the other hand, the cost function landscapes of ResQNets are less flat and have a wider region containing the global minimum, which makes ResQNet more suitable for optimization.
The training of PlainQNets and ResQNets was performed for the problem defined in Equation 3. The results of the training are depicted in Figure 6. When the depth of the second QN is equal to or greater than the depth of the first QN, it was observed that the PlainQNets do not undergo successful training. This can be attributed to the flat cost function landscape, i.e., the BP, as depicted in Figure 5.
Figure 5: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 6 Qubits. The parentheses denotes the \(D_{L}\) per QN.
For the same depth distributions per QN (depth in second QN \(\geq\) depth in first QN), the ResQNets were observed to train effectively. However, they struggled to reach an optimal solution due to the presence of multiple local minima in their cost function landscape. In instances where the depth of the first QN is greater than that of the second QN, both PlainQNets and ResQNets trained successfully, but ResQNets outperformed PlainQNets.
#### 5.2.2 8-Qubit Circuit
We now conduct experiments on both PlainQNets and ResQNets with 8-qubit layers, and examine the cost function landscapes of both PlainQNets and our proposed ResQNets. The overall layer depth is set to 6, and all depth combinations are analyzed. The results presented in Figure 7, reveal that approximately 90% of the cost function landscape for PlainQNets remains flat irrespective of the depth distribution per QN, making them unsuitable for optimization. In contrast, the cost function landscapes of ResQNets are still not flat for all the depth combinations, and thus are more favorable for optimization.
We conduct training experiments for both PlainQNets and ResQNets with 8 qubit quantum layers to solve the problem defined in Equation 3. The training results are presented in Figure 8, which show that as we increase the number of qubits from 6 to 8, the PlainQNets get trapped in the flat cost function landscape (i.e., BP), for all the depth combinations per QN and fail to train effectively for the specified problem.
On the other hand, the ResQNets demonstrate successful training across all the depth combinations, surpassing the performance of PlainQNets. Notice that ResQNets exhibit superior learning outcomes when the depth of the first QN is much greater than that of the second QN (\(D_{E}\) in QN-1 \(\gg\)\(D_{E}\) in QN-2), such as in the case of \((5,1)\). This is because in such scenarios the cost function landscape has fewer and wider regions leading to the global minimum. Conversely, when the depth of the second QN is equal to or greater than that of the first QN, the cost function landscape is characterized by multiple local minima, making it less suitable for optimization as the optimizer becomes trapped in local minima. This phenomenon can be attributed to the presence of residual blocks in ResQNets. In the case of two QNs, a residual connection is introduced only after the first block. This helps in mitigating the issue of BP. However, if the second QN is deep enough, it can still result in BP. In such scenarios, the cost function landscape still contains multiple local minima and fewer paths to reach the global minimum, which makes the optimization process more prone to becoming stuck in a
Figure 6: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 6 qubits. The parentheses denote the \(D_{L}\) per QN.
local minimum. Despite this, ResQNets still demonstrate superior training performance compared to PlainQNets.
#### 5.2.3 10-Qubit Circuit
To further expand our study, we increased the number of qubits to 10 and performed the same experiments as with quantum layers of 6 and 8 qubits. The cost function landscapes were then analyzed
Figure 8: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 8 qubits. The parentheses denote the \(D_{L}\) per QN.
Figure 7: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 8 Qubits. The parentheses denote the \(D_{L}\) per QN.
for both PlainQNets and ResQNets, as shown in Figure 9. Similar to the case of 8 qubit layers, a substantial portion of the cost function landscape of PlainQNets was found to be flat, indicating the presence of BP and making it unsuitable for optimization. Conversely, the cost function landscape of ResQNets remained more favorable for optimization as it was characterized by multiple paths leading to the global minimum, thus avoiding the occurrence of BP.
We subsequently trained the 10-qubit quantum layers to address the problem defined in Equation 3. The results of these experiments are depicted in Figure 10. Our analysis indicates that PlainQNets did not exhibit successful training outcomes for nearly all depth combinations, with the exception of \((4,2)\), which showed considerable performance improvement. Examining its cost function landscape in Figure 9, we observe that there are one or two narrow regions that contain the solution and may be found by the optimizer. However, these narrow regions are unlikely to be encountered, and thus the performance, despite being optimal here, is not suitable for general optimization problems. Therefore, it can still be concluded that the PlainQNets are severely affected by the problem of BP. On the other hand, ResQNets effectively overcame the issue of BP and exhibited successful training outcomes for all depth combinations. Our observations for 10-qubit quantum layers align with our previous findings for 6- and 8-qubit layers in that ResQNets are more effective when the depth after the residual connection is smaller. This suggests that a shallower depth of quantum layers after the residual connection in ResQNets is more favorable for optimization and for mitigating BP.
Figure 9: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 10 Qubits. The parentheses denotes the \(D_{L}\) per QN.
Our results conclusively demonstrate that PlainQNets are heavily impacted by the issue of BP as the number of qubits increases, which significantly hinders their performance and ability to optimize the cost function. The previous results have demonstrated the advantage of our proposed ResQNets over PlainQNets in mitigating the phenomenon of BP. Therefore, in the next section, we will conduct experiments solely with ResQNets.
### ResQNets with wider quantum layers
To analyze the scalability of ResQNets to larger quantum circuits, we consider quantum layers with larger numbers of qubits, i.e., 15 and 20. The depth of the quantum layers, \(D_{L}\), is kept constant at 6. The cost function landscapes have a direct impact on the training results, as shown in Section 5.2; consequently, we only present the training results for the 15- and 20-qubit quantum layers.
#### 5.3.1 15-Qubit Circuit
We train the 15-qubit quantum layers to optimize the problem defined in Equation 3. The training results are shown in Figure 11(a). It can be observed that the ResQNets train effectively. Additionally, analogous to the case of shallow-width quantum layers, the performance is substantially better when the depth in the first QN (before the residual point) is greater than that in the second QN.
#### 5.3.2 20-Qubit Circuit
We now train the ResQNets with 20-qubit layers for the problem defined in Equation 3, with a total layer depth of \(D_{L}=6\). It can be observed that even with 20-qubit layers, the ResQNets train effectively, as shown in Figure 11(b). Furthermore, similar to the previously shown results, the ResQNets with 20-qubit layers also perform significantly better when the depth after the residual point (second QN) is less than the depth before the residual point (first QN).
Figure 10: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 10 qubits. The parentheses denote the \(D_{L}\) per QN.
From the results in Figure 11, it is evident that the ResQNets are capable of working with wider quantum layers. The results demonstrate that analogous to the case of shallow-width quantum layers, the training performance is better with the optimal results being achieved for a larger depth in the first QN and a smaller depth in the second QN.
It should be noted that our experiments are limited by the memory constraints of our local computer and we cannot go beyond 20 qubits. However, based on our findings, we believe that the proposed ResQNets would still train effectively even beyond 20 qubits.
### ResQNets with 3-QN
From the analysis presented in previous sections, it can be observed that the ResQNets consisting of two QNs with a maximum of one residual block can effectively address the problem of BP and significantly improve the training performance of QNNs. In this section, we show that increasing the number of QNs in ResQNets can enhance the performance of ResQNets even further. As discussed in Section 4, for three QNs we can have multiple configurations of residual blocks. We consider all of these configurations for our experiments with 20-qubit quantum layers and a fixed quantum layer depth of \(D_{L}=6\).
The results of the experiments conducted in this section will provide valuable insights into the optimal configuration of residual blocks for ResQNets with three or more QNs.
The cost function landscapes of various residual block configurations in ResQNets with three QNs were analyzed, as presented in Figure 12. The results indicate that the placement of residual blocks has a significant impact on the performance of ResQNets. When the residual block is added after every QN, the cost function landscape quickly flattens irrespective of the depth per QN, suggesting that this configuration performs no better than PlainQNets and is not suitable for optimization.
On the other hand, when the residual block is added after two QNs, the cost function landscape shows multiple and wider regions containing the global minimum, which makes this configuration more suitable for optimization. Moreover, this configuration exhibits a consistent cost function landscape regardless of the depth per QN combination, implying that this particular residual block arrangement is more robust to BP and supports a wide range of depths and QN combinations.
For the case of adding the residual only after the first QN, with two QNs after the residual block, the results show that the cost function landscape is better than the case of adding the residual block
Figure 11: Cost vs. iterations of ResQNets for (a) 15 qubits and (b) 20 qubits. The parentheses denote the \(D_{L}\) per QN.
Figure 12: Cost function landscapes of ResQNets for 20 Qubits and 3-QNs. Residual after every QN (Top panel), Residual after two QNs (middle panel) and residual only after the first QN (bottom panel). The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
after every QN, but not as good as the case where there is a gap of two QNs while adding the residual.
We then trained ResQNets with three QNs for all the configurations while varying the depth for each QN combination on the problem defined in Equation 3. The training results are shown in Figure 13. These results align with the behavior of the cost function landscape, where the residual block configuration skipping two QNs outperforms other configurations. It can be observed that the residual block configuration after every QN does not train at all, while the residual block configuration after the first QN does converge for all the depth per QN combinations, but with significantly slower convergence compared to the residual block configuration after two QNs.
### 3-QN vs. 2-QN ResQNet
In this section, we compare the performance of ResQNets with 2 and 3-QNs to demonstrate the impact of increasing the number of QNs. The analysis was conducted for 20 qubit layers considering the best-performing depth combinations for both 2 and 3-QNs.
For 2-QNs, the results from Figure 11(b) indicate that the depth combinations of \((5,1)\) and \((4,2)\) performed better than the others. For three QNs, the results from Figures 13(b) and 13(c) show that the depth combinations of \((4\:1,1)\) and \((4,\:1\:1)\) outperformed the others. A closer examination of the best-performing depth combinations reveals that the \(D_{L}\) before and after the residual block for the combination \((5,1)\) in the 2-QN ResQNet is equivalent to that for \((4\:1,1)\) in the 3-QN ResQNet. Similarly, the combination \((4,2)\) in the 2-QN ResQNet is equivalent to \((4,\:1\:1)\) in the 3-QN ResQNet. Despite these similarities, as demonstrated in Figure 14, the ResQNets with 3-QNs exhibit superior performance, converging to the optimal solution more efficiently than the ResQNets with 2-QNs.
Figure 13: Training results of ResQNets with three QNs with 20 qubit layers. (a) Residual after every QN (b) Residual after two QNs and (c) Residual after the first QN. The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
### Real Quantum Device
The results presented so far were obtained by running ResQNets and PlainQNets on a simulation platform. In this section, we carry out some experiments on real quantum devices. In particular, we trained both ResQNets and PlainQNets with 2-QNs on a 5-qubit quantum layer with 20 epochs using an IBM's quantum device, namely \(ibmq\_lima\). The quantum layers depth was fixed to \(D_{L}=6\) with \(D_{L}=5\) in the first QN, and \(D_{L}=1\) in the second QN. This depth combination was chosen considering all the results discussed previously. We note that due to the limited number of publicly available quantum devices, the queue times for executing the jobs are considerably long. Therefore, to minimize the training time, we chose to reduce the number of epochs for real-device training. We trained both PlainQNets and ResQNets for only 20 epochs on real devices instead of 100 epochs as in the case of simulation. The training results are illustrated in Figure 15.
Figure 14: Training comparison of 2-QN and 3-QN ResQNets for 20 qubit layers. The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
Figure 15: Training comparison ResQNets and PlainQNets on (a) real quantum device and (b) simulator. The values in parentheses denote the depth per QN.
The results presented in Figure 15(a) reveal that ResQNets train successfully on the real device, whereas PlainQNets do not. The same trend is observed when both networks are executed on the simulator, as depicted in Figure 15(b). However, when both PlainQNets and ResQNets are trained on a real device, a slight fluctuation is observed while approaching the optimal solution due to hardware noise, as compared to the simulation results. Despite the presence of noise, the rate of decrease in the loss value for ResQNets is almost identical in the simulation and real experiments. According to [40], hardware noise can potentially cause BP. However, our results demonstrate that our proposed ResQNets are somewhat resilient against hardware noise, as they achieve performance similar to that of the simulator (though with some fluctuations).
## 6 Conclusion
The problem of barren plateaus (BP) in quantum neural networks (QNNs) is a critical hurdle on the road to the practical realization of QNNs. There have been several attempts to resolve this issue, but the impact of BP can still vary greatly depending on the application and the architecture of quantum layers. Thus, it is essential to have multiple solutions for BP to cover a wide range of problems.
In this paper, we propose residual quantum neural networks (ResQNets) to address the issue of BP in QNNs. Our approach is inspired by classical residual neural networks (ResNets), which were introduced to overcome the vanishing gradients problem in classical neural networks.
In traditional QNNs, a single parameterized quantum circuit (PQC) with arbitrary depth is included within a single quantum node (QN). To create ResQNets, we split the conventional QNN architecture into multiple QNs, each containing its own PQC with varying depths. Splitting the QNNs allows us to introduce the residual connections between the QNs, forming our proposed ResQNets. In simple QNNs without residual connections (referred to as PlainQNets), the output from the previous QN serves as the input to the next. On the other hand, in ResQNets, one or multiple QNs can serve as residual blocks, with the output from a previous residual block being added to its input before it is passed on to the next QN.
In our study, we first demonstrate the efficacy of the proposed splitting of the conventional QNN architecture into multiple QNs (PlainQNets) by comparing their performance to that of conventional QNNs (simple PlainQNets). The comparison results indicate that the PlainQNets have better or equivalent performance to that of conventional QNNs. Subsequently, we compare the performance of PlainQNets with that of our proposed ResQNets through several training experiments. Our analysis of the cost function landscapes for quantum layers with increasing numbers of qubits shows that incorporating residual connections results in improved training performance.
Based on our findings, we conclude that the proposed ResQNets provide a promising solution for overcoming the problem of BP in QNNs and offer a potential direction for further research in the field of quantum machine learning. |
2306.16308 | Gaussian random field approximation via Stein's method with applications
to wide random neural networks | We derive upper bounds on the Wasserstein distance ($W_1$), with respect to
$\sup$-norm, between any continuous $\mathbb{R}^d$ valued random field indexed
by the $n$-sphere and the Gaussian, based on Stein's method. We develop a novel
Gaussian smoothing technique that allows us to transfer a bound in a smoother
metric to the $W_1$ distance. The smoothing is based on covariance functions
constructed using powers of Laplacian operators, designed so that the
associated Gaussian process has a tractable Cameron-Martin or Reproducing
Kernel Hilbert Space. This feature enables us to move beyond one dimensional
interval-based index sets that were previously considered in the literature.
Specializing our general result, we obtain the first bounds on the Gaussian
random field approximation of wide random neural networks of any depth and
Lipschitz activation functions at the random field level. Our bounds are
explicitly expressed in terms of the widths of the network and moments of the
random weights. We also obtain tighter bounds when the activation function has
three bounded derivatives. | Krishnakumar Balasubramanian, Larry Goldstein, Nathan Ross, Adil Salim | 2023-06-28T15:35:10Z | http://arxiv.org/abs/2306.16308v2 | Gaussian Random Field Approximation via Stein's Method With Applications to Wide Random Neural Networks
###### Abstract
We derive upper bounds on the Wasserstein distance (\(W_{1}\)), with respect to sup-norm, between any continuous \(\mathbb{R}^{d}\) valued random field indexed by the \(n\)-sphere and the Gaussian, based on Stein's method. We develop a novel Gaussian smoothing technique that allows us to transfer a bound in a smoother metric to the \(W_{1}\) distance. The smoothing is based on covariance functions constructed using powers of Laplacian operators, designed so that the associated Gaussian process has a tractable Cameron-Martin or Reproducing Kernel Hilbert Space. This feature enables us to move beyond one dimensional interval-based index sets that were previously considered in the literature. Specializing our general result, we obtain the first bounds on the Gaussian random field approximation of wide random neural networks of any depth and Lipschitz activation functions at the random field level. Our bounds are explicitly expressed in terms of the widths of the network and moments of the random weights. We also obtain tighter bounds when the activation function has three bounded derivatives.
###### Contents
* 1 Introduction
* 1.1 Bounds for random field approximations
* 1.2 Application to wide random neural networks
* 1.2.1. Comparison to related works.
* 1.2.2. Future directions.
* 2 Gaussian smoothing for random fields indexed by the sphere
* 2.1 Constructing a covariance and Cameron-Martin space from the Laplacian
* 2.2 Regularization to the Cameron-Martin space
* 2.3 Smoothing using \(S\) and regularization
* 3 Proof of the Wasserstein bound
* 4 Properties of Solution to the Stein Equation
* 5 Chaining arguments for modulus of continuity
* 6 Proofs for wide random neural network approximations
* 6.1 \(W_{1}\) bounds for wide random neural networks: Proof of Theorem 1.2
* 6.2 Improved \(W_{1}\) bounds: Proof of Theorem 1.4
## 1 Introduction
Random fields that arise in a variety of applications related to deep learning (Neal, 1996; Lee et al., 2018; de G. Matthews et al., 2018; Yang, 2019; Hanin, 2023) and stochastic optimization (Benveniste et al., 2012; Sirignano and Spiliopoulos, 2020; Chen et al., 2020; Rotskoff and Vanden-Eijnden, 2022; Balasubramanian et al., 2023) can exhibit limiting Gaussian behavior, rigorously understood through the theory of weak convergence. Combining this asymptotic behavior with the comprehensive theory of Gaussian random fields leads to insights about the qualitative and quantitative behavior of the random field of interest. In order to justify the accuracy of the approximation of quantities of interest by those of their limits, it is important to quantify the error in the Gaussian random field approximation. Indeed, in the standard multivariate central limit theorem, Berry-Esseen bounds precisely determine when the Gaussian behavior "kicks-in". Our main goal in this work is to develop such quantitative Berry-Esseen-type bounds for Gaussian random field approximations via Stein's method. We focus in particular on bounds in the Wasserstein metric (\(W_{1}\)) with respect to sup-norm, and highlight that convergence of these bounds to zero implies asymptotic weak convergence. Moreover, such bounds immediately imply Wasserstein bounds between important statistics of the fields, such as finite-dimensional distributions and extrema.
Stein's method has been extensively developed to provide quantitative distributional approximation bounds in both the Gaussian and non-Gaussian settings; we refer to Chen, Goldstein, and Shao (2011); Ross (2011); Nourdin and Peccati (2012) for a detailed treatment of the former. Recent works (see, for example, Barbour et al. (2021, Section 1.1)) have focused on developing Stein's method to derive Gaussian process approximations results. These works pertain to random process indexed by the interval \([0,T]\), for some \(T<\infty\). As is common in Stein's method, bounds are first developed in some "smooth" metric and are then transferred to the metric of interest, such as the Wasserstein, Levy-Prokhorov or Kolmogorov metrics, via various smoothing techniques.
For instance, Barbour et al. (2021, Lemma 1.10) develops an infinite-dimensional analog of a widely-used finite-dimensional Gaussian smoothing technique. Based on this foundation, the authors establish Gaussian process approximation bounds for processes indexed by the interval \([0,T]\), in the \(W_{1}\) and Levy-Prokhorov metrics. However, their smoothing technique is restricted to random processes indexed by some subset of the real line, as it relies on a detailed understanding of the Cameron-Martin space of one-dimensional Brownian motion. As there are no canonical Gaussian random fields _indexed_ by more general sets, e.g., the \(n\)-sphere, which have explicit Cameron-Martin spaces, new ideas are required to adapt these smoothing techniques to this setting.
A main contribution of this work is the development of a novel smoothing technique which can be used in conjunction with Stein's method to derive Gaussian random field approximation bounds in the \(W_{1}\) metric. The smoothing technique is based on the construction of a Gaussian random field with an explicit Cameron-Martin space via Laplacian operators. Though we focus on the case of random fields indexed by the \(n\)-sphere \(\mathcal{S}^{n}\), our approach is generally applicable to random fields indexed by any compact metric measure space \(\mathcal{M}\), subject to increased technical complexity.
We apply our general result to derive quantitative bounds for the \(W_{1}\) distance between the output of a wide random neural network indexed by inputs in \(\mathcal{S}^{n}\) and the corresponding Gaussian random field. Though wide random neural networks produce highly complicated random fields, such bounds allow them to be studied via their more tractable limiting Gaussian behavior. In the one hidden layer case, Neal (1996) argues that wide random neural networks asymptotically behave as Gaussian random fields. The works of de G. Matthews et al. (2018) and Lee et al. (2018) give heuristic and empirical evidence that general depth neural networks exhibit Gaussian random field limits. Very recently, Hanin (2023) proves that deep neural networks converge weakly to appropriately defined Gaussian random fields as the layer widths tend to infinity. At a high-level, one proceeds here by
first establishing convergence of finite-dimensional distributions, which typically follows directly from the multivariate CLT. Weak convergence then follows from tightness results. In a different but related direction, Li et al. (2022) provide a characterization of the limiting covariance matrix of the output of the neural network when evaluated at a finite set of points, as the depth and width tend to infinity at the same rate.
From a quantitative point of view, the question of how wide a random neural network has to be in order that the limiting Gaussian random field provides a good approximation is left unanswered by results that only demonstrate weak convergence. Works that address this gap include Eldan et al. (2021); Basteri and Trevisan (2022); Klukowski (2022); Bordino et al. (2023b), discussed in more detail in Section 1.2.1. However, results currently known to us have at least one of the following drawbacks: they (i) work in weaker topologies, such as Wasserstein metrics with respect to integral (e.g., \(L^{2}\)) distances, rather than the sup-norm, (ii) only provide approximation bounds for finite-dimensional distributions, and not at the random field level, (iii) require Gaussian or similarly restrictive assumptions on the random weights, or (iv) consider special cases like one-hidden-layer neural networks or use restricted activation functions, such as polynomials. In contrast, our work provides precise quantitative bounds for the error in approximating wide random neural networks with Gaussian random fields, without any of the above-mentioned restrictions.
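To make the object being approximated concrete, the following numpy sketch (our illustration, not taken from the paper) samples a width-\(m\) one-hidden-layer random network at a few points on the sphere; by the multivariate CLT, for large \(m\) its finite-dimensional distributions are close to those of a centered Gaussian field with covariance \(\mathbb{E}[\sigma(w\cdot x)\sigma(w\cdot y)]\). The ReLU activation and i.i.d. standard normal weights here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 5000                    # sphere S^n in R^(n+1); m = hidden width
x = rng.standard_normal((10, n + 1))
x /= np.linalg.norm(x, axis=1, keepdims=True)  # 10 points on S^2

# One-hidden-layer random network f(x) = m^{-1/2} sum_i v_i ReLU(w_i . x).
W = rng.standard_normal((m, n + 1))
v = rng.standard_normal(m)
f = np.maximum(W @ x.T, 0).T @ v / np.sqrt(m)  # field evaluated at 10 points

print(f)  # approximately a centered Gaussian vector for large m
```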
In the remainder of the introduction, we state and discuss our main results. Section 1.1 is devoted to our smoothing result, Theorem 1.1. Section 1.2 contains our Gaussian approximation results for wide random neural networks, Theorems 1.2 and 1.4.
### Bounds for random field approximations
We now formally describe our setting and main result. Consider a compact metric space \((\mathcal{M},\mathsf{d})\), equipped with a finite Borel measure \(\nu\) that is positive on open balls. Let \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) denote the (separable) Banach space of continuous functions \(f:\mathcal{M}\to\mathbb{R}^{d}\), equipped with the sup-norm \(\|f\|_{\infty}:=\sup_{x\in\mathcal{M}}\|f(x)\|_{2}\), where \(\|\cdot\|_{2}\) is the usual Euclidean norm in \(\mathbb{R}^{d}\). For two random fields \(F,H\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\), we are interested in the distributional approximation of the random field \(F\) by \(H\) in appropriate distances, which we introduce next.
\begin{table}
\begin{tabular}{|c|c|} \hline Notation & Description \\ \hline \hline \((\mathcal{M},\mathsf{d},\nu)\) & Metric space \((\mathcal{M},\mathsf{d})\) equipped with a measure \(\nu\) \\ \hline \(\mathcal{S}^{n}\) & \(n\)-sphere \\ \hline \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) & Banach space of continuous functions equipped with sup-norm \\ \hline \((\varepsilon,\delta)\) & Regularization and smoothing parameters respectively \\ \hline \(F,H\) & Random fields in \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) \\ \hline \(G\) & Gaussian Random Field used to approximate \(F\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) \\ \hline \(S\) & Smoothing Gaussian Random Field \\ \hline \(\mathsf{d}_{\mathcal{H}}(F,H)\) & Integral probability metric over a class of test functions \(\mathcal{H}\) \\ \hline \(D^{k}\) & \(k\)-th order Fréchet derivative \\ \hline \end{tabular}
\end{table}
Table 1: Summary of some main notations used.

For a function \(\zeta:\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\to\mathbb{R}\), we denote taking Fréchet derivatives by \(D,D^{2},\ldots\), and let the operator norm \(\|\cdot\|\) be defined for a \(k(\geqslant 1)\)-linear form \(A\) on \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) by \(\|A\|:=\sup_{\|f\|_{\infty}=1}|A[f,\ldots,f]|\). The (integral) probability distances we consider are given by the supremum of the differences \(|\operatorname{\mathbb{E}}\zeta(F)-\operatorname{\mathbb{E}}\zeta(H)|\) taken over all functions in some class \(\mathcal{H}\) of _test functions_ that map \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\to\mathbb{R}\):
\[\mathsf{d}_{\mathcal{H}}(F,H)\coloneqq\sup_{\zeta\in\mathcal{H}}\bigl{|}\mathbb{ E}[\zeta(F)]-\mathbb{E}[\zeta(H)]\bigr{|}.\]
In particular, we are interested in the case where the role of \(\mathcal{H}\) is played by
\[\mathcal{W}\coloneqq\left\{\zeta:\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\to \mathbb{R}:\sup_{f\neq h}\frac{|\zeta(f)-\zeta(h)|}{\|f-h\|_{\infty}}\leqslant 1 \right\},\]
the class of \(1\)-Lipschitz functions, in which case the distance is called the Wasserstein metric (\(W_{1}\)), denoted by \(\mathsf{d}_{\mathcal{W}}(F,H)\); convergence in this metric is known to imply weak convergence in (the Polish space) \((\mathrm{C}(\mathcal{M};\mathbb{R}^{d}),\|\cdot\|_{\infty})\); see Dudley (2018, Theorem 11.3.3).
To proceed, we introduce the following weaker metric based on the class of "smooth" test functions
\[\mathcal{F}\coloneqq\bigl{\{}\zeta:\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\to \mathbb{R}:\,\sup_{f}\|D^{k}\zeta(f)\|\leqslant 1,k=1,2;\,\sup_{f\neq h}\frac{\|D^{2} \zeta(f)-D^{2}\zeta(h)\|}{\|f-h\|_{\infty}}\leqslant\,1\bigr{\}}. \tag{1.1}\]
The metric \(\mathsf{d}_{\mathcal{F}}\) is well-suited to Stein's method, but, in contrast to analogous metrics in the finite dimensional case, it does not directly imply weak convergence, or provide bounds on more informative metrics such as the Wasserstein or Lévy-Prokhorov. Conceptually speaking, this disconnect can occur because it is not established that the test functions in \(\mathcal{F}\) capture tightness, and practically speaking, it can occur because the technical tools used in finite dimensions (approximation by smoother functions and boundary measure inequalities) do not generally directly carry over to infinite dimensions. Using our novel Laplacian-based smoothing method, we non-trivially adapt the techniques of Barbour et al. (2021), and prove the following general approximation result in the \(W_{1}\) metric for random fields indexed by the sphere.
**Theorem 1.1**.: _[Master Theorem] Let \(F,H\in\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\) be random fields, where \(\mathcal{S}^{n}\) is the unit sphere in \(\mathbb{R}^{n+1}\) for some finite integer \(n\). Then for any \(\varepsilon,\delta\in(0,1)\) and \(\iota>0\),_
\[\mathsf{d}_{\mathcal{W}}(F,H)\leqslant C\Bigl{(}d\,\delta^{-2}\varepsilon^{-2(n+\iota)}\,\mathsf{d}_{\mathcal{F}}(F,H)+\mathbb{E}\,\|F-F_{\varepsilon}\|_{\infty}+\mathbb{E}\,\|H-H_{\varepsilon}\|_{\infty}+\delta\sqrt{d}\Bigr{)}, \tag{1.2}\]
_where \(F_{\varepsilon}\) and \(H_{\varepsilon}\) are \(\varepsilon\)-regularizations of \(F\) and \(H\) defined at (2.8) below, and \(C\) is a constant depending only on \(n\) and \(\iota\)._
To explain the terms appearing in the bound, we first give the basic idea behind the proof of Theorem 1.1. Given a function \(\zeta:\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\to\mathbb{R}\) which is Lipschitz, we define a \((\varepsilon,\delta)\)-regularized version \(\zeta_{\varepsilon,\delta}\) such that for \(k=1,2\), \(D^{k}\zeta_{\varepsilon,\delta}(f)\) exists and has norm bounded uniformly in \(f\) of order smaller than \(\delta^{-2}\varepsilon^{-2(n+\iota)}\), and \(D^{2}\zeta_{\varepsilon,\delta}\) is Lipschitz with respect to the operator norm, with constant of order \(\delta^{-2}\varepsilon^{-2(n+\iota)}\). In particular, there is a constant \(c\) such that, \(c\,\delta^{2}\varepsilon^{2(n+\iota)}\zeta_{\varepsilon,\delta}\in\mathcal{F}\). Applying the triangle inequality yields
\[\bigl{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta(H)]\bigr{|}\leqslant\bigl{|} \mathbb{E}[\zeta_{\varepsilon,\delta}(F)]-\mathbb{E}[\zeta_{\varepsilon, \delta}(H)]\bigr{|}+\bigl{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta_{\varepsilon, \delta}(F)]\bigr{|}+\bigl{|}\mathbb{E}[\zeta_{\varepsilon,\delta}(H)]- \mathbb{E}[\zeta(H)]\bigr{|}.\]
Because \(c\,\delta^{2}\varepsilon^{2(n+\iota)}\zeta_{\varepsilon,\delta}\in\mathcal{F}\), the first term is bounded of order \(\delta^{-2}\varepsilon^{-2(n+\iota)}\times\mathsf{d}_{\mathcal{F}}\). In Theorem 4.1 we bound \(\mathsf{d}_{\mathcal{F}}(F,H)\) when \(H\) is a continuous and centered \(\mathbb{R}^{d}\) valued Gaussian random field, denoted by \((G(x))_{x\in\mathcal{M}}\), having non-negative definite covariance kernel \(C_{ij}(x,y)=\mathbb{E}[G_{i}(x)G_{j}(y)]\). This result follows from a development of Stein's method closely related to that of Barbour et al. (2023), following Barbour (1990).
In contrast to the first term in (1.2), the remaining three terms decay as \(\varepsilon\) and \(\delta\) become small; in particular, the second and third terms become small because \(\zeta\) and \(\zeta_{\varepsilon,\delta}\) become close. The quantity \(\|F-F_{\varepsilon}\|_{\infty}\) is closely related to the modulus of continuity of \(F\) (see Definition 5.1), and hence the term \(\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}\) can be further bounded using classical quantitative tightness arguments, which we present in Lemma 5.3. The optimal choice of \(\varepsilon\) and \(\delta\) is the one with the best tradeoff between the first and the remaining terms; in applications this may depend on the rate of decay of \(\mathsf{d}_{\mathcal{F}}(F,H)\) as a function of 'sample' or 'network' size, and must offset the prefactor of the first term, which tends to infinity as \(\varepsilon,\delta\to 0\).
While this approach is a standard way to parlay a preliminary bound in a smooth metric into a stronger one, the crux of the problem at the random field level is: _how does one construct \(\zeta_{\varepsilon,\delta}\)_? In finite dimensions, a fruitful regularization takes a function \(\zeta\) and replaces it with \(\zeta_{\delta}(x)=\mathbb{E}[\zeta(x+\delta S)]\), where \(S\) is a "smoothing" standard Gaussian. The smoothness of \(\zeta_{\delta}\) follows by making a change of measure and using the smoothness of the Gaussian density. See, for example, Raic (2018) and references therein for additional details.
For random fields _indexed_ by \(\mathcal{M}\) (or even \(\mathcal{S}^{n}\) with \(n\geqslant 2\)), there is no "standard" Gaussian and in choosing an appropriate smoothing Gaussian \(S\) there are two related potential difficulties. The first is that Cameron-Martin change of measure formulas involve Paley-Wiener integrals, which in general do not have closed form expressions. Moreover, the Cameron-Martin (or Reproducing Kernel Hilbert) space where the change of measure formula holds is typically restricted to a strict subset of \(\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\), meaning that \(\mathscr{L}(f+\delta S)\) and \(\mathscr{L}(\delta S)\) will be singular for many reasonable \(f\). Following the strategy of Barbour et al. (2021), one approach is to define a smoothing Gaussian random field \(S:\mathcal{M}\to\mathbb{R}^{d}\), where the Cameron-Martin space is a subset of smooth functions. In the simpler setting of Barbour et al. (2021) where \(\mathcal{M}=[0,T]\), \(S\) is taken to be Brownian motion with a random Gaussian initial value, and the Cameron-Martin space is well known to be absolutely continuous functions equipped with \(L^{2}\)-derivative inner product. In our more general setting of random fields indexed by \(\mathcal{M}\), there is no canonical Gaussian process like Brownian motion with a well-understood Cameron-Martin space.
In our construction of a smoothing Gaussian random field indexed by \(\mathcal{S}^{n}\), the associated Cameron-Martin space contains a class of functions in the domain of a certain fractional Laplacian and whose images are \(L^{2}\) bounded, and thus can be equipped with a related \(L^{2}\) inner product. With this function class in hand, there is still the issue that not all functions \(f\in\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\) are in the domain of a fractional Laplacian, and so we use a second \(\varepsilon\)-regularization, now of \(f\), given by \(f_{\varepsilon}(x)=\mathbb{E}f(B_{\varepsilon}^{(x)})\), where \((B_{t}^{(x)})_{t\geqslant 0}\) is a Brownian motion on \(\mathcal{S}^{n}\) started from \(x\). Now defining \(\zeta_{\varepsilon,\delta}(f)=\mathbb{E}[\zeta(f_{\varepsilon}+\delta S)]\), bounds on derivatives of \(\zeta_{\varepsilon,\delta}\) can be derived from quantitative information on the spectrum of the Laplacian, which is available in detail for \(\mathcal{M}=\mathcal{S}^{n}\). This procedure is elaborated in Section 2.
Although Theorem 4.1 for bounding \(\mathsf{d}_{\mathcal{F}}(F,G)\) holds for any compact metric measure space \((\mathcal{M},\mathsf{d},\nu)\), specializing to the case of \(\mathcal{M}=\mathcal{S}^{n}\) in Theorem 1.1 allows us to obtain explicit bounds in terms of the problem parameters (i.e., \(n\) and \(d\), etc.). The technology of our Laplacian-based smoothing approach applies in the more general setting. Explicit bounds can be obtained using our approach anytime appropriate spectral estimates of the associated Laplacian are available. Indeed, Gaussian random fields and Laplacian operators on general metric measure spaces are well-studied; see, for example, Sturm (1998) and Burago et al. (2019). We highlight that our proofs would also work with functionals of the Laplacian other than fractional powers, as long as they would ensure the required smoothness conditions are satisfied. This flexibility in our proof technique might turn out to be crucial in cases when \(\mathcal{M}\) is not the \(n\)-sphere.
### Application to wide random neural networks
We now show how Theorem 1.1 is used to obtain quantitative bounds on the distributional approximation of wide random neural networks by appropriately defined Gaussian random fields. Our first motivation to do so is as follows. In practice, widely used training algorithms like stochastic gradient descent are initialized randomly. In light of that, an interesting question was raised by Golikov and Yang (2022): _Does the distribution of the initial weights matter for the training process?_ The authors demonstrate that for a large class of distributions of the initial weights, wide random neural networks are Gaussian random fields in the limit. Based on this outcome, they argue that as long as the distribution of the weights are from this universality class, the answer to the above question is _no_. Our results in this section could be used to quantify this phenomenon.
Our second motivation is to initiate the study of the training dynamics of neural networks for prediction problems, at the random field level. Several works (Sirignano and Spiliopoulos, 2020; Chen et al., 2020; Rotskoff and Vanden-Eijnden, 2022) demonstrate that when neural networks are trained by gradient descent with small order step-sizes, certain functionals exhibit limiting Gaussian behavior along the training trajectory. Under larger order step-sizes, the works (Damian et al., 2022; Ba et al., 2022; Abbe et al., 2022) demonstrate that neural networks behave differently than Gaussian-process based prediction methods (including certain classes of kernel methods), thus suggesting the existence of a phase transition from Gaussian to non-Gaussian limits. Our result in this section, along with the associated proof techniques, take a first step towards understanding the above phenomena at the random field level, by developing quantitative information about the setting where the Gaussian behavior is observed.
Formally, we consider a fully connected \(L\)-layer neural network that is defined recursively through random fields \(F^{(\ell)}:\mathcal{M}\to\mathbb{R}^{n_{\ell}}\), \(\ell=1,\ldots,L\), where \(n_{1},\ldots,n_{L}\) are positive integers corresponding to the widths of the network, with \(n_{L}\) assumed constant. We also assume that \(\mathcal{M}\subset\mathbb{R}^{n_{0}}\). The random fields are generated by a collection of random matrices \((W^{(\ell)})_{\ell=0}^{L-1}\) where \(W^{(\ell)}:\mathbb{R}^{n_{\ell}}\to\mathbb{R}^{n_{\ell+1}}\), with \(W^{(0)}\) having i.i.d. rows, \(W^{(\ell)}\) having independent entries for \(1\leqslant \ell\leqslant L-1\), and a collection \((b^{(\ell)})_{\ell=0}^{L-1}\) of centered Gaussian "bias" vectors. For \(x\in\mathcal{M}\), we define
\[F^{(1)}(x) =W^{(0)}x+b^{(0)},\] \[F^{(\ell)}(x) =W^{(\ell-1)}\sigma\big{(}F^{(\ell-1)}(x)\big{)}+b^{(\ell-1)},\ \ \ell=2,\ldots,L,\]
where \(\sigma:\mathbb{R}\to\mathbb{R}\) is an activation function that we apply to vectors coordinate-wise. We assume that
\[\operatorname{Var}(W^{(\ell)}_{ij})=\frac{c^{(\ell)}_{w}}{n_{\ell}},\ \ \text{and}\ \operatorname{Var}(b^{(\ell)}_{i})=c^{(\ell)}_{b}.\]
The limiting Gaussian random field is defined inductively as follows. First let \(G^{(1)}=F^{(1)}\), which in general is not a Gaussian random field (since \(W^{(0)}_{ij}\) is not assumed Gaussian), and has covariance
\[C^{(1)}_{ij}(x,y)=\delta_{ij}\bigg{(}\frac{c^{(0)}_{w}}{n_{0}}\langle x,y \rangle+c^{(0)}_{b}\bigg{)},\]
where \(\delta_{ij}\) is the Kronecker delta, and \(\langle\cdot,\cdot\rangle\) is the usual Euclidean inner product. Given the distribution of \(G^{(\ell)}\) for some \(\ell\geqslant 1\), we define \(G^{(\ell+1)}\) to be a centered Gaussian random field with covariance
\[C^{(\ell+1)}_{ij}(x,y)=\delta_{ij}\bigg{(}c^{(\ell)}_{w}\operatorname{E}\Big{[} \sigma\big{(}G^{(\ell)}_{1}(x)\big{)}\sigma\big{(}G^{(\ell)}_{1}(y)\big{)} \Big{]}+c^{(\ell)}_{b}\bigg{)}.\]
As the rows of \(W^{(\ell)}\) are assumed i.i.d. and the network is fully connected, the components of \(F^{(\ell)}\), \(\ell\geqslant 1\), are exchangeable, so in particular identically distributed. Additionally, the covariance functions of \(F^{(\ell+1)}\) obey the same recurrence as the one above, with \(G_{1}^{(\ell)}\) replaced by \(F_{1}^{(\ell)}\), and hence have uncorrelated components. The Gaussian network \(G^{(\ell)}\) thus parallels the covariance structure of \(F^{(\ell)}\), with its components additionally made independent.
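To make the recursion concrete, here is a minimal numerical sketch (ours, not from the paper) that samples the finite-width field \(F^{(L)}\) on a grid of inputs and computes the covariance of the limiting field \(G^{(L)}\) by iterating the recursion above, with the inner expectation estimated by Monte Carlo. Gaussian weights, the \(\tanh\) activation, and all function and parameter names (`sample_network`, `limit_covariance`, the widths, `cw`, `cb`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network(xs, widths, cw, cb, sigma=np.tanh):
    """One draw of F^(L) at the columns of xs (shape n_0 x m); widths = [n_0, ..., n_L]."""
    pre = xs
    for ell in range(len(widths) - 1):
        n_in, n_out = widths[ell], widths[ell + 1]
        W = rng.normal(0.0, np.sqrt(cw[ell] / n_in), size=(n_out, n_in))
        b = rng.normal(0.0, np.sqrt(cb[ell]), size=(n_out, 1))
        pre = W @ (pre if ell == 0 else sigma(pre)) + b
    return pre  # shape n_L x m

def limit_covariance(xs, L, cw, cb, sigma=np.tanh, n_mc=100_000):
    """Covariance C^(L)(x_i, x_j) of one component of G^(L), via the recursion above."""
    n0, m = xs.shape
    C = cw[0] / n0 * (xs.T @ xs) + cb[0]                        # C^(1)
    for ell in range(1, L):
        Z = rng.multivariate_normal(np.zeros(m), C, size=n_mc)  # draws of G^(ell)_1
        C = cw[ell] * (sigma(Z).T @ sigma(Z)) / n_mc + cb[ell]  # Monte Carlo expectation
    return C

# Compare the empirical covariance of many finite-width draws with the limit.
xs = rng.normal(size=(3, 4))
xs /= np.linalg.norm(xs, axis=0)                # four inputs on the sphere S^2
widths, cw, cb = [3, 500, 500, 1], [1.0] * 3, [0.1] * 3
draws = np.array([sample_network(xs, widths, cw, cb)[0] for _ in range(1000)])
print(np.cov(draws.T, bias=True))               # approx. the limit for wide layers
print(limit_covariance(xs, L=3, cw=cw, cb=cb))
```

For wide hidden layers the two printed \(4\times 4\) matrices approximately agree, which is the finite-dimensional shadow of the random field approximation quantified below.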
We now state our results for neural networks with a Lipschitz activation function. Widely used activation functions such as ReLU, sigmoid, softmax, and tanh satisfy this assumption.
**Theorem 1.2**.: _Let \(\mathcal{S}^{n}\subset\mathbb{R}^{n+1}=:\mathbb{R}^{n_{0}}\) be the \(n\)-dimensional sphere, and \(G^{(\ell)},F^{(\ell)}:\mathcal{S}^{n}\to\mathbb{R}^{n_{\ell}}\), \(\ell=1,\ldots,L\) be defined as above. Assume \(\sigma\) is Lipschitz with constant \(\mathrm{Lip}_{\sigma}\). If there is a \(p>n\) and constants \(B^{(\ell)}\), \(\ell=0,\ldots,L-1\), independent of \(n_{1},\ldots,n_{L-1}\) such that_
\[\mathbb{E}\big{[}\big{(}W_{ij}^{(\ell)}\big{)}^{2p}\big{]}\leqslant\bigg{(} \frac{c_{w}^{(\ell)}}{n_{\ell}}\bigg{)}^{p}\big{(}B^{(\ell)}\big{)}^{p/2}, \tag{1.3}\]
_then for any \(\iota>0\), there is a constant \(c\) depending only on \((c_{w}^{(\ell)},c_{b}^{(\ell)},B^{(\ell)})_{\ell=0}^{L},n,p,\mathrm{Lip}_{ \sigma},\sigma(0),\iota\) such that_
\[\mathsf{d}_{\mathcal{W}}(F^{(L)},G^{(L)})\leqslant c\sum_{\ell=1}^{L-1}\biggl{(} n_{\ell+1}^{1/2}\biggl{(}\frac{n_{\ell+1}^{4}}{n_{\ell}}\biggr{)}^{(1-\frac{n}{p})/ (6(1-\frac{n}{p})+8(n+\iota))}\log(n_{\ell}/n_{\ell+1}^{4})\biggr{)}\prod_{j= \ell+1}^{L-1}\mathbb{E}\|W^{(j)}\|_{\mathrm{op}},\]
_where \(\|\cdot\|_{\mathrm{op}}\) denotes the matrix operator norm with respect to Euclidean distance._
**Remark 1.3**.: If \(W_{ij}^{(\ell)}=n_{\ell}^{-1/2}\widetilde{W}_{ij}^{(\ell)}\), with \(\widetilde{W}_{ij}^{(\ell)}\) sub-Gaussian, then according to Vershynin (2018, Exercises 4.4.6 and 4.4.7), we have
\[\mathbb{E}\|W^{(\ell)}\|_{\mathrm{op}}=\Theta\big{(}1+\sqrt{n_{\ell+1}/n_{ \ell}}\big{)}.\]
Note that under the same assumption, (1.3) is satisfied for all \(p\geqslant 2\). Thus, assuming \(n_{\ell}\) goes to infinity fast enough relative to \(n_{\ell+1}\) so that \(n_{\ell+1}/n_{\ell}\) is bounded, the final bound has rate
\[\sum_{\ell=1}^{L-1}n_{\ell+1}^{1/2}\bigg{(}\frac{n_{\ell+1}^{4}}{n_{\ell}} \bigg{)}^{\frac{1}{8n+6}+\varepsilon},\]
for any \(\varepsilon>0\). In the case \(n=1\) and \(L=2\), the rate of \(n_{1}^{-\frac{1}{14}+\varepsilon}\) matches that given in the similar but simpler setting in Barbour et al. (2021, Remark 1.9).
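For intuition about the exponent in Theorem 1.2, the following two-line computation (ours, with sample values of \(p\) and \(\iota\)) evaluates \((1-\tfrac{n}{p})/(6(1-\tfrac{n}{p})+8(n+\iota))\) alongside its sub-Gaussian limit \(1/(8n+6)\) as \(p\to\infty\) and \(\iota\to 0\):

```python
# Evaluate the rate exponent from Theorem 1.2 for sample values of n, p, iota.
def exponent(n, p, iota):
    r = 1 - n / p
    return r / (6 * r + 8 * (n + iota))

for n in (1, 2, 3):
    print(n, exponent(n, p=10 * n, iota=0.01), 1 / (8 * n + 6))
```

The output makes visible how quickly the rate degrades with the input dimension \(n\).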
To the best of our knowledge, Theorem 1.2 provides the first result in the literature for bounding the law of wide random neural networks of any depth and Lipschitz activation functions, to that of a Gaussian. We emphasize in particular that the stated bounds are at the random field level, with the metric being the \(W_{1}\) metric under the stronger sup-norm topology; see Section 1.2.1 for comparisons to prior works. We also emphasize that under a sub-Gaussian assumption on the entries of \(W^{(\ell)}\), the result in Theorem 1.2 actually provides a bound for the Gaussian approximation for every layer, i.e., for the quantity \(\sum_{\ell=1}^{L}\mathsf{d}_{\mathcal{W}}(F^{(\ell)},G^{(\ell)})\). Indeed, for sufficiently wide neural networks, using Vershynin (2018, Exercise 4.4.7) as outlined in Remark 1.3, the bound in Theorem 1.2, with a potentially different constant \(c\) having the same dependencies, applies to all layers simultaneously.
We next give a high-level idea behind the proof of Theorem 1.2. Note that, conditional on the \(\ell\)-th layer, layer \((\ell+1)\) is a sum of \(n_{\ell}\) random fields and a Gaussian:
\[F_{i}^{(\ell+1)}=\sum_{j=1}^{n_{\ell}}W_{ij}^{(\ell)}\sigma\big{(}F_{j}^{(\ell) }\big{)}+b_{i}^{(\ell)},\ \ i=1,\ldots,n_{\ell+1}.\]
Inductively, assuming an appropriate bound on the distributional distance between \(F^{(\ell)}\) and \(G^{(\ell)}\), we can bound the error made in the approximation
\[F_{i}^{(\ell+1)}\approx\sum_{j=1}^{n_{\ell}}W_{ij}^{(\ell)}\sigma\big{(}G_{j}^{( \ell)}\big{)}+b_{i}^{(\ell)},\ \ i=1,\ldots,n_{\ell+1}.\]
The field on the right-hand side has the same covariance as \(G^{(\ell+1)}\), and hence the approximation bound in Theorem 1.2 follows by recursive application of the Stein's method approximation Theorem 4.1 (summarized in Lemma 6.1) to bound \(\mathsf{d}_{\mathcal{F}}\), combined with Theorem 1.1.
As detailed in Remark 1.5, our next result shows that one gains an improved rate under the assumption that the activation function \(\sigma\) is three-times differentiable, and when smoothing is only performed in the final stage, in contrast to the result of Theorem 1.2, which is obtained by smoothing at each step of the recursion.
**Theorem 1.4**.: _Instantiate the conditions of Theorem 1.2 and assume in addition that the activation function \(\sigma\) has three bounded derivatives. Then, for any \(\iota>0\), there is a constant \(c\) depending only on \((c_{w}^{(\ell)},c_{b}^{(\ell)},B^{(\ell)})_{\ell=0}^{L},\,n,p,\iota,\) and \(\|\sigma^{(k)}\|_{\infty}\), the supremum of the \(k\)th derivative of \(\sigma\), \(k=1,2,3\), such that_
\[\mathsf{d}_{\mathcal{W}}(F^{(L)},G^{(L)})\leqslant c\sqrt{n_{L}}(n_{L}\beta_{ L}^{2})^{(1-\frac{n}{p})/(6(1-\frac{n}{p})+8(n+\iota))}\sqrt{\log(1/(n_{L} \beta_{L}^{2}))},\]
_where_
\[\beta_{L}\coloneqq\sum_{\ell=1}^{L-1}\frac{n_{\ell+1}^{3/2}}{\sqrt{n_{\ell}}} \prod_{j=\ell+1}^{L-1}\max\bigl{\{}1,\mathbb{E}\left[\|W^{(j)}\|_{\mathrm{op} }^{3}\right]\bigr{\}}. \tag{1.4}\]
**Remark 1.5**.: Under the same setting as in Remark 1.3, with \(C_{L}\) a constant depending only on \(L\), we have
\[\beta_{L}^{2}\leqslant C_{L}\sum_{\ell=1}^{L-1}\frac{n_{\ell+1}^{3}}{n_{\ell} }\quad\text{and hence}\quad\mathsf{d}_{\mathcal{W}}(F^{(L)},G^{(L)}) \leqslant\sqrt{n_{L}}\left(n_{L}\sum_{\ell=1}^{L-1}\frac{n_{\ell+1}^{3}}{n_{ \ell}}\right)^{\frac{1}{(8n+6)}+\varepsilon},\]
for any \(\varepsilon>0\), demonstrating the rate improvement obtained by Theorem 1.4.
#### 1.2.1 Comparison to related works
Eldan et al. (2021) studied Gaussian random field approximation bounds for the case \(L=2\) with Gaussian weights and three specific choices of activation function. They used the Wasserstein-2 distance with respect to the \(L^{p}\) topology on the sphere: for polynomial activations they work with \(p=\infty\), and for \(\mathsf{ReLU}\) and \(\mathsf{tanh}\) they work with \(p<\infty\). Following that work, Klukowski (2022) derived improved bounds in the Wasserstein-2 distance with respect to the \(L^{2}\) topology, assuming the rows of the weight matrix are drawn uniformly from the sphere. We remark that weak convergence with respect to integral norms (such as \(L^{p}\) with \(p<\infty\)) does not imply weak convergence of finite-dimensional distributions, or of other natural statistics such as the maximum.
Basteri and Trevisan (2022) give rates of convergence of finite-dimensional distributions for fully connected networks of general depth with Gaussian weights. The metric is Wasserstein-2 with respect to the Euclidean norm. Their bound exhibits multivariate convergence as long as \(n_{\ell}\) tends to infinity for each \(\ell=1,\ldots,L-1\), in any order. This phenomenon is a consequence of a very good relationship between the dimension and the number of terms in the rate of convergence in the multivariate CLT, stemming from the metric used there and the Gaussian assumptions on the weights.
More recently, Bordino et al. (2023b) used Stein's method to derive bounds for univariate distributional approximation of one-layer neural networks with Gaussian weights in the \(W_{1}\), Kolmogorov, and total variation metrics, as well as bounds on the error of approximating a multivariate output of the network by a Gaussian in the \(W_{1}\) metric. Their approach is based on a straightforward but laborious application of a Gaussian approximation result for functions of Gaussian random variables in Vidotto (2020), which is a multivariate refinement of the second-order Poincaré inequality version of Stein's method introduced by Chatterjee (2009).
#### 1.2.2 Future directions.
Obtaining a deeper understanding of the weak convergence of wide random neural networks to Gaussian (and non-Gaussian) random fields is an active area of research. Here, we highlight a few interesting directions which can be pursued based on our work.
Rate improvements: There are at least two directions to explore for improving the bounds of Theorems 1.2 and 1.4. The first, in the case of Gaussian weights, is to understand whether the proof approach of Basteri and Trevisan (2022) for the multivariate setting could be extended to the random field level. The second is to develop improved rates (in potentially weaker topologies, but still at the random field level) by combining our techniques with those in Hanin (2023). Both directions are intriguing but appear to be non-trivial at the random field level, and we leave them as future work to investigate.
Heavy-tailed weights: Motivated by constructing priors for Bayesian inference for neural networks, Neal (1996, Section 2.2) also heuristically examined the limits of single-layer neural networks whose weight matrices have stable entries. Recently, several works (Der and Lee, 2005; Jung et al., 2021; Lee et al., 2022; Fortuin et al., 2022; Favaro et al., 2023; Bordino et al., 2023a) showed that such neural networks (including deep ones) converge weakly to appropriately defined stable random fields. An interesting question that arises is whether one can establish quantitative distributional approximation bounds in the heavy-tailed setting. Our work provides a step in this direction. Indeed, our main result in Theorem 1.1 is immediately applicable. The remaining challenge will be in establishing a version of Theorem 4.1 for stable random fields. This could potentially be accomplished by extending recent works, for example, Xu (2019); Arras and Houdré (2019, 2022); Chen et al. (2023), on multivariate stable approximations to the random field setting.
The rest of the article is organized as follows. Section 2 defines and develops properties of our smoothing Gaussian process, which are then used in Section 3 to prove our general smoothing result, Theorem 1.1. Section 4 develops Stein's method for Gaussian processes, culminating in Theorem 4.1, which is used to bound \(\mathsf{d}_{\mathcal{F}}\). Section 5 uses classical quantitative chaining arguments along with heat kernel bounds to prove Lemma 5.3, which gives an easily applied method for bounding \(\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}\). Finally, Section 6 uses the theory developed in the previous sections to prove our wide neural network approximation results, Theorems 1.2 and 1.4.
**Acknowledgments.** We thank Volker Schlue for discussions regarding certain differential geometric aspects and Max Fathi for the suggestion to look at the Laplacian for smoothing. This project originated at the "Stein's Method: The Golden Anniversary" workshop organized by the Institute for Mathematical Sciences at the National University of Singapore in June-July, 2022. We thank the institute for the hospitality and the organizers for putting together the stimulating workshop. KB was supported in part by National Science Foundation (NSF) grant DMS-2053918.
## 2 Gaussian Smoothing for Random Fields indexed by the Sphere
We begin by constructing our Gaussian smoothing random field, with its covariance defined based on the powers of Laplacian operators, and specifying its Cameron-Martin space.
### Constructing a covariance and Cameron-Martin space from the Laplacian
To define our smoothing Gaussian random field, we construct a covariance function based on the Laplacian on the \(n\)-sphere \(\mathcal{S}^{n}\), which we view as embedded in \(\mathbb{R}^{n+1}\),
\[\mathcal{S}^{n}=\{x\in\mathbb{R}^{n+1}:\|x\|_{2}=1\}.\]
A standard way to define the Laplacian on the sphere is to "lift" functions \(f:\mathcal{S}^{n}\to\mathbb{R}\) to \(\tilde{f}:\mathbb{R}^{n+1}\setminus\{0\}\to\mathbb{R}\) by
\[\tilde{f}(x)=f(x/\|x\|_{2}).\]
Letting \(\widetilde{\Delta}\) denote the usual Laplacian on \(\mathbb{R}^{n+1}\), we can then define the Laplacian \(\Delta\) acting on twice differentiable functions on \(\mathcal{S}^{n}\) by
\[\Delta f(x)=\widetilde{\Delta}\tilde{f}(x),\ x\in\mathcal{S}^{n};\]
see, for example, Dai and Xu (2013, Corollary 1.4.3). The negative of the Laplacian \((-\Delta)\) is a positive definite operator on \(L^{2}(\mathcal{S}^{n};\mathbb{R})\) and has an orthonormal basis given by _spherical harmonics_. The eigenvalues of \((-\Delta)\) are
\[\lambda_{k}=k(k+n-1),\ \ k=0,1,2,\dots, \tag{2.1}\]
and an orthonormal basis for the eigenspace associated to \(\lambda_{k}\) is given by a collection of polynomials \(\mathscr{H}_{k}=\big{\{}\varphi_{k}^{(1)},\dots,\varphi_{k}^{(d_{k})}\big{\}}\), with
\[d_{k}:=\dim\mathscr{H}_{k}=\frac{2k+n-1}{k}\binom{n+k-2}{k-1}; \tag{2.2}\]
see, for example, Dai and Xu (2013, Corollary 1.1.4). The union \(\bigcup_{k\geqslant 0}\mathscr{H}_{k}\), of the sets of all basis vectors for the \(k^{th}\) eigenspace, gives an orthonormal basis for \(L^{2}(\mathcal{S}^{n};\mathbb{R})\). From here, we define the _zonal harmonics_
\[Z_{k}(x,y)=\sum_{j=1}^{d_{k}}\varphi_{k}^{(j)}(x)\varphi_{k}^{(j)}(y), \tag{2.3}\]
that for \(n\geqslant 2\) satisfy
\[Z_{k}(x,y)=\frac{\Gamma\big{(}(n+1)/2\big{)}(2k+n-1)}{2\pi^{(n+1)/2}(n-1)}C_{ k}^{(n-1)/2}\big{(}\langle x,y\rangle\big{)}, \tag{2.4}\]
where \(C_{k}^{\lambda},\lambda>0,k\geqslant-1\) are the _Gegenbauer polynomials_ defined by the three-term recurrence, for \(x\in[-1,1]\),
\[C_{k+1}^{\lambda}(x)=\frac{2(k+\lambda)}{k+1}xC_{k}^{\lambda}(x)-\frac{k+2 \lambda-1}{k+1}C_{k-1}^{\lambda}(x)\quad\text{for $k\geqslant 1$},\]
with initial values \(C_{-1}^{\lambda}\equiv 0\) and \(C_{0}^{\lambda}\equiv 1\). For \(n=1\), \(Z_{k}(x,y)=\pi^{-1}\cos\bigl{(}k(\theta_{x}-\theta_{y})\bigr{)}\), where \(\theta_{x},\theta_{y}\) are the polar angles of \(x,y\), i.e., \(x=(\cos(\theta_{x}),\sin(\theta_{x}))\). For our purposes, the key property of \(Z_{k}\) is that
\[|Z_{k}(x,y)|\leqslant Z_{k}(x,x)=\frac{\Gamma\big{(}(n+1)/2\big{)}}{2\pi^{(n+ 1)/2}}d_{k}, \tag{2.5}\]
see Dai and Xu (2013, Corollary 1.2.7), noting their different normalization at (1.1.1) of the inner product on the sphere. Thus, for any \(\iota>0\), we can define the kernel \(C^{(\iota)}=(C_{ij}^{(\iota)})_{i,j=1}^{d}\) on \(\mathcal{S}^{n}\) by
\[C_{ij}^{(\iota)}(x,y)=\delta_{ij}\sum_{k\geqslant 1}\frac{Z_{k}(x,y)}{\lambda_{k}^{n_{\iota}}}, \tag{2.6}\]
where \(n_{\iota}:=(n+\iota)/2\). Because\({}^{1}\)\(\lambda_{k}\asymp k^{2}\) and \(d_{k}\asymp k^{n-1}\), by (2.5) we see that \(|Z_{k}(x,y)|\) is \(\mathrm{O}(k^{n-1})\) uniformly, hence the sum (2.6) converges absolutely and uniformly. Since each \(Z_{k}\) is continuous and the sphere is compact, \(C^{(\iota)}\) is continuous and positive definite due to the decomposition (2.3). We fix \(n\geqslant 2\) and \(\iota>0\), and set \(C=C^{(\iota)}\) for the remaining part of this section. With our covariance kernel in hand, we define our smoothing random field \(S\) and its Cameron-Martin space.
Footnote 1: For two functions \(f,g\), \(f\asymp g\) means that there exist absolute constants \(c,C>0\) such that \(c|g|\leqslant|f|\leqslant C|g|\).
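Before proceeding, here is a small numerical sketch (ours; the function names are illustrative) of the ingredients just introduced: it evaluates \(C_{k}^{\lambda}\) by the three-term recurrence and checks numerically that \(|Z_{k}(x,y)|\) is maximized on the diagonal, as asserted in (2.5).

```python
import numpy as np
from math import gamma, pi

def gegenbauer(k, lam, x):
    """C_k^lam(x) via the three-term recurrence, with C_{-1} = 0 and C_0 = 1."""
    c_prev, c_curr = np.zeros_like(x), np.ones_like(x)
    for j in range(k):
        c_prev, c_curr = c_curr, (2 * (j + lam) * x * c_curr
                                  - (j + 2 * lam - 1) * c_prev) / (j + 1)
    return c_curr

def zonal(k, n, t):
    """Z_k(x, y) on S^n (n >= 2) as a function of t = <x, y>, following (2.4)."""
    coeff = gamma((n + 1) / 2) * (2 * k + n - 1) / (2 * pi ** ((n + 1) / 2) * (n - 1))
    return coeff * gegenbauer(k, (n - 1) / 2, t)

t = np.linspace(-1.0, 1.0, 2001)
vals = zonal(5, 2, t)                          # for n = 2 these are Legendre polynomials
assert np.max(np.abs(vals)) <= vals[-1] + 1e-12  # the diagonal bound (2.5)
```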
**Definition 2.1** (The Smoothing Gaussian random field \(S\) and its Cameron-Martin space).: Let \(S\) be the centered \(\mathbb{R}^{d}\)-valued Gaussian random field indexed by \(\mathcal{S}^{n}\) with covariance function \(C\) given by (2.6). Let \(\mathbf{e}_{i}\in\mathbb{R}^{d}\), for \(i=1\ldots,d\) be the standard basis vectors for \(\mathbb{R}^{d}\). The associated orthonormal decomposition for an \(h\in L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})\) is given by
\[h=\sum_{i=1}^{d}\mathbf{e}_{i}\sum_{k\geqslant 1}\sum_{j=1}^{d_{k}}h_{k,i}^{(j) }\varphi_{k}^{(j)}\quad\text{where}\quad h_{k,i}^{(j)}=\int_{\mathcal{S}^{n}} h_{i}(x)\varphi_{k}^{(j)}(x)\mathrm{d}x, \tag{2.7}\]
where \(\mathrm{d}x\) is the volume measure on the sphere. We define the _Cameron-Martin_ or _Reproducing Kernel Hilbert_ space \(H\) of \(S\) to be the subset of \(L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})\) defined by
\[H=\bigg{\{}h\in L^{2}(\mathcal{S}^{n};\mathbb{R}^{d}):\sum_{k\geqslant 1} \lambda_{k}^{n_{\iota}}\sum_{i=1}^{d}\sum_{j=1}^{d_{k}}(h_{k,i}^{(j)})^{2}<\infty \bigg{\}},\]
equipped with inner product
\[\langle h,g\rangle_{H}:=\sum_{k\geqslant 1}\lambda_{k}^{n_{\iota}}\sum_{i=1}^{d} \sum_{j=1}^{d_{k}}h_{k,i}^{(j)}g_{k,i}^{(j)}.\]
There is the following alternative description of the Cameron-Martin space and inner product. We define the fractional Laplacian operator \((-\Delta)^{\alpha}\) for any \(\alpha>0\) through the orthonormal basis \((-\Delta)^{\alpha}\varphi_{k}^{(j)}:=\lambda_{k}^{\alpha}\varphi_{k}^{(j)}\), and for \(h:\mathcal{S}^{n}\to\mathbb{R}^{d}\) we write \((-\Delta)^{\alpha}h\) for the fractional-Laplacian applied coordinate-wise.
**Proposition 2.2**.: _If \(h,g\in L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})\) are such that \((-\Delta)^{\frac{1}{2}n_{\iota}}h,(-\Delta)^{\frac{1}{2}n_{\iota}}g\in L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})\), then \(h,g\in H\) and_
\[\langle h,g\rangle_{H}=\big{\langle}(-\Delta)^{\frac{1}{2}n_{\iota}}h,(-\Delta)^{\frac{1}{2}n_{\iota}}g\big{\rangle}_{L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})}.\]
Proof.: First note that
\[\big{\langle}(-\Delta)^{\frac{1}{2}n_{\iota}}h,(-\Delta)^{\frac{1}{2}n_{\iota}}g\big{\rangle}_{L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})}=\sum_{i=1}^{d}\int_{\mathcal{S}^{n}}(-\Delta)^{\frac{1}{2}n_{\iota}}h_{i}(x)(-\Delta)^{\frac{1}{2}n_{\iota}}g_{i}(x)\,\mathrm{d}x.\]
Thus, by additivity, it suffices to show the result for \(d=1\). Since \((-\Delta)^{\frac{1}{2}n_{\iota}}h\in L^{2}(\mathcal{S}^{n})\), we can compute the coefficients in its \(L^{2}(\mathcal{S}^{n})\) expansion (2.7) as
\[\int(-\Delta)^{\frac{1}{2}n_{\iota}}h(x)\varphi_{k}^{(j)}(x)\mathrm{d}x =\sum_{\ell\geqslant 1}\lambda_{\ell}^{\frac{1}{2}n_{\iota}}\sum_{i=1}^{d_{\ell}}h_{\ell}^{(i)}\int\varphi_{\ell}^{(i)}(x)\varphi_{k}^{(j)}(x)\mathrm{d}x=\lambda_{k}^{\frac{1}{2}n_{\iota}}h_{k}^{(j)},\]
where the second equality follows from orthonormality. Thus, we have that
\[\big{\langle}(-\Delta)^{\frac{1}{2}n_{\iota}}h,(-\Delta)^{\frac{1}{2}n_{\iota}}g\big{\rangle}_{L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})}=\sum_{k\geqslant 1}\lambda_{k}^{n_{\iota}}\sum_{j=1}^{d_{k}}h_{k}^{(j)}g_{k}^{(j)}=\langle h,g\rangle_{H}.\]
To explicitly state the Cameron-Martin change of measure formula for \(S\), we first provide its Karhunen-Loève expansion; see Adler and Taylor (2007, Chapter 3).
**Theorem 2.3** (Karhunen-Loève Expansion for the Smoothing Gaussian random field \(S\)).: _There exist \((\mathcal{Z}_{k,i}^{(j)}:k\geqslant 1,1\leqslant j\leqslant d_{k},1\leqslant i \leqslant d)\) independent centered normal random variables with \(\operatorname{Var}(\mathcal{Z}_{k,i}^{(j)})=\lambda_{k}^{-n_{\iota}}\) such that_
\[S_{i}=\sum_{k\geqslant 1}\sum_{j=1}^{d_{k}}\mathcal{Z}_{k,i}^{(j)}\varphi_{k}^ {(j)},\]
_where the convergence holds in \(L^{2}\) and almost surely, uniformly on \(\mathcal{S}^{n}\)._
With this result, we have the following natural definition.
**Definition 2.4** (Paley-Wiener integral).: For \(h\in H\) with \(L^{2}\) expansion (2.7), the Paley-Wiener integral with respect to \(S\) is the centered normal random variable with variance \(\langle h,h\rangle_{H}\) given by
\[\langle h,S\rangle_{H}:=\sum_{i=1}^{d}\sum_{k\geqslant 1}\lambda_{k}^{n_{\iota}} \sum_{j=1}^{d_{k}}\mathcal{Z}_{k,i}^{(j)}h_{k,i}^{(j)},\]
where the \(\mathcal{Z}_{k,i}^{(j)}\) are as in Theorem 2.3.
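Theorem 2.3 also gives a direct way to simulate \(S\). The following minimal sketch (ours) does so for the circle case \(n=1\) with \(d=1\), where \(\lambda_{k}=k^{2}\), \(d_{k}=2\), and the eigenfunctions are \(\cos(k\theta)/\sqrt{\pi}\) and \(\sin(k\theta)/\sqrt{\pi}\); the KL coefficient variances \(\lambda_{k}^{-n_{\iota}}=k^{-(1+\iota)}\) make the truncation error small. The function name and the choices of \(\iota\) and the truncation level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_smoothing_field(thetas, iota=0.5, k_max=500):
    """One approximate draw of S on the circle, truncating the KL series at k_max."""
    k = np.arange(1, k_max + 1)
    std = k.astype(float) ** (-(1.0 + iota) / 2)   # sqrt of Var = lambda_k^{-n_iota}
    a = rng.normal(0.0, std)                       # coefficients of cos(k theta)/sqrt(pi)
    b = rng.normal(0.0, std)                       # coefficients of sin(k theta)/sqrt(pi)
    basis_c = np.cos(np.outer(k, thetas)) / np.sqrt(np.pi)
    basis_s = np.sin(np.outer(k, thetas)) / np.sqrt(np.pi)
    return a @ basis_c + b @ basis_s

thetas = np.linspace(0, 2 * np.pi, 400, endpoint=False)
S_path = sample_smoothing_field(thetas)            # one continuous sample path
```

The Paley-Wiener integral \(\langle h,S\rangle_{H}\) of Definition 2.4 can be approximated from the same coefficients, weighting the Fourier coefficients of \(h\) by \(\lambda_{k}^{n_{\iota}}\).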
We can now formally state the Cameron-Martin change of measure formula for \(S\), which follows from an application of a theorem of Kakutani (1948) on the absolute continuity of infinite product measures.
**Theorem 2.5** (Cameron-Martin change of measure for the Smoothing Gaussian random field \(S\)).: _For any \(h\in H\), \(\mathscr{L}(h+S)\) has Radon-Nikodym derivative with respect to \(\mathscr{L}(S)\), given by_
\[\frac{d\mathscr{L}(h+S)}{d\mathscr{L}(S)}=\exp\bigl{\{}\langle h,S\rangle_{H }-\tfrac{1}{2}\langle h,h\rangle_{H}\bigr{\}}.\]
### Regularization to the Cameron-Martin space
In the previous section, we provided the Cameron-Martin change of measure formula for our smoothing Gaussian random field \(S\), but it only applies to functions \(f\in\operatorname{C}(\mathcal{S}^{n};\mathbb{R}^{d})\) that are sufficiently smooth. Thus, we define the \(\varepsilon\)-regularization of \(f\) by
\[f_{\varepsilon}(x)=\bigl{(}f_{\varepsilon,i}(x)\bigr{)}_{i=1}^{d}=\bigl{(}e^{ \varepsilon\frac{\Delta}{2}}f_{i}(x)\bigr{)}_{i=1}^{d}=\sum_{i=1}^{d}\mathbf{e} _{i}\sum_{k\geqslant 1}e^{-\frac{\varepsilon\lambda_{k}}{2}}\sum_{j=1}^{d_{k}}f_{k, i}^{(j)}\varphi_{k}^{(j)}(x). \tag{2.8}\]
The \(\varepsilon\)-regularized \(f_{\varepsilon}(x)\) equals \(\mathbb{E}\left[f(B_{\varepsilon}^{(x)})\right]\), where \((B_{t}^{(x)})_{t\geqslant 0}\) is a Brownian motion on the sphere started from \(x\), applied coordinate-wise to the \(\mathbb{R}^{d}\)-valued \(f\); see Bakry et al. (2014). The next proposition uses this representation of \(f_{\varepsilon}\) in terms of the "heat kernel" for Brownian motion, which will be useful to derive smoothness properties.
**Proposition 2.6**.: _Let_
\[p(x,y;\varepsilon)=\sum_{k\geqslant 1}e^{-\frac{\varepsilon\lambda_{k}}{2}}\sum_{j= 1}^{d_{k}}\varphi_{k}^{(j)}(x)\varphi_{k}^{(j)}(y)=\sum_{k\geqslant 1}e^{-\frac{ \varepsilon\lambda_{k}}{2}}Z_{k}(x,y), \tag{2.9}\]
_using the definition of \(Z_{k}\) in (2.3). Then for any bounded and measurable \(f:\mathcal{S}^{n}\to\mathbb{R}^{d}\),_
\[f_{\varepsilon,i}(x)=\int_{\mathcal{S}^{n}}p(x,y;\varepsilon)f_{i}(y)\mathrm{ d}y. \tag{2.10}\]
Proof.: Dropping the subscript \(i\), we have by (2.9), (2.5), (2.2), and Fubini's theorem that
\[\int_{\mathcal{S}^{n}}p(x,y;\varepsilon)f(y)\mathrm{d}y =\sum_{k\geqslant 1}e^{-\frac{\varepsilon\lambda_{k}}{2}}\int_{ \mathcal{S}^{n}}f(y)Z_{k}(x,y)\mathrm{d}y\] \[=\sum_{k\geqslant 1}e^{-\frac{\varepsilon\lambda_{k}}{2}}\sum_{j= 1}^{d_{k}}f_{k}^{(j)}\varphi_{k}^{(j)}(x),\]
which is the same as (2.8).
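On the circle (\(n=1\)), the regularization (2.8) is simply the Fourier multiplier \(e^{-\varepsilon k^{2}/2}\) applied to the modes \(k\geqslant 1\), so Proposition 2.6 can be illustrated with an FFT. The sketch below (ours; the function name and test function are illustrative) applies it to a discontinuous bounded function; note that, matching the sum over \(k\geqslant 1\) in (2.8), the constant mode is dropped.

```python
import numpy as np

def regularize(f_vals, eps):
    """Apply (2.8) to samples of f on a uniform grid of the circle via the FFT."""
    m = len(f_vals)
    fhat = np.fft.rfft(f_vals)
    k = np.arange(len(fhat), dtype=float)
    fhat = fhat * np.exp(-eps * k**2 / 2)   # heat-kernel multiplier exp(-eps*lambda_k/2)
    fhat[0] = 0.0                           # drop k = 0, matching the sum over k >= 1
    return np.fft.irfft(fhat, n=m)

thetas = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
f = np.sign(np.sin(3 * thetas))             # bounded, discontinuous test function
f_eps = regularize(f, eps=0.01)             # smooth approximation of f (mean is ~0 here)
```

The discontinuities are smoothed at scale roughly \(\sqrt{\varepsilon}\), which is the mechanism behind the \(\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}\) terms in (1.2).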
We are now in a position to derive bounds on \(\|(-\Delta)^{\alpha}f_{\varepsilon}\|_{\infty}\).
**Proposition 2.7**.: _If \(f:\mathcal{S}^{n}\to\mathbb{R}^{d}\) is bounded and measurable, then for any \(\alpha>0\), \((-\Delta)^{\alpha}f_{\varepsilon}\) exists and \((-\Delta)^{\alpha}f_{\varepsilon}\in L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})\). Moreover, there is a constant \(c=c(n,\alpha)\) depending only on \(n\) and \(\alpha\) such that_
\[\|(-\Delta)^{\alpha}f_{\varepsilon,i}\|_{\infty}\leqslant c\,\|f_{i}\|_{ \infty}\varepsilon^{-(2\alpha+n)/2}.\]
Proof.: By (2.4) each (lifted) \(Z_{k}(\cdot,y)\) is infinitely differentiable, with derivatives growing in absolute value at most polynomially in \(k\). Thus, using (2.9), that \(\lambda_{k}\asymp k^{2}\), and dominated convergence, \((-\Delta_{x})^{\alpha}p(x,y;\varepsilon)\) is well-defined and
\[(-\Delta_{x})^{\alpha}p(x,y;\varepsilon)=\sum_{k\geqslant 1}e^{-\frac{ \varepsilon\lambda_{k}}{2}}(-\Delta_{x})^{\alpha}Z_{k}(x,y).\]
Now, dropping the \(i\) subscript, and using (2.9) and (2.3), then (2.5) and (2.2) (which give \(d_{k}=\mathrm{O}(k^{n-1})\)) together with (2.1) to justify dominated convergence, we have
\[\big{|}(-\Delta)^{\alpha}f_{\varepsilon}(x)\big{|} =\bigg{|}\int_{\mathcal{S}^{n}}(-\Delta_{x})^{\alpha}p(x,y; \varepsilon)f(y)\mathrm{d}y\bigg{|}\] \[\leqslant\|f\|_{\infty}\int_{\mathcal{S}^{n}}\big{|}(-\Delta_{x} )^{\alpha}p(x,y;\varepsilon)\big{|}\mathrm{d}y\] \[=\|f\|_{\infty}\int_{\mathcal{S}^{n}}\biggl{|}\sum_{k\geqslant 1}e^{- \frac{\varepsilon\lambda_{k}}{2}}(-\Delta_{x})^{\alpha}Z_{k}(x,y)\bigg{|} \mathrm{d}y\] \[\leqslant\|f\|_{\infty}\int_{\mathcal{S}^{n}}\biggl{|}\sum_{k \geqslant 1}\lambda_{k}^{\alpha}e^{-\frac{\varepsilon\lambda_{k}}{2}}Z_{k}(x,y) \bigg{|}\mathrm{d}y\] \[\leqslant\|f\|_{\infty}\sum_{k\geqslant 1}\lambda_{k}^{\alpha}e^{- \frac{\varepsilon\lambda_{k}}{2}}d_{k}\] \[\leqslant c\|f\|_{\infty}\sum_{k\geqslant 1}k^{2\alpha+n-1}e^{- \frac{\varepsilon k^{2}}{2}}.\]
By comparing this sum with
\[\int_{0}^{\infty}(\varepsilon/2)^{(2\alpha+n)/2}x^{2\alpha+n-1}e^{-\varepsilon x ^{2}/2}\mathrm{d}x=\tfrac{1}{2}\Gamma(\alpha+n/2),\]
we find
\[\big{|}(-\Delta)^{\alpha}f_{\varepsilon}(x)\big{|}\leqslant c\,\varepsilon^{-( 2\alpha+n)/2}\|f\|_{\infty},\]
where \(c\) is a constant depending only on \(n\) and \(\alpha\), as desired.
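On the circle this scaling is easy to probe numerically. In the sketch below (ours; the function name and test function are illustrative), \((-\Delta)^{\alpha}f_{\varepsilon}\) is computed via the Fourier multiplier \(k^{2\alpha}e^{-\varepsilon k^{2}/2}\) (recall \(\lambda_{k}=k^{2}\) for \(n=1\)); halving \(\varepsilon\) should multiply the sup-norm by roughly \(2^{(2\alpha+n)/2}\approx 2.8\) for \(\alpha=1\), \(n=1\).

```python
import numpy as np

def frac_laplacian_reg(f_vals, eps, alpha):
    """(-Delta)^alpha f_eps on a uniform grid of the circle (n = 1)."""
    fhat = np.fft.rfft(f_vals)
    k = np.arange(len(fhat), dtype=float)
    mult = k ** (2 * alpha) * np.exp(-eps * k**2 / 2)  # lambda_k^alpha * heat multiplier
    mult[0] = 0.0                                      # modes k >= 1 only, as in (2.8)
    return np.fft.irfft(fhat * mult, n=len(f_vals))

thetas = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
f = np.sign(np.cos(thetas))                            # bounded test function
for eps in (0.04, 0.02, 0.01):
    g = frac_laplacian_reg(f, eps, alpha=1.0)
    print(eps, np.abs(g).max())                        # grows like eps^{-(2*alpha+1)/2}
```

The extra factor \(k^{2\alpha}\) in the multiplier is exactly what the Gaussian factor \(e^{-\varepsilon k^{2}/2}\) tames, at a cost that blows up as \(\varepsilon\to 0\), as quantified by Proposition 2.7.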
Propositions 2.2 and 2.7 imply that \(f_{\varepsilon}\in H\) for bounded and measurable \(f\), and also give the following lemma bounding \(|\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H}|\), whose proof is straightforward.
**Lemma 2.8**.: _If \(f,g\) are bounded and measurable functions \(\mathcal{S}^{n}\to\mathbb{R}^{d}\), then there is a constant \(c=c(n,\iota)\) depending only on \(n\) and \(\iota\) such that_
\[\big{|}\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H}\big{|}=\big{|}\langle(-\Delta)^{\frac{1}{2}n_{\iota}}f_{\varepsilon},(-\Delta)^{\frac{1}{2}n_{\iota}}g_{\varepsilon}\big{\rangle}_{L^{2}(\mathcal{S}^{n};\mathbb{R}^{d})}\big{|}\leqslant c\,d\|f\|_{\infty}\|g\|_{\infty}\varepsilon^{-(2n+\iota)}.\]
### Smoothing using \(\boldsymbol{S}\) and regularization
We now use the \(f_{\varepsilon}\) regularization given in the last section to define a \((\varepsilon,\delta)\)-regularized version of a test function \(\zeta\). The following result is an analog of Barbour et al. (2021, Lemma 1.10).
**Theorem 2.9**.: _Let \(\zeta:\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\to\mathbb{R}\) and, for \(f:\mathcal{S}^{n}\to\mathbb{R}^{d}\) bounded and measurable, define_
\[\zeta_{\varepsilon,\delta}(f):=\mathbb{E}[\zeta(f_{\varepsilon}+\delta S)],\]
_where \(f_{\varepsilon}\) is the \(\varepsilon\)-regularization defined at (2.8). If \(\zeta\) is bounded or Lipschitz, then \(\zeta_{\varepsilon,\delta}\) is infinitely differentiable. Moreover, for every \(k\geqslant 0\) there is a constant \(c\) depending only on \(k\), \(n\) and \(\iota\), such that if \(\zeta\) is bounded, then_
\[\|D^{k}\zeta_{\varepsilon,\delta}\|\leqslant c\,d^{k/2}\delta^{-k}\varepsilon ^{-k(n+\iota)}\|\zeta\|_{\infty},\]
_and if \(\zeta\) is \(1\)-Lipschitz and \(f,h:\mathcal{S}^{n}\to\mathbb{R}^{d}\) are bounded and measurable, then_
\[\|D^{k}\zeta_{\varepsilon,\delta}(f)-D^{k}\zeta_{\varepsilon,\delta}(h)\| \leqslant c\,d^{k/2}\delta^{-k}\varepsilon^{-k(n+\iota)}\|f-h\|_{\infty}. \tag{2.11}\]
Proof.: The proof is closely related to that of Barbour et al. (2021, Lemma 1.10), where the Cameron-Martin inner product and \(\varepsilon\)-regularization are simpler. Intuition behind the manipulations below can be found there.
Firstly, \(\zeta_{\varepsilon,\delta}\) is clearly well-defined if \(\zeta\) is bounded. If \(\zeta\) is \(C\)-Lipschitz, then
\[\big{|}\zeta_{\varepsilon,\delta}(f)-\zeta(f_{\varepsilon})\big{|}\leqslant C \delta\,\mathbb{E}\,\|S\|_{\infty}<\infty,\]
where the last inequality is Fernique's theorem (Fernique, 1970). Moreover, \(\zeta_{\varepsilon,\delta}\) is measurable since, from (2.10), \(f\mapsto f_{\varepsilon}\) is continuous with respect to sup-norm, as is \((f,g)\mapsto f+g\) in product topology. Thus, \((f,s)\mapsto\zeta(f_{\varepsilon}+\delta s)\) is measurable with respect to product topology.
We claim that for \(\zeta\) bounded, \(k\geqslant 1\) and \(g^{(i)}\in\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\), \(i=1,\ldots,k\), we have
\[D^{k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]=\mathbb{E}\bigg{[} \zeta(\delta S)e^{\Psi_{\varepsilon}(f)}\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{ b\in\pi}D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{]}, \tag{2.12}\]
where
\[\Psi_{\varepsilon}(f)=\tfrac{1}{\delta}\langle f_{\varepsilon},S\rangle_{H}- \tfrac{1}{2\delta^{2}}\langle f_{\varepsilon},f_{\varepsilon}\rangle_{H}.\]
In (2.12) \(\mathcal{P}_{k,2}\) is the set of all partitions of \(\{1,\ldots,k\}\), whose blocks have at most \(2\) elements; \(b\in\pi\) means that \(b\) is a block of \(\pi\), and we denote its cardinality by \(|b|\). When \(b=\{i\}\) the expression \(D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\) is defined as
\[D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]=D\Psi_{\varepsilon}(f)[g^{(i)}]=\delta^{ -1}\big{\langle}g^{(i)}_{\varepsilon},S-\delta^{-1}f_{\varepsilon}\big{\rangle} _{H},\]
and when \(|b|=2\) is given by \(b=\{i_{1},i_{2}\}\), then
\[D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]=D^{2}\Psi_{\varepsilon}(f)[g^{(i_{1})},g ^{(i_{2})}]=-\delta^{-2}\langle g^{(i_{1})}_{\varepsilon},g^{(i_{2})}_{ \varepsilon}\rangle_{H},\]
which we note does not depend on \(f\). Compare to Barbour et al. (2021, Equation (2.11)) with \(\Theta\equiv 0\). Assuming (2.12), the Cameron-Martin Theorem 2.5 implies that
\[D^{k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]=\mathbb{E}\bigg{[} \zeta(f_{\varepsilon}+\delta S)\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{b\in\pi} \widehat{D}^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{]}, \tag{2.13}\]
where \(\widehat{D}^{2}\Psi_{\varepsilon}(f)=D^{2}\Psi_{\varepsilon}(f)\), and
\[\widehat{D}\Psi_{\varepsilon}(f)[g^{(i)}]=\delta^{-1}\big{\langle}g^{(i)}_{\varepsilon},S\big{\rangle}_{H}\sim\text{Normal}\big{(}0,\delta^{-2}\langle g^{(i)}_{\varepsilon},g^{(i)}_{\varepsilon}\rangle_{H}\big{)},\]
and we note that \(\widehat{D}^{|b|}\Psi_{\varepsilon}(f)\) does not depend on \(f\) for \(|b|\in\{1,2\}\). Technically, we are applying the Cameron-Martin change of measure formula to the joint distribution of the random variables \(\big{(}\big{\langle}g^{(i)}_{\varepsilon},S-\delta^{-1}f_{\varepsilon}\big{\rangle}_{H}\big{)}_{i=1}^{k}\) and \((S-\delta^{-1}f_{\varepsilon})\), which follows in a straightforward way from Kakutani's theorem and the definition of the Paley-Wiener integral.
By Lemma 2.8, we have
\[\big{|}\langle g^{(i_{1})}_{\varepsilon},g^{(i_{2})}_{\varepsilon}\rangle_{H} \big{|}\leqslant c\,d\|g^{(i_{1})}\|_{\infty}\|g^{(i_{2})}\|_{\infty} \varepsilon^{-(2n+\iota)}, \tag{2.14}\]
where \(c\) is a constant depending only on \(n_{\iota}\). Thus, if \(\zeta\) is bounded, we have
\[\big{|}D^{k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]\big{|} \leqslant\|\zeta\|_{\infty}\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{ \begin{subarray}{c}b\in\pi\\ |b|=2\end{subarray}}\bigl{|}\widehat{D}^{2}\Psi_{\varepsilon}(f)[g^{(b)}] \bigr{|}\,\mathbb{E}\prod_{\begin{subarray}{c}b\in\pi\\ |b|=1\end{subarray}}\bigl{|}\widehat{D}\Psi_{\varepsilon}(f)[g^{(b)}]\bigr{|},\]
and then the definition of \(\widehat{D}^{k}\Psi_{\varepsilon}\), (2.14), and Hölder's inequality imply
\[\big{|}D^{k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]\big{|} \leqslant c\,d^{k/2}\delta^{-k}\varepsilon^{-k(n+\iota)}\|\zeta\|_{\infty} \prod_{i=1}^{k}\|g^{(i)}\|_{\infty},\]
where \(c\) depends on \(k\) (through the sum over \(\mathcal{P}_{k,2}\) and the absolute moments up to order \(k\) of standard normal variables) and \(n_{\iota}\), as desired.
Assume now that \(\zeta\) is \(1\)-Lipschitz. Letting \(f,h\in\operatorname{C}(\mathcal{S}^{n};\mathbb{R}^{d})\) and recalling that \(\widehat{D}^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]=\widehat{D}^{|b|}\Psi_{\varepsilon}(h)[g^{(b)}]\), (2.13) implies
\[D^{k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]-D^{k }\zeta_{\varepsilon,\delta}(h)[g^{(1)},\ldots,g^{(k)}]\] \[=\mathbb{E}\bigg{[}\big{(}\zeta(f_{\varepsilon}+\delta S)-\zeta( h_{\varepsilon}+\delta S)\big{)}\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{b\in\pi} \widehat{D}^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{]},\]
and using that \(\zeta\) is Lipschitz and (2.10), we have
\[\big{|}\zeta(f_{\varepsilon}+\delta S)-\zeta(h_{\varepsilon}+\delta S)\big{|} \leqslant\|f_{\varepsilon}-h_{\varepsilon}\|_{\infty}\leqslant\|f-h\|_{ \infty}.\]
With this, (2.11) follows in exactly the same way as the bounded case.
To establish (2.12), we use induction. For \(k=1\), the Cameron-Martin Theorem 2.5 implies
\[\zeta_{\varepsilon,\delta}(f+g)-\zeta_{\varepsilon,\delta}(f)=\mathbb{E}\big{[} \zeta(\delta S)(e^{\Psi_{\varepsilon}(f+g)}-e^{\Psi_{\varepsilon}(f)})\big{]},\]
so by the bounded- or Lipschitz-ness of \(\zeta\) and the Cauchy-Schwarz inequality, it is enough to show that
\[\mathbb{E}\Big{[}\big{(}e^{\Psi_{\varepsilon}(f+g)-\Psi_{\varepsilon}(f)}-1-D \Psi_{\varepsilon}(f)[g]\big{)}^{2}\Big{]}=\mathrm{o}\big{(}\|g\|_{\infty}^{2} \big{)}. \tag{2.15}\]
But
\[\Psi_{\varepsilon}(f+g)-\Psi_{\varepsilon}(f)=D\Psi_{\varepsilon}(f)[g]- \frac{1}{2\delta^{2}}\langle g_{\varepsilon},g_{\varepsilon}\rangle_{H},\]
with \(D\Psi_{\varepsilon}(f)[g]\sim\mathrm{Normal}(-\delta^{-2}\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H},\delta^{-2}\langle g_{\varepsilon},g_{\varepsilon}\rangle_{H})\), and so a straightforward computation shows
\[\mathbb{E}\Big{[}\big{(}e^{\Psi_{\varepsilon}(f+g)-\Psi_{ \varepsilon}(f)}-1-D\Psi_{\varepsilon}(f)[g]\big{)}^{2}\Big{]}\] \[= e^{\delta^{-2}\langle g_{\varepsilon},g_{\varepsilon}\rangle_{H }-2\delta^{-2}\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H}}+\bigg{(}1- \frac{\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H}}{\delta^{2}}\bigg{)} ^{2}+\frac{\langle g_{\varepsilon},g_{\varepsilon}\rangle_{H}}{\delta^{2}}\] \[-2e^{-\delta^{-2}\langle f_{\varepsilon},g_{\varepsilon}\rangle_ {H}}\bigg{(}1-\frac{\langle f_{\varepsilon},g_{\varepsilon}\rangle_{H}}{ \delta^{2}}+\frac{\langle g_{\varepsilon},g_{\varepsilon}\rangle_{H}}{\delta^ {2}}\bigg{)},\]
which, using Lemma 2.8, is easily seen to be \(\mathrm{o}\big{(}\|g\|_{\infty}^{2}\big{)}\), as desired. Compare to Barbour et al. (2021, (2.12-15)).
Assuming (2.12) holds for \(k\), we want to show it holds for \(k+1\). We write
\[D^{k}\zeta_{\varepsilon,\delta}(f+g)[g^{(1)},\ldots,g^{(k)}]-D^{ k}\zeta_{\varepsilon,\delta}(f)[g^{(1)},\ldots,g^{(k)}]\] \[=\mathbb{E}\bigg{[}\zeta(\delta S)\big{(}e^{\Psi_{\varepsilon}(f +g)}-e^{\Psi_{\varepsilon}(f)}\big{)}\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{b \in\pi}D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{]} \tag{2.16}\] \[\qquad+\mathbb{E}\bigg{[}\zeta(\delta S)e^{\Psi_{\varepsilon}(f)} \sum_{\pi\in\mathcal{P}_{k,2}}\Big{(}\prod_{b\in\pi}D^{|b|}\Psi_{\varepsilon} (f+g)[g^{(b)}]-\prod_{b\in\pi}D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\Big{)} \bigg{]} \tag{2.17}\]
Because of (2.15), the term (2.16) is equal to
\[\mathbb{E}\bigg{[}\zeta(\delta S)D\Psi_{\varepsilon}(f)[g]\sum_{\pi\in \mathcal{P}_{k,2}}\prod_{b\in\pi}D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{]} +\mathrm{o}\big{(}\|g\|_{\infty}\big{)}. \tag{2.18}\]
Now working on (2.17), noting that \(D^{2}\Psi_{\varepsilon}(f+g)=D^{2}\Psi_{\varepsilon}(f)\) and \(D\Psi_{\varepsilon}(f+g)[h]=D\Psi_{\varepsilon}(f)[h]+D^{2}\Psi_{\varepsilon }(f)[h,g]\), we find
\[\sum_{\pi\in\mathcal{P}_{k,2}}\Bigl{(}\prod_{b\in\pi}D^{|b|}\Psi_{\varepsilon}(f+g)[g^{(b)}]-\prod_{b\in\pi}D^{|b|}\Psi_{\varepsilon}(f)[g^{(b)}]\Big{)}\] \[=\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{\begin{subarray}{c}b\in\pi\\ |b|=2\end{subarray}}D^{2}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{\{}\prod_{\begin{subarray}{c}b\in\pi\\ |b|=1\end{subarray}}\Big{(}D\Psi_{\varepsilon}(f)[g^{(b)}]+D^{2}\Psi_{\varepsilon}(f)[g^{(b)},g]\Big{)}-\prod_{\begin{subarray}{c}b\in\pi\\ |b|=1\end{subarray}}D\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{\}}\] \[=\sum_{\pi\in\mathcal{P}_{k,2}}\prod_{\begin{subarray}{c}b\in\pi\\ |b|=2\end{subarray}}D^{2}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{\{}\sum_{\begin{subarray}{c}b\in\pi\\ |b|=1\end{subarray}}D^{2}\Psi_{\varepsilon}(f)[g^{(b)},g]\prod_{\begin{subarray}{c}b\neq a\in\pi\\ |a|=1\end{subarray}}D\Psi_{\varepsilon}(f)[g^{(a)}]\bigg{\}}+\mathbf{o}\big{(}\|g\|_{\infty}\big{)},\]
where \(\mathbf{o}\big{(}\|g\|_{\infty}\big{)}\) is a random variable, say \(X=X(g)\) depending on \(g\), such that \(\mathbb{E}\left[|X|^{p}\right]^{1/p}=\mathrm{o}\big{(}\|g\|_{\infty}\big{)}\) for any \(p\geqslant 2\). This is because \(D\Psi_{\varepsilon}(f)[g^{(b)}]\) is Gaussian, and \(D^{2}\Psi_{\varepsilon}(f)[g^{(b)},g]=\mathrm{O}\big{(}\|g\|_{\infty}\big{)}\), by Lemma 2.8. Thus, up to a \(\mathrm{o}\big{(}\|g\|_{\infty}\big{)}\) term, (2.17) is equal to
\[\mathbb{E}\bigg{[}\zeta(\delta S)e^{\Psi_{\varepsilon}(f)}\sum_{\pi\in\mathcal{ P}_{k,2}}\prod_{\begin{subarray}{c}b\in\pi\\ |b|=2\end{subarray}}D^{2}\Psi_{\varepsilon}(f)[g^{(b)}]\bigg{\{}\sum_{ \begin{subarray}{c}b\in\pi\\ |b|=1\end{subarray}}D^{2}\Psi_{\varepsilon}(f)[g^{(b)},g]\prod_{ \begin{subarray}{c}b\neq a\in\pi\\ |a|=1\end{subarray}}D\Psi_{\varepsilon}(f)[g^{(a)}]\bigg{\}}\bigg{]}. \tag{2.19}\]
Combining (2.18) and (2.19) completes the induction.
## 3 Proof of the Wasserstein Bound
Armed with Theorem 2.9, we follow the strategy described in Section 1.1 to prove our master theorem.
Proof of Theorem 1.1.: To achieve a bound in the Wasserstein distance, let \(\zeta:\mathrm{C}(\mathcal{S}^{n};\mathbb{R}^{d})\to\mathbb{R}\) be a Lipschitz function and let \(\zeta_{\varepsilon,\delta}\) be defined as in Theorem 2.9. The triangle inequality yields
\[|\mathbb{E}\,\zeta(F)-\mathbb{E}\,\zeta(H)|\leqslant\big{|}\mathbb{E}[\zeta_{\varepsilon,\delta}(F)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(H)]\big{|}+\big{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(F)]\big{|}+\big{|}\mathbb{E}[\zeta(H)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(H)]\big{|}. \tag{3.1}\]
For the first term of (3.1), we use (2.11) of Theorem 2.9 with \(k=2\) and the definition (1.1) of \(\mathcal{F}\) to find
\[\big{|}\mathbb{E}[\zeta_{\varepsilon,\delta}(F)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(H)]\big{|}\leqslant c\,d\,\delta^{-2}\varepsilon^{-2(n+\iota)}\mathsf{d}_{\mathcal{F}}(F,H).\]
For the second term, using the definition of \(\zeta_{\varepsilon,\delta}\) and that \(\zeta\) is Lipschitz implies
\[\big{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(F)]\big{|} =\big{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta(F_{\varepsilon}+\delta S)] \big{|}\leqslant\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}+\delta\,\mathbb{E}\| S\|_{\infty},\]
where \(S:\mathcal{S}^{n}\to\mathbb{R}^{\,d}\) is the smoothing Gaussian random field defined at (2.6). Since \(S\) has independent components and, by (2.6) and (2.5), has covariance uniformly bounded in absolute value of order \(c_{n}\sum_{k\geqslant 1}k^{-1-\iota}\) for some constant \(c_{n}\) depending only on \(n\), Fernique's theorem implies
\[\mathbb{E}\|S\|_{\infty}\leqslant c_{n,\iota}\,\sqrt{d},\]
where \(c_{n,\iota}\) is a constant depending only on \(n\) and \(\iota\). Thus, we find
\[\big{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta_{\varepsilon,\delta}(F)]\big{|} \leqslant\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}+c_{n,\iota}\delta\sqrt{d}.\]
The same reasoning shows that this same inequality holds with \(H\) replacing \(F\). Substituting these bounds in (3.1) verifies that the desired bound in (1.2) holds.
## 4 Properties of Solution to the Stein Equation
Applications of Theorem 1.1 require bounds on the first three terms of the right-hand side of (1.2). In this section, we bound the first term for the case when \(H=G\), the approximating Gaussian field. We handle the second and the third terms in the following section, see Lemma 5.3 in particular.
We start with the following result, which extends the work of Barbour et al. (2023, Section 2) and provides properties of solutions to infinite-dimensional versions of Stein's equation. Specifically, Barbour et al. (2023) worked with Stein's equations for Gaussian processes indexed by an interval \([0,T]\), whereas here we work with random fields indexed by a compact measured metric space.
**Theorem 4.1** (Bounds on solutions of the Stein equation).: _For a Gaussian random field \(G\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\), define the operator \(\mathcal{A}=\mathcal{A}_{G}\) acting on \(\zeta:\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\to\mathbb{R}\) with_
\[\max_{k=1,2}\sup_{g\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})}\|D^{k}\zeta(g)\|<\infty,\]
_by_
\[\mathcal{A}\zeta(f):=\mathbb{E}\left[D^{2}\zeta(f)[G,G]\right]-D\zeta(f)[f],\]
_where \(f\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\), and \(D\) denotes Frechet derivative. Then for any such \(\zeta\), there exists an \(\eta=\eta_{\zeta}\) satisfying_
\[\mathcal{A}\eta(f)=\zeta(f)-\mathbb{E}[\zeta(G)]. \tag{4.1}\]
_Moreover, in the operator norm, for any \(k=1,2,\) or \(k\geqslant 3\) with \(\sup_{g\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})}\|D^{k}\zeta(g)\|<\infty\), we have_
\[\|D^{k}\eta(f)\|\leqslant\frac{1}{k}\sup_{g\in\mathrm{C}( \mathcal{M};\mathbb{R}^{d})}\|D^{k}\zeta(g)\|, \tag{4.2}\]
_and for any \(\zeta\in\mathcal{F}\) and all \(f,h\in C(\mathcal{M};\mathbb{R}^{d})\), we have_
\[\|D^{2}\eta(f)-D^{2}\eta(h)\|\leqslant\frac{1}{3}\|f-h\|_{\infty}. \tag{4.3}\]
**Remark 4.2**.: The operator \(\mathcal{A}\) defined in Theorem 4.1 plays the role of the left-hand side of the 'random field' version
\[\mathbb{E}\big{[}D^{2}\eta(f)[G,G]\big{]}-D\eta(f)[f]=\zeta(f)- \mathbb{E}[\zeta(G)] \tag{4.4}\]
of the finite dimensional Stein equation \(\nabla^{\top}\Sigma\nabla\eta(x)-x^{\top}\nabla\eta(x)=\zeta(x)-\mathbb{E}[ \zeta(G)]\) for a centered Gaussian \(G\) with covariance matrix \(\Sigma\). With the Stein equation (4.4) and the bounds on its solution provided by Theorem 4.1, the standard steps of Stein's method can be implemented. In particular, the integral probability metric bound to \(G\) over some given function class \(\mathcal{H}\) can be computed by bounding the absolute expectation of the right-hand side of (4.4) for \(\zeta\in\mathcal{H}\) by taking absolute expectations on the left-hand side, given in terms of the solution \(\eta\). In particular, uniformly bounding \(|\mathbb{E}\mathcal{A}\eta_{\zeta}(F)|\) for all solutions \(\eta_{\zeta},\zeta\in\mathcal{F}\) to (4.4) yields a bound on \(\mathsf{d}_{\mathcal{F}}(F,G)\), the first term on the right-hand side of (1.2). See Lemma 6.1 and its proof for the implementation.
**Remark 4.3**.: The term \(\mathbb{E}\big{[}D^{2}\zeta(f)[G,G]\big{]}\) implicitly depends on the covariance \(C\) of \(G\). As discussed in Remark 4.2, if \(G\) is finite-dimensional, then this term evaluates explicitly to \(\nabla^{\top}\Sigma\nabla\eta(x)\), where \(\Sigma\) is the covariance matrix of \(G\). When \(G\) is a random field indexed by an uncountable set, in general it is not clear how to write this term solely in terms of \(C\). In applications, the term should be rewritten in a form that matches the particular application and does not involve an expectation against \(G\). Typically, this form involves the covariance structure of \(G\) and is most easily found using some structure of the random field \(F\) to determine the first order term in a Taylor expansion of \(\mathbb{E}\big{[}D\eta(F)[F]\big{]}\). See Section 6 for further details from our application to wide random neural networks, and also Barbour et al. (2023) for applications that provide additional relevant examples.
Proof of Theorem 4.1.: The result essentially follows from the work of Barbour et al. (2023, Section 2), building off Barbour (1990) and Kasprzak et al. (2017), in the setting where the index set of the process \(f\) is the interval \([0,T]\).
Fix \(\zeta\) with two bounded derivatives. For \(f\in\mathrm{C}(\mathcal{M};\mathbb{R}^{d})\) define \(h_{f}:\mathbb{R}_{+}\to\mathbb{R}\) by \(h_{f}(t):=\mathbb{E}[\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)]\), and \(\eta=\eta_{\zeta}\) to be
\[\eta(f)=-\int_{0}^{\infty}\bigl{(}h_{f}(t)-\mathbb{E}[\zeta(G)]\bigr{)}\mathrm{ d}t.\]
The integral is well-defined since \(\zeta\) has bounded derivative, \(\|f\|_{\infty}\) is finite by continuity and compactness of \(\mathcal{M}\), and \(\|G\|_{\infty}\) has finite moments, by Gaussianity and path continuity. That \(\mathcal{A}\) can be applied to \(\eta\) (meaning it has two bounded derivatives) follows essentially from Barbour (1990), see also Kasprzak et al. (2017), since dominated convergence implies that if \(\sup_{g}\|D^{k}\zeta(g)\|<\infty\), we have the well-defined expressions
\[D^{k}\eta(f)[g_{1},\dots,g_{k}]=-\int_{0}^{\infty}e^{-kt}\,\mathbb{E}\bigl{\{}D ^{k}\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)[g_{1},\dots,g_{k}]\bigr{\}}\mathrm{d}t, \tag{4.5}\]
from which the bounds in (4.2) easily follow. For (4.3), if \(\zeta\in\mathcal{F}\), applying (4.5) with \(k=2\) together with (1.1), which ensures that elements of \(\mathcal{F}\) have a \(1\)-Lipschitz second derivative, implies
\[\big{|}D^{2}\eta(f) [g,g]-D^{2}\eta(h)[g,g]\big{|}\] \[\leqslant\int_{0}^{\infty}e^{-2t}\,\mathbb{E}\bigl{\{}\big{|}D^{ 2}\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)[g,g]-D^{2}\zeta(e^{-t}h+\sqrt{1-e^{-2t}}G)[ g,g]\big{|}\bigr{\}}\mathrm{d}t\] \[\leqslant\|g\|_{\infty}^{2}\int_{0}^{\infty}e^{-2t}\|e^{-t}f-e^{- t}h\|_{\infty}\mathrm{d}t\] \[\leqslant\frac{1}{3}\|f-h\|_{\infty}\|g\|_{\infty}^{2}.\]
We now show (4.1). We have
\[\zeta(f)-\mathbb{E}[\zeta(G)] =-\int_{0}^{\infty}h_{f}^{\prime}(t)\mathrm{d}t\] \[=\int_{0}^{\infty}e^{-t}\,\mathbb{E}\bigl{[}D\zeta(e^{-t}f+\sqrt{ 1-e^{-2t}}G)[f]\bigr{]}\mathrm{d}t\] \[\qquad-\int_{0}^{\infty}\frac{e^{-2t}}{\sqrt{1-e^{-2t}}}\, \mathbb{E}\bigl{[}D\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)[G]\bigr{]}\mathrm{d}t\] \[=-D\eta(f)[f]-\int_{0}^{\infty}\frac{e^{-2t}}{\sqrt{1-e^{-2t}}}\, \mathbb{E}\bigl{[}D\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)[G]\bigr{]}\mathrm{d}t,\]
where the third equality follows by (4.5). Comparing to (4.1), we will have shown the first claim if we can demonstrate that
\[\mathbb{E}\bigl{[}D^{2}\eta(f)[G,G]\bigr{]}=-\int_{0}^{\infty}\frac{e^{-2t}}{ \sqrt{1-e^{-2t}}}\,\mathbb{E}\bigl{[}D\zeta(e^{-t}f+\sqrt{1-e^{-2t}}G)[G]\bigr{]} \mathrm{d}t.\]
Evaluating (4.5) for \(k=2\), the left-hand side of the previous display can be expressed as
\[-\int_{0}^{\infty}e^{-2t}\,\mathbb{E}\bigl{[}D^{2}\zeta(e^{-t}f+\sqrt{1-e^{-2 t}}G)[G^{\prime},G^{\prime}]\bigr{]}\mathrm{d}t,\]
where \(G^{\prime}\) is an independent copy of \(G\). But the integrands are equal since, for fixed \(t\) and \(f\) and \(\varphi(g)\coloneqq\zeta(e^{-t}f+\sqrt{1-e^{-2t}}g)\), Barbour et al. (2023, Proof of Proposition 2.1) implies
\[\mathbb{E}\bigl{[}D\varphi(G)[G]\bigr{]}=\mathbb{E}\bigl{[}D^{2}\varphi(G)[G^{ \prime},G^{\prime}]\bigr{]},\]
where, by the chain rule,

\[D\varphi(g)[g_{1}] =\sqrt{1-e^{-2t}}D\zeta(e^{-t}f+\sqrt{1-e^{-2t}}g)[g_{1}],\] \[D^{2}\varphi(g)[g_{1},g_{2}] =(1-e^{-2t})D^{2}\zeta(e^{-t}f+\sqrt{1-e^{-2t}}g)[g_{1},g_{2}].\]
Note that here we are using the Karhunen–Loève expansion, which is the part of the argument that uses the Borel measure \(\nu\) on our metric space \((\mathcal{M},\mathsf{d})\).
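For readers who want a concrete handle on the identity \(\mathbb{E}\bigl{[}D\varphi(G)[G]\bigr{]}=\mathbb{E}\bigl{[}D^{2}\varphi(G)[G^{\prime},G^{\prime}]\bigr{]}\), note that in finite dimensions it is the classical Gaussian integration-by-parts identity \(\mathbb{E}[\langle\nabla\varphi(G),G\rangle]=\mathbb{E}[\operatorname{tr}(\Sigma\nabla^{2}\varphi(G))]\). The following Monte Carlo sanity check is our own illustration and plays no role in the proof; the test function, covariance, and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
L = np.linalg.cholesky(A @ A.T + np.eye(d))  # factor of a generic covariance Sigma

def phi(x):
    # an arbitrary smooth test function R^d -> R
    return np.sin(x[0]) * np.cos(x[1]) + np.tanh(x[2]) * x[0]

def dir_grad(x, v, h=1e-4):
    # first directional derivative D phi(x)[v] via central differences
    return (phi(x + h * v) - phi(x - h * v)) / (2 * h)

def dir_hess(x, v, h=1e-3):
    # second directional derivative D^2 phi(x)[v, v] via central differences
    return (phi(x + h * v) - 2 * phi(x) + phi(x - h * v)) / h ** 2

n = 100_000
G = rng.standard_normal((n, d)) @ L.T    # samples of G  ~ N(0, Sigma)
Gp = rng.standard_normal((n, d)) @ L.T   # independent copy G'

lhs = np.mean([dir_grad(x, x) for x in G])              # E[D phi(G)[G]]
rhs = np.mean([dir_hess(x, v) for x, v in zip(G, Gp)])  # E[D^2 phi(G)[G', G']]
print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```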
There have been a number of recent works developing Stein's method for processes, predominantly in the context of distributional approximation by interval-indexed Gaussian processes, and especially Brownian motion; though see Gan and Ross (2021) for an exception. Building from the seminal work of Barbour (1990), Shih (2011) develops Stein's method in the very general setting of a Gaussian measure on a separable Banach space. However, the bounds there are too abstract to be evaluated explicitly in practice. Closely following Shih (2011), the works Coutin and Decreusefond (2013, 2020) and Bourguin and Campese (2020) provide more concrete results in the less general setting of a Gaussian measure on a Hilbert space. However, the associated probability metrics are with respect to the Hilbert space topology, e.g., \(L^{2}\) and Sobolev, which are quite weak and do not see fundamental natural statistics such as finite dimensional distributions and extrema. The works of Kasprzak (2020a, 2020b) and Döbler and Kasprzak (2021), based on Barbour (1990), are more closely related to our work, but obtain bounds only in smooth function metrics like \(\mathsf{d}_{\mathcal{F}}\). We refer to Barbour et al. (2021, Section 1.1) for additional details and comparisons.
## 5 Chaining arguments for modulus of continuity
We now present results for bounding the second and third terms in (1.2) that arise from the smoothing process. We start with a proposition that is useful for obtaining probabilistic bounds on the modulus of continuity of an \(\mathbb{R}^{d}\)-valued random field on a compact metric space \((\mathcal{M},\mathsf{d})\).
**Definition 5.1** (Modulus of Continuity).: The modulus of continuity of a function \(J:\mathcal{M}\to\mathbb{R}^{d}\) at level \(\theta>0\) is defined as \(\omega_{J}(\theta)\coloneqq\sup\bigl{\{}\|J(x)-J(y)\|_{2}:x,y\in\mathcal{M}, \mathsf{d}(x,y)<\theta\bigr{\}}\).
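For intuition, the modulus of continuity of a finely sampled path can be estimated directly from this definition. The following is our own small illustration with \(\mathcal{M}=[0,1]\), \(d=1\), and a Brownian-like path, for which \(\omega_{J}(\theta)\) scales roughly like \(\sqrt{\theta}\) (up to logarithmic factors).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000
t = np.linspace(0.0, 1.0, N)                        # index set M = [0, 1], d(x, y) = |x - y|
J = np.cumsum(rng.standard_normal(N)) / np.sqrt(N)  # discretized Brownian-like path

def modulus(J, t, theta):
    """sup |J(x) - J(y)| over sampled pairs x, y with |x - y| < theta."""
    best = 0.0
    for i in range(len(t)):
        jmax = np.searchsorted(t, t[i] + theta)  # first j with t[j] - t[i] >= theta
        if jmax > i + 1:
            best = max(best, np.abs(J[i + 1:jmax] - J[i]).max())
    return best

for theta in (0.001, 0.01, 0.1):
    print(theta, modulus(J, t, theta))
```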
While the proofs below leverage standard chaining arguments, existing results seem not to provide the form of the results we require, as those mainly focus on expectation bounds and the case of \(d=1\).
Define the _covering number_\(\mathcal{N}(\mathcal{M},\mathsf{d},\varepsilon)\) (or just \(\mathcal{N}(\varepsilon)\) when \((\mathcal{M},\mathsf{d})\) is clear from context) of \((\mathcal{M},\mathsf{d})\) at level \(\varepsilon>0\) as the smallest cardinality over finite collections of points \(U\subseteq\mathcal{M}\) so that every point of \(\mathcal{M}\) is within \(\varepsilon\) of some point of \(U\) (i.e., \(U\) is an \(\varepsilon\)-net).
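The polynomial growth (5.2) can be observed numerically with a greedy net construction. The sketch below, our own illustration, approximates \(\mathcal{N}(\mathcal{S}^{2},\mathsf{d},\varepsilon)\) from a dense point cloud (so \(\alpha=n=2\), and the net size should grow roughly fourfold when \(\varepsilon\) is halved); the sample size and tolerances are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# dense sample of the unit sphere S^2 in R^3
X = rng.standard_normal((20_000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)

def greedy_net_size(points, eps):
    """Greedily pick centers until every sampled point lies within geodesic
    distance eps of some center; the count approximates N(S^2, d, eps)."""
    covered = np.zeros(len(points), dtype=bool)
    count = 0
    while not covered.all():
        i = np.flatnonzero(~covered)[0]
        count += 1
        geo = np.arccos(np.clip(points @ points[i], -1.0, 1.0))  # geodesic distances
        covered |= geo <= eps
    return count

for eps in (0.4, 0.2, 0.1):
    print(eps, greedy_net_size(X, eps))
```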
**Proposition 5.2**.: _Let \((\mathcal{M},\mathsf{d})\) be a compact metric space and let \((H(x))_{x\in\mathcal{M}}\) be an \(\mathbb{R}^{d}\)-valued random field with continuous paths and write \(H=(H_{1},\ldots,H_{d})\). Suppose there exist positive constants \(c_{0},\alpha,\beta,\gamma\), and \(c_{1}\) such that for any \(x,y\in\mathcal{M}\) and \(i=1,\ldots,d\), we have_
\[\mathbb{P}\left(|H_{i}(x)-H_{i}(y)|\geqslant\lambda\right)\leqslant c_{0} \,\frac{\mathsf{d}(x,y)^{\beta}}{\lambda^{\gamma}}\quad\text{ for all }\lambda>0, \tag{5.1}\]
_and for every \(\varepsilon>0\) the covering numbers satisfy_
\[\mathcal{N}(\varepsilon)\leqslant c_{1}\varepsilon^{-\alpha}. \tag{5.2}\]
_Then, if \(\alpha<\beta/2\) there is a constant \(c\) depending only on \(\operatorname{diam}(\mathcal{M})\), \(\alpha,\beta\), \(\gamma,c_{0},c_{1}\) such that for all \(i=1,\ldots,d\) and \(\theta>0\),_
\[\mathbb{P}\left(\omega_{H_{i}}(\theta)>\lambda\right)\leqslant c\,\frac{ \theta^{\beta-2\alpha}}{\lambda^{\gamma}}, \tag{5.3}\]
_and for any \(0<k<\gamma\),_
\[\mathbb{E}\big{[}\omega_{H}(\theta)^{k}\big{]}\leqslant c\,d^{k/2}\,\theta^{k( \beta-2\alpha)/\gamma}. \tag{5.4}\]
Proof.: Following Pollard (1984, Chapter VII, Section 2, 9 Chaining Lemma), we can construct a nested sequence of subsets \(\mathcal{M}_{0}\subseteq\mathcal{M}_{1}\subseteq\mathcal{M}_{2}\subseteq\cdots \subseteq\mathcal{M}\) such that every \(t\in\mathcal{M}\) is within \(\operatorname{diam}(\mathcal{M})2^{-i}\) of a point of \(\mathcal{M}_{i}\), and
\[|\mathcal{M}_{i}|\leqslant\mathcal{N}\big{(}\operatorname{diam}(\mathcal{M}) 2^{-(i+1)}\big{)}\leqslant c_{1}\frac{2^{\alpha(i+1)}}{\operatorname{diam}( \mathcal{M})^{\alpha}}. \tag{5.5}\]
For each \(x\in\mathcal{M}\), there is a sequence \((x_{i})_{i\geqslant 1}\) with \(x_{i}\in\mathcal{M}_{i}\) and \(\lim_{i\to\infty}x_{i}=x\); in particular \(\mathcal{M}^{*}:=\bigcup_{i}\mathcal{M}_{i}\) is dense in \(\mathcal{M}\).
We first show that for all \(\theta>0\),
\[\mathbb{P}\left(\omega_{H}(\theta)>\lambda\right)=\mathbb{P}\left(\omega_{H}^{*}(\theta)>\lambda\right),\]
where \(\omega_{H}^{*}(\theta)=\sup\bigl{\{}|H(x)-H(y)|:x,y\in\mathcal{M}^{*},\mathsf{d}(x,y)<\theta\bigr{\}}\) is the modulus of continuity of \(H:\mathcal{M}^{*}\to\mathbb{R}\) at level \(\theta\), that is, only considering points in \(\mathcal{M}^{*}\). The equality holds as the events are equal. Clearly \(\{\omega_{H}^{*}(\theta)>\lambda\}\subseteq\{\omega_{H}(\theta)>\lambda\}\) since \(\mathcal{M}^{*}\subseteq\mathcal{M}\). For the other direction, if there are \(x,y\in\mathcal{M}\) with \(\mathsf{d}(x,y)<\theta\) and \(|H(x)-H(y)|>\lambda\), then letting \(x_{i},y_{i}\in\mathcal{M}_{i}\) be such that \(x_{i}\to x\) and \(y_{i}\to y\), we have \(\mathsf{d}(x_{i},y_{i})\to\mathsf{d}(x,y)<\theta\), and continuity of \(H\) implies that \(|H(x_{i})-H(y_{i})|\to|H(x)-H(y)|>\lambda\), and so there must be some \(i\) with \(\mathsf{d}(x_{i},y_{i})<\theta\) and \(|H(x_{i})-H(y_{i})|>\lambda\).
To bound \(\omega_{H}^{*}(\theta)\), let \(\theta>0\) be fixed and let \(x,y\) be arbitrary points in \(\mathcal{M}^{*}\) satisfying \(\mathsf{d}(x,y)<\theta\). Since the \(\mathcal{M}_{i}\) are nested, there exists \(n\) such that \(x,y\in\mathcal{M}_{n+1}\). Further, there are sequences \((x_{i})_{i=0}^{n},(y_{i})_{i=0}^{n}\) such that \(x_{i},y_{i}\in\mathcal{M}_{i}\), \(\mathsf{d}(x_{i},x)\vee\mathsf{d}(y_{i},y)\leqslant\operatorname{diam}(\mathcal{M})2^{-i}\), and \(\mathsf{d}(x_{i},x_{i+1})\vee\mathsf{d}(y_{i},y_{i+1})\leqslant\operatorname{diam}(\mathcal{M})2^{-i+1}\), letting \(x_{n+1}:=x\) and \(y_{n+1}:=y\). Indeed, take \(x_{i}\) to be a nearest point of \(\mathcal{M}_{i}\) to \(x\); since \(\mathcal{M}_{i}\) is a \(\operatorname{diam}(\mathcal{M})2^{-i}\)-net, \(\mathsf{d}(x_{i},x)\leqslant\operatorname{diam}(\mathcal{M})2^{-i}\), and hence \(\mathsf{d}(x_{i},x_{i+1})\leqslant\mathsf{d}(x_{i},x)+\mathsf{d}(x,x_{i+1})\leqslant\operatorname{diam}(\mathcal{M})(2^{-i}+2^{-i-1})\leqslant\operatorname{diam}(\mathcal{M})2^{-i+1}\).
Denote the maximum change in \(H\) over points in \(\mathcal{M}_{i}\) that are within \(\operatorname{diam}(\mathcal{M})\rho\) of each other by
\[D_{i}(\rho):=\sup\bigl{\{}|H(u)-H(v)|:u,v\in\mathcal{M}_{i},\,\mathsf{d}(u,v) \leqslant\operatorname{diam}(\mathcal{M})\rho\bigr{\}}.\]
Set \(m=\lfloor-\log_{2}(\theta/\operatorname{diam}\mathcal{M})\rfloor\), implying in particular that \(\theta\leqslant\operatorname{diam}(\mathcal{M})2^{-m}\). The triangle inequality implies that
\[|H(x)-H(y)| \leqslant|H(x_{m})-H(y_{m})|+\sum_{i=m}^{n}\Bigl{(}|H(x_{i+1})-H (x_{i})|+|H(y_{i+1})-H(y_{i})|\Bigr{)}\] \[\leqslant D_{m}\bigl{(}2^{-m+2}\bigr{)}+2\sum_{i=m}^{n}D_{i+1} \bigl{(}2^{-(i+1)+2}\bigr{)}\] \[\leqslant D_{m}\bigl{(}2^{-m+2}\bigr{)}+2\sum_{i=m}^{\infty}D_{i} \bigl{(}2^{-i+2}\bigr{)}, \tag{5.6}\]
where the second inequality uses that \(x_{i},x_{i+1}\in\mathcal{M}_{i+1}\) with \(\mathsf{d}(x_{i},x_{i+1})\leqslant\operatorname{diam}(\mathcal{M})2^{-(i+1)+2}\) (and similarly for the \(y\) chain), while the \(2^{-m+2}\) in the \(D_{m}\) term follows from the triangle inequality
\[\mathsf{d}(x_{m},y_{m})\leqslant\mathsf{d}(x_{m},x)+\mathsf{d}(x,y)+\mathsf{d}(y,y_{m})<\operatorname{diam}(\mathcal{M})2^{-m}+\theta+\operatorname{diam}(\mathcal{M})2^{-m}\leqslant 3\operatorname{diam}(\mathcal{M})2^{-m}\leqslant\operatorname{diam}(\mathcal{M})2^{-m+2},\]

using \(\theta\leqslant\operatorname{diam}(\mathcal{M})2^{-m}\).
In view of (5.6), set \(\lambda_{i}=(1-a)a^{i-m}(\lambda/3)\) for \(i\geqslant m\), for some \(a\in(0,1)\) to be chosen later; applying the union bound, we have
\[\mathbb{P}\left(\omega_{H}^{*}(\theta)>\lambda\right)\leqslant\mathbb{P}\left(D_{m}\big{(}2^{-m+2}\big{)}>\lambda/3\right)+\sum_{i=m}^{\infty}\mathbb{P}\left(D_{i}\big{(}2^{-i+2}\big{)}>\lambda_{i}\right). \tag{5.7}\]
Now, again using a union bound, (5.1) and (5.5) yield that
\[\mathbb{P}\left(D_{i}(\rho)>\lambda\right) \leqslant\sum_{\begin{subarray}{c}u,v\in\mathcal{M}_{i}\\ \mathsf{d}(u,v)\leqslant\mathrm{diam}(\mathcal{M})\rho\end{subarray}}\mathbb{ P}\left(|H(u)-H(v)|>\lambda\right)\] \[\leqslant c_{0}|\mathcal{M}_{i}|^{2}\frac{\mathrm{diam}( \mathcal{M})^{\beta}\rho^{\beta}}{\lambda^{\gamma}}\] \[\leqslant c_{0}\,c_{1}^{2}\,2^{2\alpha(i+1)}\frac{\mathrm{diam} (\mathcal{M})^{\beta-2\alpha}\rho^{\beta}}{\lambda^{\gamma}},\]
Applying this with \(\rho=2^{-i+2}\) and \(\lambda=\lambda_{i}\), and using that the first term in (5.7) can be bounded by a constant depending only on \(\alpha,\beta,\gamma\) and \(a\) times the \(i=m\) summand, we find that, for a constant \(c^{\prime}\) depending on \(\mathrm{diam}(\mathcal{M})\), \(\alpha,\beta\), \(\gamma,c_{0},c_{1}\),
\[\mathbb{P}\left(\omega_{H}^{*}(\theta)>\lambda\right)\leqslant c^{\prime}(1-a)^{-\gamma}\lambda^{-\gamma}\sum_{i=m}^{\infty}a^{-\gamma(i-m)}2^{(2\alpha-\beta)i}.\]
As \(2\alpha-\beta<0\), it is possible to choose \(a\in(0,1)\) such that \(r:=2^{2\alpha-\beta}/a^{\gamma}<1\). So doing, we obtain
\[\sum_{i=m}^{\infty}a^{-\gamma(i-m)}2^{(2\alpha-\beta)i}=2^{(2\alpha-\beta)m}\sum_{j=0}^{\infty}r^{j}=(1-r)^{-1}2^{(2\alpha-\beta)m}.\]
Recalling that \(m=\lfloor-\log_{2}(\theta/\mathrm{diam}(\mathcal{M}))\rfloor\), so that \(2^{-m}\leqslant 2\theta/\mathrm{diam}(\mathcal{M})\), we hence observe that there is a constant \(c\) depending on \(\mathrm{diam}(\mathcal{M})\), \(\alpha,\beta\), \(\gamma,c_{0},c_{1}\), such that
\[\mathbb{P}\left(\omega_{H}^{*}(\theta)>\lambda\right)\leqslant c\,\frac{ \theta^{\beta-2\alpha}}{\lambda^{\gamma}}.\]
Now, we proceed to prove (5.4), starting with the case \(d=1\). Letting \(\tilde{c}\) be a constant that may vary from line to line, but will at most only depend on \(\mathrm{diam}(\mathcal{M}),\alpha,\beta,\gamma,c_{0},c_{1}\), the result easily follows from (5.3), since under our hypothesis that \(0<k<\gamma\) we have
\[\begin{split}\mathbb{E}\left[\omega_{H}(\theta)^{k}\right]& =\int_{0}^{\infty}\mathbb{P}\left(\omega_{H}(\theta)>\lambda^{1/k} \right)\mathrm{Leb}(\mathrm{d}\lambda)\\ &=\int_{0}^{\theta^{k(\beta-2\alpha)/\gamma}}\mathbb{P}\left( \omega_{H}(\theta)>\lambda^{1/k}\right)\mathrm{Leb}(\mathrm{d}\lambda)+\int_{ \theta^{k(\beta-2\alpha)/\gamma}}^{\infty}\mathbb{P}\left(\omega_{H}(\theta)> \lambda^{1/k}\right)\mathrm{Leb}(\mathrm{d}\lambda)\\ &\leqslant\theta^{k(\beta-2\alpha)/\gamma}+\tilde{c}\int_{ \theta^{k(\beta-2\alpha)/\gamma}}^{\infty}\frac{\theta^{\beta-2\alpha}}{\lambda ^{\gamma/k}}\mathrm{Leb}(\mathrm{d}\lambda)\\ &\leqslant\tilde{c}\,\theta^{k(\beta-2\alpha)/\gamma},\end{split} \tag{5.8}\]
as desired.
Now, for general \(d\geqslant 1\), it is clear from the definition of the modulus of continuity that
\[\omega_{H}^{2}(\theta)\leqslant\sum_{i=1}^{d}\omega_{H_{i}}^{2}(\theta). \tag{5.9}\]
Raising both sides of (5.9) to any positive power \(k\geqslant 1\), and using that \((\sum_{i}a_{i})^{k}\leqslant d^{k-1}\sum_{i}a_{i}^{k}\) for non-negative \(a_{i}\), we have
\[\omega_{H}^{2k}(\theta)\leqslant\left(\sum_{i=1}^{d}\omega_{H_{i}}^{2}(\theta )\right)^{k}\leqslant d^{k-1}\sum_{i=1}^{d}\omega_{H_{i}}^{2k}(\theta).\]
Taking expectation on both sides and applying (5.8), we have
\[\left(\mathbb{E}[\omega_{H}^{k}(\theta)]\right)^{2}\leqslant\mathbb{E}[ \omega_{H}^{2k}(\theta)]\leqslant d^{k-1}\sum_{i=1}^{d}\mathbb{E}[\omega_{H_{i }}^{2k}(\theta)]\leqslant\tilde{c}d^{k}\,\theta^{2k(\beta-2\alpha)/\gamma},\]
and taking square roots yields the desired inequality.
**Lemma 5.3**.: _Let \(\mathcal{M}=\mathcal{S}^{n}\subset\mathds{R}^{n+1}\) for some \(n\geqslant 2\), with natural geodesic metric \(\mathsf{d}\), and \(H=(H_{1},\ldots,H_{d}):\mathcal{S}^{n}\to\mathbb{R}^{d}\) be a random field with continuous paths, and \(H_{\varepsilon}\) be the \(\varepsilon\)-regularization of \(H\) defined at (2.8) for a fixed \(0<\varepsilon<1\). If for all \(i=1,\ldots,d\), for all \(x,y\in\mathcal{S}^{n}\), some constant \(\hat{c}\), and some \(p>n\) we have_
\[\mathbb{E}\big{[}\big{(}H_{i}(x)-H_{i}(y)\big{)}^{2p}\big{]}\leqslant\hat{c} \,\mathsf{d}(x,y)^{2p}, \tag{5.10}\]
_then there is a constant \(c\) depending only on \(\hat{c},n\), and \(p\), such that_
\[\mathbb{E}\,\|H-H_{\varepsilon}\|_{\infty}\leqslant c\,\sqrt{d}\,\varepsilon^ {\frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/\varepsilon)}.\]
Proof.: Using the alternative expression for \(H_{\varepsilon}\) given at (2.10) in Proposition 2.6, for any given \(\theta>0\) we immediately have
\[H(x)-H_{\varepsilon}(x)=\int_{y:d(x,y)\leqslant\theta}p(x,y;\varepsilon) \big{(}H(x)-H(y)\big{)}\mathrm{d}y+\int_{y:d(x,y)>\theta}p(x,y;\varepsilon) \big{(}H(x)-H(y)\big{)}\mathrm{d}y,\]
where \(\mathrm{d}y\) is the volume element on the sphere. It is easy to see that \(\omega_{H}(\theta)\) is finite because \(H\) is continuous and the sphere is compact. Hence, we can further bound
\[\big{|}H(x)-H_{\varepsilon}(x)\big{|} \leqslant\omega_{H}(\theta)+\sup_{u,v\in\mathcal{S}^{n}}\!\big{\|} H(u)-H(v)\big{\|}_{2}\int_{y:d(x,y)>\theta}p(x,y;\varepsilon)\mathrm{d}y\] \[=\omega_{H}(\theta)+\omega_{H}(\pi)\int_{y:d(x,y)>\theta}p(x,y; \varepsilon)\mathrm{d}y.\]
The heat kernel bounds of Nowak et al. (2019, Theorem 1) imply
\[\int_{y:d(x,y)>\theta}p(x,y;\varepsilon)\mathrm{d}y\leqslant c_{n}\,e^{- \theta^{2}/(5\varepsilon)},\]
where \(c_{n}\) is a constant depending only on \(n\). Hence, we have that
\[\mathbb{E}\,\|H-H_{\varepsilon}\|_{\infty}\leqslant\mathbb{E}\,\big{[}\omega_ {H}(\theta)\big{]}+c_{n}\,\mathbb{E}\,\big{[}\omega_{H}(\pi)\big{]}e^{-\theta^ {2}/(5\varepsilon)}. \tag{5.11}\]
To bound \(\mathbb{E}[\omega_{H}(\theta)]\) we apply Proposition 5.2, and use Markov's inequality to find that
\[\mathbb{P}\left(\left|H_{i}(x)-H_{i}(y)\right|\geqslant\lambda\right)\leqslant \frac{\mathbb{E}\big{[}(H_{i}(x)-H_{i}(y))^{2p}\big{]}}{\lambda^{2p}}.\]
Therefore (5.1) is satisfied with \(\beta=\gamma=2p\) and \(c_{0}=\hat{c}\), due to our assumption (5.10). To bound the covering numbers, standard volume arguments (see, for example, Vershynin (2018, Corollary 4.2.13)) imply that for all \(\varepsilon\in(0,1)\), we have
\[\mathcal{N}(\mathcal{S}^{n},\mathsf{d},\varepsilon)\leqslant c_{n}\, \varepsilon^{-n},\]
where \(c_{n}\) is a constant depending only on \(n\), thus (5.2) is satisfied with \(\alpha=n\). Applying (5.4) of Proposition 5.2 we find that there exists a constant \(c\), whose value from line to line may change, but depends only on \(\hat{c},n\), and \(p\), such that
\[\mathbb{E}\big{[}\omega_{H}(\theta)\big{]}\leqslant c\,\sqrt{d}\,\theta^{1- \frac{n}{p}}.\]
Substituting this inequality in (5.11) and setting \(\theta=\sqrt{l\varepsilon\log(1/\varepsilon)}\) and \(l\geqslant 5(1-\frac{n}{p})/2\), we conclude that
\[\mathbb{E}\big{\|}H-H_{\varepsilon}\|_{\infty}\leqslant c\,\sqrt{d}\, \varepsilon^{\frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/\varepsilon)}.\]
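The mechanism behind Lemma 5.3 is easiest to see in the circle case, where the heat-kernel regularization acts diagonally on Fourier modes (the damping \(e^{-m^{2}\varepsilon}\) of the \(m\)-th mode playing the role of \(p(x,y;\varepsilon)\)). The following sketch is our own one-dimensional analogue with an arbitrary random Fourier field; it is not the regularization (2.8) itself.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# a random field on the circle: random Fourier series with decaying coefficients
K = 64
k = np.arange(1, K + 1)
a = rng.standard_normal(K) / k ** 1.5
b = rng.standard_normal(K) / k ** 1.5
H = (a[:, None] * np.cos(k[:, None] * theta)
     + b[:, None] * np.sin(k[:, None] * theta)).sum(axis=0)

def heat_smooth(H, eps):
    """heat-kernel regularization on the circle: damp mode m by exp(-m^2 eps)"""
    Hhat = np.fft.rfft(H)
    m = np.arange(len(Hhat))
    return np.fft.irfft(Hhat * np.exp(-m ** 2 * eps), n=len(H))

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, np.abs(H - heat_smooth(H, eps)).max())  # sup-error -> 0 as eps -> 0
```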
## 6 Proofs for wide random neural network approximations
We now apply the general results developed in the previous sections to prove Theorems 1.2 and 1.4 on the smooth and Wasserstein distance bounds for wide random neural networks. We follow the strategy based on induction as previously described in Section 1.2. We first present the following result, obtained by applying Theorem 4.1 at a given, single layer of the network. One key element driving the result is the use of the classical Stein 'leave-one-out' approach; see (6.5).
**Lemma 6.1**.: _Let \(H:\mathcal{M}\to\mathbb{R}^{m}\) be a random field with continuous and i.i.d. coordinate processes \(H_{1},\dots,H_{m}\), and let \(W:\mathbb{R}^{m}\to\mathbb{R}^{n}\) be an \(n\times m\) random matrix that is independent of \(H\) and has centered independent entries having the same variance \(\operatorname{Var}(W_{ij})=:c_{w}/m\), also satisfying \(\mathbb{E}[W_{ij}^{4}]\leqslant B(c_{w}/m)^{2}\), and let \(\sigma:\mathbb{R}\to\mathbb{R}\) be given. Define \(F:\mathcal{M}\to\mathbb{R}^{n}\) by_
\[F(x)=W\sigma\big{(}H(x)\big{)},\]
_and assume \(F\in L^{2}(\mathcal{M};\mathbb{R}^{n})\). Let \(G\in\operatorname{C}(\mathcal{M};\mathbb{R}^{n})\) be a centered Gaussian random field with covariance function_
\[C_{ij}(x,y):=\mathbb{E}\big{[}F_{i}(x)F_{j}(y)\big{]}=\delta_{ij}c_{w}\, \mathbb{E}\big{[}\sigma\big{(}H_{1}(x)\big{)}\sigma\big{(}H_{1}(y)\big{)} \big{]}.\]
_Then for any \(\zeta\in\mathcal{F}\), we have_
\[\big{|}\mathbb{E}[\zeta(F)]-\mathbb{E}[\zeta(G)]\big{|}\leqslant c_{w}^{3/2}B ^{3/4}\,\mathbb{E}\big{[}\|\sigma(H_{1})\|_{\infty}^{3}\big{]}\frac{n^{3/2}}{ \sqrt{m}}. \tag{6.1}\]
Proof.: We apply Theorem 4.1 with the Gaussian random field \(G\) and \(d=n\). In particular, we obtain the bound (6.1) by substituting \(F\) for \(f\) in (4.4) and bounding the expectation of its right-hand side. Our first step is to derive a more useful representation for the second order term. We claim
\[\mathbb{E}\big{[}D^{2}\eta(f)[G,G]\big{]}=\mathbb{E}\big{[}D^{2}\eta(f)[W\sigma (H),W\sigma(H)]\big{]}. \tag{6.2}\]
More generally, if the covariance of a centered Gaussian random field \(G\in\mathrm{C}(\mathcal{M};\mathds{R}^{d})\) satisfies \(C_{ij}(x,y)=\delta_{ij}\operatorname{\mathbb{E}}[G_{i}(x)G_{i}(y)]= \operatorname{\mathbb{E}}[R_{i}(x)R_{j}(y)]\) for some centered \(L^{2}(\mathcal{M};\mathds{R}^{d})\) random field \(R\) (not necessarily assumed Gaussian), then for any bilinear form \(A\) with \(\operatorname{\mathbb{E}}\bigl{[}A[G,G]\bigr{]}<\infty\), we have \(\operatorname{\mathbb{E}}\bigl{[}A[G,G]\bigr{]}=\operatorname{\mathbb{E}} \bigl{[}A[R,R]\bigr{]}\).
Equality (6.2) is a consequence of the Karhunen–Loève expansion (see, e.g., Adler and Taylor (2007, Chapter 3)), which states that there is an orthonormal basis \((\varphi_{k})_{k\geqslant 1}\) of \(L^{2}(\mathcal{M};\mathds{R})\) and independent one-dimensional centered Gaussian random variables \((X_{ki})_{k\geqslant 1,1\leqslant i\leqslant d}\) with \(\operatorname{Var}(X_{ki})=\lambda_{ki}>0\) such that \(G_{i}=\sum_{k\geqslant 1}X_{ki}\varphi_{k}\), with convergence in \(L^{2}\). Since \(R\) is also \(L^{2}\), we can expand \(R_{i}=\sum_{k\geqslant 1}Y_{ki}\varphi_{k}\) with \(Y_{ki}=\int R_{i}(x)\varphi_{k}(x)\mathrm{d}x\), where \(\mathrm{d}x\) is the volume measure associated to \(\mathcal{M}\), again with convergence in \(L^{2}\). Now, by linearity and the fact that \(\operatorname{Cov}(X_{ki},X_{\ell j})=\delta_{ij}\delta_{k\ell}\lambda_{ki}\), we find
\[\operatorname{\mathbb{E}}\bigl{[}A[G,G]\bigr{]}=\sum_{i=1}^{d}\sum_{k\geqslant 1 }\lambda_{ki}A[\mathbf{e}_{i}\varphi_{k},\mathbf{e}_{i}\varphi_{k}],\]
where \(\mathbf{e}_{i}\) is the \(d\)-dimensional vector with a one in the \(i^{\text{th}}\) position, and zero elsewhere.
To show that we obtain the same quantity with \(R\) replacing \(G\), it is enough to show \(\operatorname{Cov}(Y_{ki},Y_{\ell j})=\delta_{ij}\delta_{k\ell}\lambda_{ki}\). We use Mercer's theorem, which says that
\[\operatorname{\mathbb{E}}[R_{i}(x)R_{j}(y)]=C_{ij}(x,y)=\delta_{ij}\sum_{m \geqslant 1}\lambda_{mi}\varphi_{m}(x)\varphi_{m}(y),\]
where the convergence in the sum is uniform, and we obtain
\[\operatorname{\mathbb{E}}[Y_{ki}Y_{\ell j}] =\operatorname{\mathbb{E}}\biggl{[}\iint R_{i}(x)R_{j}(y)\varphi _{k}(x)\varphi_{\ell}(y)\,\mathrm{d}x\,\mathrm{d}y\biggr{]}\] \[=\iint C_{ij}(x,y)\varphi_{k}(x)\varphi_{\ell}(y)\,\mathrm{d}x\, \mathrm{d}y=\delta_{ij}\sum_{m\geqslant 1}\lambda_{mi}\iint\varphi_{m}(x) \varphi_{m}(y)\varphi_{k}(x)\varphi_{\ell}(y)\,\mathrm{d}x\,\mathrm{d}y\] \[=\delta_{ij}\sum_{m\geqslant 1}\lambda_{mi}\delta_{mk}\delta_{m\ell}= \delta_{ij}\delta_{k\ell}\lambda_{ki},\]
as \(\varphi_{k},k\geqslant 1\) are orthonormal, thus proving claim (6.2).
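The diagonalization just established is easy to confirm numerically on a grid: expanding a non-Gaussian field with covariance \(C\) in the eigenbasis of \(C\) yields uncorrelated coefficients whose variances are the eigenvalues. The following discrete check is our own illustration, with \(C(s,t)=\min(s,t)\) and a compensated-exponential field standing in for \(R\).

```python
import numpy as np

rng = np.random.default_rng(4)
M, n = 50, 100_000
t = (np.arange(M) + 1) / M
dt = 1.0 / M

# centered non-Gaussian field with covariance C(s, t) = min(s, t):
# partial sums of independent, centered exponential increments of variance dt
incr = rng.exponential(scale=np.sqrt(dt), size=(n, M)) - np.sqrt(dt)
R = np.cumsum(incr, axis=1)

C = np.minimum.outer(t, t)                    # covariance matrix on the grid
lam, V = np.linalg.eigh(C)                    # discrete Karhunen-Loeve basis

Y = R @ V                                     # coefficients Y_k = <R, phi_k>
S = Y.T @ Y / n                               # sample covariance of the coefficients

print(np.round(np.diag(S)[-4:], 4))           # variances of the top coefficients...
print(np.round(lam[-4:], 4))                  # ...match the top eigenvalues of C
print(np.abs(S - np.diag(np.diag(S))).max())  # off-diagonal entries are near zero
```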
Let the pair \((\widehat{W},\widehat{H})\) be an independent copy of \((W,H)\). Clearly, the right-hand side of (6.2) is the same for both pairs and hence
\[|\operatorname{\mathbb{E}}\zeta(F)-\operatorname{\mathbb{E}}\zeta(G)|=| \operatorname{\mathbb{E}}\bigl{[}D^{2}\eta(W\sigma(H))[\widehat{W}\sigma( \widehat{H}),\widehat{W}\sigma(\widehat{H})]-D\eta(W\sigma(H))[W\sigma(H)] \bigr{]}|, \tag{6.3}\]
via (4.4) and independence. Hence, bounding the right-hand side of (6.3) yields a bound on the left-hand side of (4.4).
We first write
\[\widehat{W}\sigma(\widehat{H})=\sum_{j=1}^{m}\widehat{V}_{j}\quad\text{where we set}\quad\widehat{V}_{j}\coloneqq\sum_{i=1}^{n}\widehat{W}_{ij}\sigma(\widehat{H}_{j}) \mathbf{e}_{i},\]
and adopt parallel notation to define \(V_{j}\). Because \(\widehat{W}_{ij}\) are independent of each other and of \(W\), centered, and assumed to have common variance \(c_{w}/m\), for the first term in (6.3) we have
\[\operatorname{\mathbb{E}}\bigl{[}D^{2}\eta(W\sigma(H))[\widehat{W}\sigma( \widehat{H}),\widehat{W}\sigma(\widehat{H})]\bigr{]}=\sum_{j=1}^{m} \operatorname{\mathbb{E}}\biggl{\{}D^{2}\eta(W\sigma(H))[\widehat{V}_{j}, \widehat{V}_{j}]\biggr{\}}. \tag{6.4}\]
Working now on the second term of (6.3), define
\[\big{(}W\sigma(H)\big{)}^{j}:=W\sigma(H)-V_{j}\quad\text{where}\quad V_{j}=\sum_{i =1}^{n}W_{ij}\sigma(H_{j})\mathbf{e}_{i}, \tag{6.5}\]
which is independent of \((W_{ij})_{i=1}^{n}\) and \(H_{j}\). Using that independence to subtract a term with expectation zero in the second line below, followed by an application of a Taylor-type argument, we have
\[\mathbb{E}\big{[}D\eta(W\sigma(H))[W\sigma(H)]\big{]}\] \[=\sum_{j=1}^{m}\mathbb{E}\bigg{\{}D\eta(W\sigma(H))[V_{j}]-D\eta \big{(}(W\sigma(H))^{j}\big{)}[V_{j}]\bigg{\}}\] \[=\sum_{j=1}^{m}\mathbb{E}\bigg{\{}D^{2}\eta\big{(}(W\sigma(H))^{j }\big{)}[V_{j},V_{j}]\bigg{\}}\] \[\quad+\sum_{j=1}^{m}\int_{0}^{1}\mathbb{E}\bigg{\{}D^{2}\eta\big{(} sW\sigma(H)+(1-s)(W\sigma(H))^{j}\big{)}[V_{j},V_{j}]-D^{2}\eta\big{(}(W \sigma(H))^{j}\big{)}[V_{j},V_{j}]\bigg{\}}\text{Leb}(\text{d}s)\] \[=\sum_{j=1}^{m}\mathbb{E}\bigg{\{}D^{2}\eta\big{(}(W\sigma(H))^{j }\big{)}[\widehat{V}_{j},\widehat{V}_{j}]\bigg{\}} \tag{6.6}\] \[\quad+\sum_{j=1}^{m}\int_{0}^{1}\mathbb{E}\bigg{\{}D^{2}\eta\big{(} sW\sigma(H)+(1-s)(W\sigma(H))^{j}\big{)}[V_{j},V_{j}]-D^{2}\eta\big{(}(W \sigma(H))^{j}\big{)}[V_{j},V_{j}]\bigg{\}}\text{Leb}(\text{d}s). \tag{6.7}\]
To bound (6.3), we subtract this expression from (6.4) and then bound the absolute value. In particular, we first bound the absolute difference between (6.4) and (6.6), and then the absolute value of (6.7). For the former, applying (4.3), which gives that the second derivative of \(\eta\) is Lipschitz, followed by Hölder's inequality, yields that this difference is bounded by
\[\sum_{j=1}^{m}\Bigl{|}\mathbb{E}\bigg{\{}D^{2}\eta\big{(}(W\sigma (H))^{j}\big{)}[\widehat{V}_{j},\widehat{V}_{j}]-D^{2}\eta\big{(}W \sigma(H)\big{)}[\widehat{V}_{j},\widehat{V}_{j}]\bigg{\}}\Bigr{|}\\ \leqslant\frac{1}{3}\sum_{j=1}^{m}\mathbb{E}\bigg{[}\Bigl{\|}V_{ j}\Bigr{\|}_{\infty}\Bigl{\|}\widehat{V}_{j}\Bigr{\|}_{\infty}^{2}\bigg{]} \leqslant\frac{1}{3}\sum_{j=1}^{m}\mathbb{E}\bigg{[}\Bigl{\|}V_{j}\Bigr{\|}_{ \infty}^{3}\bigg{]}. \tag{6.8}\]
Similarly, but more simply, the absolute value of (6.7) is bounded by one-half this same quantity.
To bound (6.8), we use the fact that \(H_{j}\) is independent of \(W_{ij},i=1,\ldots,n\), and again apply Hölder's inequality, to find that
\[\frac{1}{3}\sum_{j=1}^{m}\mathbb{E}\bigg{[}\Bigl{\|}\sum_{i=1}^{n }W_{ij}\sigma(H_{j})\mathbf{e}_{i}\Bigr{\|}_{\infty}^{3}\bigg{]} \leqslant\frac{1}{3}\sum_{j=1}^{m}\mathbb{E}\big{[}\|\sigma(H_{j} )\|_{\infty}^{3}\big{]}\,\mathbb{E}\left[\Bigl{\|}\sum_{i=1}^{n}W_{ij}\mathbf{ e}_{i}\right\|^{4}\right]^{3/4}\] \[=\frac{1}{3}\sum_{j=1}^{m}\mathbb{E}\big{[}\|\sigma(H_{j})\|_{ \infty}^{3}\big{]}\,\mathbb{E}\left[\Bigl{(}\sum_{i=1}^{n}W_{ij}^{2}\Big{)}^{2 }\right]^{3/4}\] \[\leqslant\frac{m}{3}\,\mathbb{E}\big{[}\|\sigma(H_{1})\|_{\infty}^ {3}\big{]}\bigg{(}\frac{n^{2}Bc_{w}^{2}}{m^{2}}\bigg{)}^{3/4}\]
\[=\frac{1}{3}c_{w}^{3/2}B^{3/4}\,\mathbb{E}\big{[}\|\sigma(H_{1})\|_{ \infty}^{3}\big{]}\frac{n^{3/2}}{\sqrt{m}},\]
where we have used that \(\mathbb{E}\big{[}W_{ij}^{4}\big{]}\leqslant B(c_{w}/m)^{2}\). Hence, we obtain the desired inequality, (6.1).
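The Gaussian approximation quantified by Lemma 6.1 is visible already at a single input point: the excess kurtosis of one coordinate of \(F=W\sigma(H)\), which vanishes for a Gaussian, decays as the inner width \(m\) grows. The following Monte Carlo sketch is our own illustration; the choices \(\sigma=\tanh\), standard normal \(H_{j}(x)\), and Gaussian weights are arbitrary instances of the lemma's assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
c_w = 1.0

def excess_kurtosis_of_F(m, reps=400_000, batch=50_000):
    """Monte Carlo excess kurtosis of F_1(x) = sum_j W_1j tanh(H_j(x)) at one x."""
    s2 = s4 = 0.0
    for _ in range(reps // batch):
        H = rng.standard_normal((batch, m))
        W = rng.standard_normal((batch, m)) * np.sqrt(c_w / m)
        F = (W * np.tanh(H)).sum(axis=1)  # E[F] = 0 by the symmetry of W
        s2 += (F ** 2).sum()
        s4 += (F ** 4).sum()
    s2, s4 = s2 / reps, s4 / reps
    return s4 / s2 ** 2 - 3.0             # zero for a centered Gaussian

for m in (4, 16, 64):
    print(m, excess_kurtosis_of_F(m))     # decays toward 0 as the width m grows
```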
In Section 6.2, Lemma 6.1 is used to derive bounds on the difference between \(G^{(\ell)}\) and \(F^{(\ell)}\) in the smooth function metric for general \((\mathcal{M},\mathsf{d})\). For informative bounds in the Wasserstein metric when \(\mathcal{M}\equiv\mathcal{S}^{n}\), we apply Theorem 1.1. The following lemma gives some moment bounds that are used along with Lemma 5.3 to bound the terms \(\mathbb{E}\|F-F_{\varepsilon}\|_{\infty}\) and \(\mathbb{E}\|G-G_{\varepsilon}\|_{\infty}\) appearing in (1.2).
**Lemma 6.2**.: _For fixed \(p\in\mathbb{N}\), assume \(H\in L^{2p}(\mathcal{M};\mathbb{R}^{m})\) is a random field with identically distributed coordinate processes, and let \(W:\mathbb{R}^{m}\to\mathbb{R}\) be a \(1\times m\) random matrix that is independent of \(H\) and has centered independent entries satisfying \(\mathbb{E}[W_{1j}^{2p}]\leqslant\tilde{c}/m^{p}\), for some \(\tilde{c}>0\). Letting \(\sigma:\mathbb{R}\to\mathbb{R}\) be Lipschitz with constant \(\mathrm{Lip}_{\sigma}\), define \(F:\mathcal{M}\to\mathbb{R}\) by_
\[F(x)=W\sigma\big{(}H(x)\big{)},\]
_and finally, letting \(A_{m}^{(2p)}\) be the set of \((j_{1},\ldots,j_{2p})\in\{1,\ldots,m\}^{2p}\) where the label of every coordinate appears at least twice, there is a constant \(c\) depending only on \(p\) and \(\tilde{c}\) such that_
\[\mathbb{E}\left[(F(x)-F(y))^{2p}\right] \leqslant\frac{\tilde{c}}{m^{p}}\sum_{(j_{1},\ldots,j_{2p})\in A_ {m}^{(2p)}}\prod_{\ell=1}^{2p}\mathbb{E}\left[\Big{(}\sigma(H_{j_{\ell}}(x))- \sigma(H_{j_{\ell}}(y))\Big{)}^{2p}\right]^{1/(2p)} \tag{6.9}\] \[\leqslant c\,\mathrm{Lip}_{\sigma}^{2p}\,\mathbb{E}\big{[}\big{(}H _{1}(x)-H_{1}(y)\big{)}^{2p}\big{]}. \tag{6.10}\]
Proof.: For the first inequality, direct calculation gives
\[\mathbb{E}\left[(F(x)-F(y))^{2p}\right] =\sum_{j_{1},\ldots,j_{2p}=1}^{m}\mathbb{E}\bigg{[}\prod_{ \ell=1}^{2p}W_{1,j_{\ell}}\bigg{]}\,\mathbb{E}\bigg{[}\prod_{\ell=1}^{2p} \Big{(}\sigma(H_{j_{\ell}}(x))-\sigma(H_{j_{\ell}}(y))\Big{)}\bigg{]}\] \[=\sum_{(j_{1},\ldots,j_{2p})\in A_{m}^{(2p)}}\mathbb{E}\left[\prod _{\ell=1}^{2p}W_{1,j_{\ell}}\right]\mathbb{E}\left[\prod_{\ell=1}^{2p}\Big{(} \sigma(H_{j_{\ell}}(x))-\sigma(H_{j_{\ell}}(y))\Big{)}\right]\!,\]
which follows since \(W_{1j}\) are independent and have mean zero. From this, (6.9) easily follows by Hölder's inequality.
As \(H\) has identically distributed coordinate processes, the right-hand side of (6.9) satisfies
\[\frac{\tilde{c}}{m^{p}}\sum_{(j_{1},\ldots,j_{2p})\in A_{m}^{(2p )}}\prod_{\ell=1}^{2p} \mathbb{E}\bigg{[}\Big{(}\sigma(H_{j_{\ell}}(x))-\sigma(H_{j_{\ell }}(y))\Big{)}^{2p}\bigg{]}^{1/(2p)}\] \[=\frac{\tilde{c}}{m^{p}}\,\big{|}A_{m}^{(2p)}\big{|}\,\mathbb{E} \bigg{[}\Big{(}\sigma(H_{1}(x))-\sigma(H_{1}(y))\Big{)}^{2p}\bigg{]},\] \[\leqslant c\,\,\mathbb{E}\bigg{[}\Big{(}\sigma(H_{1}(x))-\sigma(H _{1}(y))\Big{)}^{2p}\bigg{]},\]
where the last inequality follows because \(\big{|}A_{m}^{(2p)}\big{|}=\mathrm{O}(m^{p})\), with a constant depending only on \(p\). The upper bound (6.10) now easily follows, since \(\sigma\) is Lipschitz.
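The count \(|A_{m}^{(2p)}|=\mathrm{O}(m^{p})\) is easily verified by brute force for small cases; for instance, for \(2p=4\) one finds \(|A_{m}^{(4)}|=m+3m(m-1)=3m^{2}-2m\). The enumeration below is our own sanity check.

```python
from collections import Counter
from itertools import product

def A_size(m, two_p):
    """number of tuples in {1,...,m}^{2p} in which every appearing label
    occurs at least twice"""
    return sum(1 for tup in product(range(m), repeat=two_p)
               if all(c >= 2 for c in Counter(tup).values()))

for m in (2, 4, 6, 8):
    print(m, A_size(m, 4), 3 * m ** 2 - 2 * m)  # the last two columns agree
```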
### \(W_{1}\) bounds for wide random neural networks: Proof of Theorem 1.2
Combining the previous results, we can now prove our main theorem for wide random neural networks.
Proof of Theorem 1.2.: The proof proceeds by induction on \(\ell=2,\ldots,L\) for the hypotheses that there is a constant \(c\) (which may change from line to line) depending only on \((c_{w}^{(m)},c_{b}^{(m)},B^{(m)})_{m=0}^{L}\), \(n,p,\operatorname{Lip}_{\sigma}\) and \(\sigma(0)\) such that
\[\mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell)},G^{(\ell)}\big{)}\leqslant c\sum_{m=1}^{\ell-1}n_{m+1}^{1/2}\bigg{(}\frac{n_{m+1}^{4}}{n_{m}}\bigg{)}^{(1-\frac{n}{p})/(6(1-\frac{n}{p})+8(n+\iota))}\sqrt{\log(n_{m}/n_{m+1}^{4})}\prod_{j=m+1}^{\ell-1}\mathbb{E}\|W^{(j)}\|_{\operatorname{op}}, \tag{6.11}\]
and
\[\mathbb{E}\left[(G_{i}^{(\ell)}(x)-G_{i}^{(\ell)}(y))^{2p}\right]\leqslant c \,\mathsf{d}(x,y)^{2p},\,\,\,i=1,\ldots,n_{\ell}, \tag{6.12}\]
and finally
\[\mathbb{E}\left[\|\sigma(G_{i}^{(\ell)}-b_{i}^{(\ell)})\|_{\infty}^{3} \right]\leqslant c,\,\,\,i=1,\ldots,n_{\ell}. \tag{6.13}\]
We first note that the bias \(b^{(\ell)}\) plays no role in the bound and can be set to zero. The reduction is obvious for (6.12) and (6.13), since we can write \(G^{(\ell)}=\widetilde{G}^{(\ell)}+b^{(\ell)}\), with \(\widetilde{G}^{(\ell)}\) a Gaussian process independent of \(b^{(\ell)}\), having covariance \(\widetilde{C}^{(\ell)}(x,y)=C^{(\ell)}(x,y)-\operatorname{I}_{n_{\ell}}c_{b}^{(\ell)}\). To see why we can also make this simplification for (6.11), assume that this inequality holds for \(F^{(\ell)}\) and \(G^{(\ell)}\) when the biases are zero. Define \(\widetilde{F}^{(\ell)}=F^{(\ell)}+b^{(\ell)}\) and \(\widetilde{G}^{(\ell)}=G^{(\ell)}+b^{(\ell)}\), where the summands are independent. For any Lipschitz \(\zeta:\operatorname{C}(\mathcal{S}^{n};\mathbb{R}^{n_{\ell}})\to\mathbb{R}\) we have, by independence, that
\[\bigl{|}\mathbb{E}[\zeta(\widetilde{F}^{(\ell)})]-\mathbb{E}[\zeta(\widetilde {G}^{(\ell)})]\bigr{|}=\bigl{|}\mathbb{E}[\zeta(F^{(\ell)}+b^{(\ell)})-\zeta( G^{(\ell)}+b^{(\ell)})]\bigr{|}=\bigl{|}\mathbb{E}[\widetilde{\zeta}(F^{(\ell)})- \widetilde{\zeta}(G^{(\ell)})]\bigr{|}\]
where
\[\widetilde{\zeta}(f)=\mathbb{E}[\zeta(f+b^{(\ell)})],\]
which is \(1\)-Lipschitz, since
\[\bigl{|}\widetilde{\zeta}(f)-\widetilde{\zeta}(g)\bigr{|}=\bigl{|}\mathbb{E} \bigl{[}\zeta(f+b^{(\ell)})-\zeta(g+b^{(\ell)})\bigr{]}\bigr{|}\leqslant\|f-g\|_{ \infty}.\]
Hence Wasserstein bounds in the case where the biases are non-zero are upper bounded by those in the zero bias case. Note that eliminating the biases \(b^{(\ell)}\) in this manner requires them to be Gaussian, as otherwise the process \(G^{(\ell)}\) may not be Gaussian.
We now begin the proof of the base case, \(\ell=2\). We first show (6.12), as well as some other related moment bounds used to show (6.11). We start by applying (6.10) from Lemma 6.2 with \(W=W_{1,\cdot}^{(1)},H=F^{(1)}\) and \(m=n_{1}\), to find
\[\mathbb{E}\Bigl{[}\bigl{(}F_{1}^{(2)}(x)-F_{1}^{(2)}(y)\bigr{)}^{2p}\Bigr{]} \leqslant c\,\mathbb{E}\Bigl{[}\bigl{(}F_{1}^{(1)}(x)-F_{1}^{(1)}(y)\bigr{)}^{ 2p}\Bigr{]}.\]
Applying (6.9) from Lemma 6.2 with \(W=W_{1,\cdot}^{(0)},H(x)=x\) and \(m=n_{0}\), and recalling that \(\sigma\) is Lipschitz, we obtain
\[\mathbb{E}\left[\bigl{(}F_{1}^{(1)}(x)-F_{1}^{(1)}(y)\bigr{)}^{2p}\right] \leqslant c\sum_{(j_{1},\ldots,j_{2p})\in A_{n_{0}}^{(2p)}}\prod_{\ell=1}^{2p} \lvert x_{j_{\ell}}-y_{j_{\ell}}\rvert\leqslant c\sum_{j_{1},\ldots,j_{2p}=1} ^{n_{0}}\prod_{\ell=1}^{2p}\lvert x_{j_{\ell}}-y_{j_{\ell}}\rvert=c\,\Bigl{(}\sum_{j=1}^{n_{0}}\lvert x_{j}-y_{j}\rvert\Bigr{)}^{2p}\leqslant c\,\|x-y\|_{2}^{2p}\leqslant c\,\mathsf{d}(x,y)^{2p},\]

where the last step uses that the Euclidean distance between points of \(\mathcal{S}^{n}\) is dominated by the geodesic distance. To control \(\mathbb{E}\big{[}\|\sigma(G_{1}^{(1)})\|_{\infty}^{3}\big{]}\), note that for any fixed \(y\in\mathcal{S}^{n}\),
\[\big{|}\sigma(G_{1}^{(1)}(x))\big{|}\leqslant\big{|}\sigma(G_{1}^{(1)}(x))- \sigma(G_{1}^{(1)}(y))\big{|}+|\sigma(G_{1}^{(1)}(y))-\sigma(0)|+|\sigma(0)|\] \[\leqslant\text{Lip}_{\sigma}\big{(}\omega_{G_{1}^{(1)}}(\pi)+|G_{1 }^{(1)}(y)|\big{)}+|\sigma(0)|,\]
where \(\omega_{G_{1}^{(1)}}(\theta)\) denotes the modulus of continuity of \(G_{1}^{(1)}\) at level \(\theta\); see Definition 5.1. Taking the supremum over \(x\) implies
\[\big{\|}\sigma(G_{1}^{(1)})\big{\|}_{\infty}\leqslant c\big{(}\omega_{G_{1}^{( 1)}}(\pi)+|G_{1}^{(1)}(y)|+|\sigma(0)|\big{)}. \tag{6.18}\]
Because \(G^{(1)}(y)=W^{(0)}y\), it is easy to see that
\[\mathbb{E}\Big{[}\big{\|}\sigma(G_{1}^{(1)})\big{\|}_{\infty}^{3}\Big{]} \leqslant c. \tag{6.19}\]
Substituting this upper bound into (6.17) and combining with (6.16) in Theorem 1.1 implies
\[\mathsf{d}_{\mathcal{W}}(F^{(2)},G^{(2)})\leqslant c\sqrt{n_{2}}\Big{(} \varepsilon^{\frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/\varepsilon)}+\delta+ \delta^{-2}\varepsilon^{-2(n+\iota)}\frac{n_{2}^{2}}{\sqrt{n_{1}}}\Big{)}.\]
Choosing
\[\delta=\varepsilon^{-\frac{2}{3}(n+\iota)}\bigg{(}\frac{n_{2}^{4}}{n_{1}}\bigg{)} ^{1/6}\ \ \text{and}\ \ \varepsilon=\bigg{(}\frac{n_{2}^{4}}{n_{1}}\bigg{)}^{1/(3(1-\frac{n}{p})+4(n+ \iota))}\]
we have shown that
\[\mathsf{d}_{\mathcal{W}}(F^{(2)},G^{(2)})\leqslant c\sqrt{n_{2}}\bigg{(}\frac{ n_{2}^{4}}{n_{1}}\bigg{)}^{(1-\frac{n}{p})/(6(1-\frac{n}{p})+8(n+\iota))}\sqrt{ \log(n_{1}/n_{2}^{4})}.\]
For (6.13), in exactly the same way as (6.18), we have for any \(y\in\mathcal{S}^{n}\),
\[\big{|}\sigma(G_{1}^{(2)}(x))\big{|}\leqslant c\big{(}\omega_{G_{1}^{(2)}}( \pi)+|G_{1}^{(2)}(y)|+|\sigma(0)|\big{)}. \tag{6.20}\]
But (6.12) and Proposition 5.2 together imply \(\mathbb{E}\big{[}\omega_{G_{1}^{(2)}}(\pi)^{3}\big{]}\leqslant c\). Because \(G_{1}^{(2)}\) is Gaussian, we have that
\[\mathbb{E}\big{[}|G_{1}^{(2)}(y)|^{3}\big{]}=2\sqrt{2/\pi}\,\mathrm{Var}(G_{1 }^{(2)}(y))^{3/2},\]
and, by definition and using (6.19),
\[\mathrm{Var}(G_{1}^{(2)}(y))=c_{w}^{(1)}\,\mathbb{E}\big{[}\sigma\big{(}G_{1 }^{(1)}(y)\big{)}^{2}\big{]}+c_{b}^{(1)}\leqslant c.\]
Thus
\[\mathbb{E}\big{[}\|\sigma(G_{1}^{(2)})\|_{\infty}^{3}\big{]}\leqslant c,\]
and the base case is established.
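The Gaussian third-absolute-moment identity used above, \(\mathbb{E}|X|^{3}=2\sqrt{2/\pi}\,\mathrm{Var}(X)^{3/2}\) for centered Gaussian \(X\), is quickly confirmed numerically (a sanity check only; the variance is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(6)
sigma = 1.7
X = sigma * rng.standard_normal(2_000_000)
print(np.mean(np.abs(X) ** 3))              # Monte Carlo estimate
print(2 * np.sqrt(2 / np.pi) * sigma ** 3)  # closed form; the values agree
```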
For the induction step, assume (6.11), (6.12), and (6.13) for some \(\ell\geqslant 2\); we show these three conditions are satisfied when \(\ell\) is replaced by \(\ell+1\). For (6.12) we have from the definition of the covariance \(C^{(\ell+1)}\) of \(G^{(\ell+1)}\) that
\[\mathbb{E}\Big{[}\big{(}G_{1}^{(\ell+1)}(x)-G_{1}^{(\ell+1)}(y) \big{)}^{2}\Big{]} =c\,\mathbb{E}\left[\Big{(}\sigma\big{(}G_{1}^{(\ell)}(x)\big{)} -\sigma\big{(}G_{1}^{(\ell)}(y)\big{)}\Big{)}^{2}\right]\] \[\leqslant c\,\mathbb{E}\left[\big{(}G_{1}^{(\ell)}(x)-G_{1}^{(\ell )}(y)\big{)}^{2}\right]\] \[\leqslant c\,\mathsf{d}(x,y)^{2},\]
where the first inequality uses that \(\sigma\) is Lipschitz, and the second step the induction hypothesis. As \(G^{(\ell+1)}\) is Gaussian, we now also have that
\[\mathbb{E}\Big{[}\big{(}G_{1}^{(\ell+1)}(x)-G_{1}^{(\ell+1)}(y)\big{)}^{2p} \Big{]}\leqslant c\,\mathsf{d}(x,y)^{2p}, \tag{6.21}\]
thus advancing the induction hypothesis for (6.12).
Now turning to (6.11), we first define an intermediate random field
\[\widehat{F}^{(\ell+1)}:=W^{(\ell)}\sigma\big{(}G^{(\ell)}\big{)}, \tag{6.22}\]
where we take \(G^{(\ell)}\) to be independent of \(W^{(\ell)}\). By the triangle inequality, we have
\[\mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell+1)},G^{(\ell+1)}\big{)}\leqslant \mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell+1)},\widehat{F}^{(\ell+1)}\big{)}+ \mathsf{d}_{\mathcal{W}}\big{(}\widehat{F}^{(\ell+1)},G^{(\ell+1)}\big{)}. \tag{6.23}\]
By definition, for the first term
\[\mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell+1)},\widehat{F}^{(\ell+1)}\big{)}= \mathsf{d}_{\mathcal{W}}\big{(}W^{(\ell)}\sigma(F^{(\ell)}),W^{(\ell)}\sigma( G^{(\ell)})\big{)}. \tag{6.24}\]
The function \(\widetilde{\zeta}(f)=\mathbb{E}\big{[}\zeta(W^{(\ell)}\sigma(f))\big{]}\) satisfies
\[\big{|}\widetilde{\zeta}(f)-\widetilde{\zeta}(g)\big{|}\leqslant\mathbb{E}\big{[} \|W^{(\ell)}\|_{\mathrm{op}}\big{]}\operatorname{Lip}_{\sigma}\|f-g\|_{\infty},\]
and so the independence of \(W^{(\ell)}\) from \(F^{(\ell)}\) and \(G^{(\ell)}\) implies (6.24) is upper bounded as
\[\mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell+1)},\widehat{F}^{(\ell+1)}\big{)} \leqslant\mathbb{E}\big{[}\|W^{(\ell)}\|_{\mathrm{op}}\big{]}\operatorname{ Lip}_{\sigma}\mathsf{d}_{\mathcal{W}}\big{(}F^{(\ell)},G^{(\ell)}\big{)}. \tag{6.25}\]
Now working on the second term of (6.23), we apply Theorem 1.1 and bound \(\|\widehat{F}^{(\ell+1)}-\widehat{F}^{(\ell+1)}_{\varepsilon}\|_{\infty}\), \(\|G^{(\ell+1)}-G^{(\ell+1)}_{\varepsilon}\|_{\infty}\), and \(\mathsf{d}_{\mathcal{F}}(\widehat{F}^{(\ell+1)},G^{(\ell+1)})\). By (6.10) of Lemma 6.2 with \(W=W^{(\ell)}_{1,\cdot}\), \(H=G^{(\ell)}\) and \(m=n_{\ell}\), we have
\[\mathbb{E}\Big{[}\big{(}\widehat{F}^{(\ell+1)}_{1}(x)-\widehat{F}^{(\ell+1)}_{1}(y)\big{)}^{2p} \Big{]}\leqslant c\,\mathbb{E}\Big{[}\big{(}G^{(\ell)}_{1}(x)-G^{(\ell)}_{1}( y)\big{)}^{2p}\Big{]}\leqslant c\,\mathsf{d}(x,y)^{2p},\]
where the last inequality holds via the induction hypothesis (6.12). In conjunction with inequality (6.21) for \(G^{(\ell+1)}\), Lemma 5.3 now implies that
\[\max\big{\{}\mathbb{E}\|\widehat{F}^{(\ell+1)}-\widehat{F}^{(\ell+1)}_{\varepsilon}\|_{\infty}, \mathbb{E}\|G^{(\ell+1)}-G^{(\ell+1)}_{\varepsilon}\|_{\infty}\big{\}} \leqslant c\,\sqrt{n_{\ell+1}}\,\varepsilon^{\frac{1}{2}(1-\frac{n}{p})}\sqrt {\log(1/\varepsilon)}. \tag{6.26}\]
Now, Lemma 6.1 with \(F=\widehat{F}^{(\ell+1)}\) and \(H=G^{(\ell)}\), noting that \(G^{(\ell)}\) is continuous with i.i.d. coordinate processes, implies
\[\mathsf{d}_{\mathcal{F}}\big{(}\widehat{F}^{(\ell+1)},G^{(\ell+1 )}\big{)} \leqslant\big{(}c^{(\ell)}_{w}\big{)}^{3/2}\big{(}B^{(\ell)} \big{)}^{3/4}\,\mathbb{E}\big{[}\|\sigma(G^{(\ell)}_{1})\|_{\infty}^{3}\big{]}\frac{n _{\ell+1}^{3/2}}{\sqrt{n_{\ell}}}\] \[\leqslant c\,\frac{n_{\ell+1}^{3/2}}{\sqrt{n_{\ell}}},\]
where we have used the induction hypothesis (6.13) in the final inequality. Applying this inequality along with (6.26) in Theorem 1.1 yields
\[\mathsf{d}_{\mathcal{W}}(\widehat{F}^{(\ell+1)},G^{(\ell+1)})\leqslant c\sqrt {n_{\ell+1}}\bigg{(}\delta^{-2}\varepsilon^{-2(n+\iota)}\frac{n_{\ell+1}^{2}}{ \sqrt{n_{\ell}}}+\varepsilon^{\frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/ \varepsilon)}+\delta\bigg{)}, \tag{6.27}\]
and choosing
\[\delta=\varepsilon^{-\frac{2}{3}(n+\iota)}\bigg{(}\frac{n_{\ell+1}^{4}}{n_{ \ell}}\bigg{)}^{1/6}\ \ \text{and}\ \ \varepsilon=\bigg{(}\frac{n_{\ell+1}^{4}}{n_{\ell}}\bigg{)}^{1/(3(1-\frac{n}{p} )+4(n+\iota))}\]
gives
\[\mathsf{d}_{\mathcal{W}}(\widehat{F}^{(\ell+1)},G^{(\ell+1)})\leqslant c\sqrt {n_{\ell+1}}\bigg{(}\frac{n_{\ell+1}^{4}}{n_{\ell}}\bigg{)}^{(1-\frac{n}{p})/( 6(1-\frac{n}{p})+8(n+\iota))}\sqrt{\log(n_{\ell}/n_{\ell+1}^{4})}.\]
Using this bound and (6.25) in (6.23), and applying the induction hypothesis (6.11) advances the induction for (6.11).
Finally, advancing the induction for (6.13), i.e., bounding \(\mathbb{E}\left[\|\sigma(G^{(\ell+1)}_{1})\|_{\infty}^{3}\right]\leqslant c\), follows in exactly the same way as for the base case, starting at (6.20).
### Improved \(\mathbf{W_{1}}\) bounds: Proof of Theorem 1.4
This subsection proves Theorem 1.4, showing the rate improvement under the additional assumption that \(\sigma\) has three bounded derivatives. The rate improvement illustrated in Remark 1.5 comes from the fact that the induction steps are carried out in the smooth metric \(\mathsf{d}_{\mathcal{F}}\), with smoothing applied only at the final layer, rather than at each layer of an induction in the \(\mathsf{d}_{\mathcal{W}}\) metric; compare (6.27) and (6.28).
**Theorem 6.3**.: _Assume that \(\sigma\) has three bounded derivatives, and let the weights satisfy the moment condition in (1.3). Recalling the definition of \(\beta_{L}\) from (1.4), for any \(L\geqslant 2\), there exists a positive constant \(c\), depending only on \((c_{w}^{(\ell)},c_{b}^{(\ell)},B^{(\ell)})_{\ell=0}^{L},n,p\), and \(\|\sigma^{(k)}\|_{\infty}\), \(k=1,2,3\), such that \(\mathsf{d}_{\mathcal{F}}(F^{(L)},G^{(L)})\leqslant c\,\beta_{L}\)._
Proof.: The proof follows by an induction similar to that in the proof of Theorem 1.2. For the base case \(L=2\), first note we can again set \(b^{(2)}=0\), since if \(\zeta\in\mathcal{F}\), then straightforward considerations imply \(\widetilde{\zeta}(f):=\mathbb{E}[\zeta(f+b^{(2)})]\in\mathcal{F}\). Thus, for \(\widetilde{G}^{(2)}=G^{(2)}-b^{(2)}\) and \(\widetilde{F}^{(2)}=F^{(2)}-b^{(2)}\) we have

\[\big{|}\mathbb{E}[\zeta(F^{(2)})]-\mathbb{E}[\zeta(G^{(2)})]\big{|}=\big{|}\mathbb{E}[\widetilde{\zeta}(\widetilde{F}^{(2)})]-\mathbb{E}[\widetilde{\zeta}(\widetilde{G}^{(2)})]\big{|},\]
and so it is enough to bound the right-hand side for generic \(\widetilde{\zeta}\in\mathcal{F}\). With this simplification, we can apply Lemma 6.1 with \(m=n_{1}\) and \(n=n_{2}\) to find
\[\mathsf{d}_{\mathcal{F}}\big{(}F^{(2)},G^{(2)}\big{)}\leqslant(c_{w}^{(1)})^ {3/2}(B^{(1)})^{3/4}\,\mathbb{E}\big{[}\|\sigma(G_{1}^{(1)})\|_{\infty}^{3} \big{]}\frac{n_{2}^{3/2}}{\sqrt{n_{1}}}\leqslant c\frac{n_{2}^{3/2}}{\sqrt{n_ {1}}},\]
where the last inequality follows from (6.19), which states \(\mathbb{E}\big{[}\|\sigma(G_{1}^{(1)})\|_{\infty}^{3}\big{]}\leqslant c\).
To advance the induction, assume the bound on \(\mathsf{d}_{\mathcal{F}}(F^{(\ell)},G^{(\ell)})\). In exactly the same way as above, we can assume \(b^{(\ell)}=0\). Now, recall that in (6.22), we defined the intermediate random field
\[\widehat{F}^{(\ell+1)}:=W^{(\ell)}\sigma\big{(}G^{(\ell)}\big{)},\]
where \(G^{(\ell)}\) is independent of \(W^{(\ell)}\). The triangle inequality, as before, yields
\[\mathsf{d}_{\mathcal{F}}\big{(}F^{(\ell+1)},G^{(\ell+1)}\big{)}\leqslant \mathsf{d}_{\mathcal{F}}\big{(}F^{(\ell+1)},\widehat{F}^{(\ell+1)}\big{)}+ \mathsf{d}_{\mathcal{F}}\big{(}\widehat{F}^{(\ell+1)},G^{(\ell+1)}\big{)},\]
and we again define the function \(\widetilde{\zeta}(f)=\mathbb{E}\big{[}\zeta(W^{(\ell)}\sigma(f))\big{]}\). We need to argue that up to a constant factor, \(\widetilde{\zeta}\in\mathcal{F}\). Starting with the first derivative and denoting component-wise (Hadamard) multiplication by \(\circ\), we first have
\[\big{|}\widetilde{\zeta}(f+g)-\mathbb{E}\big{[}\zeta\big{(}W^{(\ell)} \big{(}\sigma(f)+\sigma^{\prime}(f)\circ g\big{)}\big{)}\big{]}\big{|} \leqslant\sup_{h}\|D\zeta(h)\|\,\mathbb{E}\big{\|}W^{(\ell)}\big{(} \sigma(f+g)-\sigma(f)-\sigma^{\prime}(f)\circ g\big{)}\big{\|}_{\infty}\] \[\leqslant\sup_{h}\|D\zeta(h)\|\,\mathbb{E}\|W^{(\ell)}\|_{\rm op}\, \|\sigma^{\prime\prime}\|_{\infty}\|g\|_{\infty}^{2}.\]
Combining the above display with a direct Taylor-like computation, we next have that
\[\widetilde{\zeta}(f+g)-\widetilde{\zeta}(f)=\mathbb{E}\Big{[}D\zeta(W^{(\ell)}\sigma(f))\big{[}W^{(\ell)} \big{(}\sigma^{\prime}(f)\circ g\big{)}\big{]}\Big{]}+\mathrm{O}\big{(}\|g\|_{ \infty}^{2}\big{)},\]
so that
\[D\widetilde{\zeta}(f)[g]=\mathbb{E}\Big{[}D\zeta(W^{(\ell)}\sigma(f))\big{[}W^{( \ell)}\big{(}\sigma^{\prime}(f)\circ g\big{)}\big{]}\Big{]}.\]
Since \(\sup_{h}\|D\zeta(h)\|\leqslant 1\), it follows that
\[\sup_{f}\|D\widetilde{\zeta}(f)\|\leqslant\|\sigma^{\prime}\|_{\infty}\, \mathbb{E}\|W^{(\ell)}\|_{\mathrm{op}}.\]
Similar but more onerous computations show
\[D^{2}\widetilde{\zeta}(f)[g^{(1)},g^{(2)}]= \ \mathbb{E}\Big{[}D^{2}\zeta(W^{(\ell)}\sigma(f))\big{[}W^{( \ell)}\big{(}\sigma^{\prime}(f)\circ g^{(1)}\big{)},W^{(\ell)}\big{(}\sigma^{ \prime}(f)\circ g^{(2)}\big{)}\big{]}\Big{]}\] \[\ +\mathbb{E}\Big{[}D\zeta(W^{(\ell)}\sigma(f))\big{[}W^{(\ell)} \big{(}\sigma^{\prime\prime}(f)\circ g^{(1)}\circ g^{(2)}\big{)}\big{]}\Big{]},\]
so that
\[\sup_{f}\|D^{2}\widetilde{\zeta}(f)\|\leqslant\|\sigma^{\prime}\|_{\infty}^{2 }\,\mathbb{E}\Big{[}\|W^{(\ell)}\|_{\mathrm{op}}^{2}\Big{]}+\|\sigma^{\prime \prime}\|_{\infty}\,\mathbb{E}\|W^{(\ell)}\|_{\mathrm{op}}<\infty.\]
Finally, some straightforward but space-consuming manipulations, using in particular that
\[|D^{2}\zeta(h)[g^{(1)},g^{(2)}]|\leqslant 3\|g^{(1)}\|_{\infty}\|g^{(2)}\|_{ \infty}\|D^{2}\zeta(h)\|,\]
from Barbour et al. (2023, Lemma 2.4), imply that
\[\frac{\big{\|}D^{2}\widetilde{\zeta}(h)-D^{2}\widetilde{\zeta}(f)\big{\|}}{ \|f-h\|}\leqslant c\,\max\bigl{\{}1,\mathbb{E}\big{[}\|W^{(\ell)}\|_{\mathrm{ op}}^{3}\big{]}\big{\}}.\]
Hence, using the independence of \(W^{(\ell)}\) from \(F^{(\ell)}\) and \(G^{(\ell)}\), we have
\[\mathsf{d}_{\mathcal{F}}\big{(}F^{(\ell+1)},\widehat{F}^{(\ell+1)}\big{)} \leqslant c\,\max\bigl{\{}1,\mathbb{E}\big{[}\|W^{(\ell)}\|_{\mathrm{op}}^{3} \big{]}\big{\}}\mathsf{d}_{\mathcal{F}}\big{(}F^{(\ell)},G^{(\ell)}\big{)},\]
and the proof now follows as that for Theorem 1.2, mutatis mutandis.
We are now ready to prove Theorem 1.4, using Theorem 1.1. Compared to the proof of Theorem 1.2, the specific choice of the smoothing and regularization terms, \(\varepsilon\) and \(\delta\) are different, resulting in the required rate improvement.
Proof of Theorem 1.4.: We apply Theorem 1.1, with \(F=F^{(L)}\) and \(G=G^{(L)}\), and hence with \(d=n_{L}\). Applying Lemma 5.3, using induction with (6.10) and (6.14), we have that
\[\mathbb{E}\|F^{(L)}-F^{(L)}_{\varepsilon}\|_{\infty}\leqslant c\,\sqrt{n_{L} }\,\varepsilon^{\frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/\varepsilon)},\]
and the same bound also holds for \(\mathbb{E}\|G^{(L)}-G^{(L)}_{\varepsilon}\|_{\infty}\). From Theorem 6.3, we have
\[\mathsf{d}_{\mathcal{F}}(F^{(L)},G^{(L)})\leqslant c\,\beta_{L}.\]
Putting everything together, we have
\[\mathsf{d}_{\mathcal{W}}(F^{(L)},G^{(L)})\leqslant c\sqrt{n_{L}}\,\Big{(} \sqrt{n_{L}}\,\beta_{L}\,\delta^{-2}\varepsilon^{-2(n+\iota)}+\varepsilon^{ \frac{1}{2}(1-\frac{n}{p})}\sqrt{\log(1/\varepsilon)}+\delta\Big{)}\,. \tag{6.28}\]
Picking \(\varepsilon\) and \(\delta\) as
\[\delta=\varepsilon^{-2(n+\iota)/3}(n_{L}\beta_{L}^{2})^{1/6}\qquad\text{and}\qquad\varepsilon =(n_{L}\beta_{L}^{2})^{1/(3(1-\frac{n}{p})+4(n+\iota))},\]
we obtain the desired result. |
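The choices of \(\delta\) and \(\varepsilon\) above equalize the three terms of (6.28) up to the logarithmic factor. This balancing can be confirmed symbolically; the following check of the algebra is our own, writing \(q=1-n/p\) and \(s=n+\iota\).

```python
import sympy as sp

beta, nL, q, s = sp.symbols('beta n_L q s', positive=True)  # q = 1 - n/p, s = n + iota

eps = (nL * beta ** 2) ** (1 / (3 * q + 4 * s))
delta = eps ** (-2 * s / 3) * (nL * beta ** 2) ** sp.Rational(1, 6)

t1 = sp.sqrt(nL) * beta * delta ** -2 * eps ** (-2 * s)  # first term of (6.28)
t2 = eps ** (q / 2)                                      # second term, log dropped
t3 = delta                                               # third term

print(sp.simplify(t1 / t3), sp.simplify(t2 / t3))        # both ratios equal 1
```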
2306.01674 | Neural Differential Recurrent Neural Network with Adaptive Time Steps | The neural Ordinary Differential Equation (ODE) model has shown success in
learning complex continuous-time processes from observations on discrete time
stamps. In this work, we consider the modeling and forecasting of time series
data that are non-stationary and may have sharp changes like spikes. We propose
an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent
the time development of the hidden states, and we adaptively select time steps
based on the steepness of changes of the data over time so as to train the
model more efficiently for the "spike-like" time series. Theoretically,
RNN-ODE-Adap yields provably a consistent estimation of the intensity function
for the Hawkes-type time series data. We also provide an approximation analysis
of the RNN-ODE model showing the benefit of adaptive steps. The proposed model
is demonstrated to achieve higher prediction accuracy with reduced
computational cost on simulated dynamic system data and point process data and
on a real electrocardiography dataset. | Yixuan Tan, Liyan Xie, Xiuyuan Cheng | 2023-06-02T16:46:47Z | http://arxiv.org/abs/2306.01674v1 | # Neural Differential Recurrent Neural Network with Adaptive Time Steps
###### Abstract
The neural Ordinary Differential Equation (ODE) model has shown success in learning complex continuous-time processes from observations on discrete time stamps. In this work, we consider the modeling and forecasting of time series data that are non-stationary and may have sharp changes like spikes. We propose an RNN-based model, called _RNN-ODE-Adap_, that uses a neural ODE to represent the time development of the hidden states, and we adaptively select time steps based on the steepness of changes of the data over time so as to train the model more efficiently for the "spike-like" time series. Theoretically, _RNN-ODE-Adap_ yields provably a consistent estimation of the intensity function for the Hawkes-type time series data. We also provide an approximation analysis of the RNN-ODE model showing the benefit of adaptive steps. The proposed model is demonstrated to achieve higher prediction accuracy with reduced computational cost on simulated dynamic system data and point process data and on a real electrocardiography dataset.
## 1 Introduction
We consider the modeling and forecasting of time series characterized by _irregular_ time steps and _non-stationary_ patterns, which are commonly observed in various applications, such as finance [10] and healthcare [3]. We treat the data as a sequence of ordered observations from an unknown underlying continuous-time process, sampled at discrete time points. Recurrent Neural Network (RNN) [29] is frequently employed to model such sequential data. This work proposes to use an RNN-based model with a neural Ordinary Differential Equation (ODE) to fit time series data.
Classical neural ODE approaches typically assume _regular_ time grids in the data sequences, or the _same_ irregular time grids across different sequences [27]. When dealing with highly non-stationary data, such as a time series with sudden spikes, it becomes imperative to select sufficiently small time steps to ensure accurate modeling of regions with steep changes, especially those around abrupt spikes. However, over most of the time horizon the series typically varies slowly and is relatively "flat"; there, uniformly refined time steps would result in unnecessarily high computational costs. Some examples of time series with abrupt spikes are shown in Figure 1, illustrating a spectrum ranging from continuous time series to discontinuous ones such as counting processes.
To train the neural ODE for data with non-stationary patterns more effectively, we propose an approach that employs _adaptive time steps_ in the neural ODE model, which we refer to as _RNN-ODE-Adap_. The model adaptively selects the time steps based on the local variation of the time series,
enabling it to capture underlying trends with potentially fewer steps. Our numerical experiments show that, compared to baseline models that use regular time steps, _RNN-ODE-Adap_ achieves higher prediction accuracy with similar or lower time complexity. The contributions of this work are as follows.
* Based on a neural ODE model characterizing the dynamics of hidden states, we propose an algorithm to construct adaptive time steps, which assigns refined time steps to data around "spikes" while using rough time steps for data in "flat" segments. This can significantly reduce the computational cost in the training process with little impact on the modeling performance.
* We provide theoretical insights into the consistency of the model using the example of Hawkes process type data, and the approximation guarantee of the RNN-ODE model that illuminates the benefits of adaptive time steps.
* We conduct numerical experiments on both synthetic data and a real-world time-series data set to demonstrate the advantage of the proposed algorithm in terms of both modeling accuracy and computational efficiency.
### Related Works
Neural ODE. Our work is closely related to the neural ODE [2] model, which parameterizes the derivative of the hidden state using a neural network. In [2], a generative time-series model was proposed, which takes the neural ODE as the decoder. Furthermore, [27] proposed a non-generative model with continuous-time hidden dynamics to handle irregularly sampled data based on [2]. Compared with existing works related to neural ODE [27, 35, 37, 5, 22, 18, 23, 12, 11], we model the ODE that determines the progression of hidden states by including the data itself in the derivative of the hidden state. In contrast to existing works on non-stationary environments, such as the piecewise-constant ODE [11], our work proposes to use adaptive time steps to automatically adapt to sparse spikes in the time series, without pre-defining the time period for each piece of the ODE.
Neural CDE. We note that the Neural Controlled Differential Equation (CDE) [19] also incorporates the observations into the model continuously. Specifically, the hidden states in [19] follow the CDE \(h(t)=h(t_{0})+\int_{t_{0}}^{t}f_{\theta}(h(s))\mathrm{d}X_{s}\), where the integral is a Riemann-Stieltjes integral. We would
Figure 1: Illustration of spike-like time series. The crosses denote the discretely sampled time steps, which can be irregular. In the left panel, the subsequences enclosed with the orange, yellow, and green brackets represent the (training or testing) windows generated from this sequence.
like to emphasize some key differences between model (2) and Neural CDE. The \(X_{s}\) in Neural CDE is the natural cubic spline of \(\{(x(t_{i}),t_{i})\}_{i}\), and \(f_{\theta}:\mathbb{R}^{d_{h}}\to\mathbb{R}^{d_{h}\times(D+1)}\), where \(d_{h}\) is the number of hidden units and \(D\) is the data dimension. Thus, for the same number of hidden units, Neural CDE requires a more complex parameterized \(f_{\theta}\) to model \(h(t)\). Moreover, since \(X_{s}\) is obtained by cubic spline, it is less naturally adapted to the prediction task, which requires extrapolation to time stamps not seen when computing the spline. Therefore, it is hard to evaluate the prediction performance of Neural CDE, and we defer the evaluation under the Neural CDE setting to future work.
Continuous-Time RNNs. Our model belongs to the extensive family of continuous-time RNNs, originating from [26]. Several existing studies explore various RNN architectures, such as [1, 17, 6, 30, 16]. These RNN models leverage their structures to address the exploding and vanishing gradient problem. Our model also adopts a continuous-time ODE framework for time series data, and the proposed adaptive time stamp selection method can be viewed as effectively reducing the length of the discrete sequence when a significant part of the process is changing slowly. Meanwhile, our approach can also be used concurrently with methodologies such as that of [6]. As the focus of our work is to model "spike-like" time series data, the combination of our model and existing continuous-time RNN models can further improve efficiency when applied to such data.
Time Adaptivity. Previous studies have investigated the incorporation of time adaptivity in continuous-time RNNs, such as GACTRNN [13], TARN [16], and LEM [31]. In these works, time adaptivity was incorporated by multiplying the ODE with an adaptively learned time modulator, usually parametrized by another sub-network. In contrast, our method adaptively selects time steps during the preprocessing phase, where the selection process only utilizes the steepness of change of the time series data. Therefore, the proposed model does not involve the training of a sub-network for the time modulator as in the previous models, which may incur an increase in model size and additional computational costs.
## 2 Problem Setup
### Training Data and Prediction Task
Consider a random continuous time series \(x(t)\in\mathbb{R}^{D}\) over the time horizon \([0,T]\) for some \(T\in\mathbb{R}^{+}\). We observe multiple independent and identically distributed samples of the continuous process \(x(t)\), where each sample is sampled at discrete time stamps, which can vary across different samples. We split the observed sequences into training and testing sequences. From the training sequences, we generate a total of \(K^{(\text{Tr})}\) training windows, denoted as \(\{\mathbf{x}^{(\text{Tr},k)}\}_{k=1}^{K^{(\text{Tr})}}\), each of window length \(N\) as our training data (see the left panel of Figure 2 for an illustrative example). Here \(\mathbf{x}^{(\text{Tr},k)}\coloneqq\{x^{(\text{Tr},k)}(t_{1}^{(\text{Tr},k)}), \ldots,x^{(\text{Tr},k)}(t_{N}^{(\text{Tr},k)})\}\) and \(0<t_{1}^{(\text{Tr},k)}<\cdots<t_{N}^{(\text{Tr},k)}\leq T\) are the corresponding time stamps for the \(k\)-th training window. Similarly, we create \(K^{(\text{Te})}\) testing windows of length \(N\) from the testing sequences, denoted as \(\{\mathbf{x}^{(\text{Te},k)}\}_{k=1}^{K^{(\text{Te})}}\). To simplify the notation, we may drop the superscripts and write \(\{x(t_{1}),x(t_{2}),\ldots,x(t_{N})\}\) for a given training window if it does not cause confusion.
Our goal is to make predictions based on historical data. Given a historical series \(x(t_{1}),\ldots,x(t_{n})\), we aim to perform either one-step or multi-step predictions. The one-step prediction involves predicting \(x(t_{n+1})\) at a single future time \(t_{n+1}\) based on \(\{x(t_{1})\ldots,x(t_{n})\}\), while the \(m\)-step prediction includes forecasting \(\{x(t_{n+1}),\ldots,x(t_{n+m})\}\) at future times \(t_{n+1}<\cdots<t_{n+m}\). The detailed formulas for measuring the prediction accuracy are provided in Appendix B.1.
### 2.2 Training Objective
Given the time horizon \([0,T]\), recall that we have a collection of training windows with the \(k\)-th one denoted as \(\mathbf{x}^{(\mathrm{Tr},k)}=\{x^{(\mathrm{Tr},k)}(t_{1}^{(\mathrm{Tr},k)}),\ldots,x^ {(\mathrm{Tr},k)}(t_{N}^{(\mathrm{Tr},k)})\}\). Here, the time steps are allowed to be _heterogeneous_ for different training windows. We train the model parametrized by neural networks (see Section 3.1 for the neural ODE model adopted in this paper) with trainable parameters \(\Theta\) using the _mean-squared regression loss_ function
\[\mathcal{L}(\Theta;\{\mathbf{x}^{(\mathrm{Tr},k)}\}_{k=1}^{K^{(\mathrm{Tr})}})= \sum_{k=1}^{K^{(\mathrm{Tr})}}\sum_{i=1}^{N}\|\hat{x}^{(\mathrm{Tr},k)}(t_{i}^ {(\mathrm{Tr},k)})-x^{(\mathrm{Tr},k)}(t_{i}^{(\mathrm{Tr},k)})\|^{2}|t_{i}^{( \mathrm{Tr},k)}-t_{i-1}^{(\mathrm{Tr},k)}|, \tag{1}\]
where \(\Theta\) are the network parameters, \(\hat{x}^{(\mathrm{Tr},k)}(t)\) is the output of the neural ODE model under parameters \(\Theta\) conditioned on all past observations, and \(t_{0}^{(\mathrm{Tr},k)}\) is the added initial time stamp for each window.
The time difference term \(|t_{i}^{(\mathrm{Tr},k)}-t_{i-1}^{(\mathrm{Tr},k)}|\) ensures that the empirical mean-squared error loss (1) matches the \(\ell_{2}\) loss for function estimation. It balances the fitting errors among time intervals of different lengths and is therefore necessary for the proposed scheme with adaptive (non-uniform) time steps. In our numerical examples, we also performed an ablation study regarding this term to demonstrate its necessity; see Figure A9 in Appendix B.5 for an example.
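To make the role of this weighting concrete, below is a minimal PyTorch sketch of the loss (1) for a single window; the function name and tensor shapes are our own illustrative choices.

```
import torch

def weighted_mse_loss(x_hat: torch.Tensor, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Loss (1) for one window: squared errors weighted by |t_i - t_{i-1}|.

    x_hat, x: (batch, N, D) predictions and targets at the window's time stamps.
    t:        (N + 1,) time stamps, including the added initial stamp t_0.
    """
    dt = (t[1:] - t[:-1]).abs()           # (N,) interval lengths
    sq = ((x_hat - x) ** 2).sum(dim=-1)   # (batch, N) squared errors per stamp
    return (sq * dt).sum()
```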
## 3 Method
We state the neural ODE model used for the hidden dynamic in Section 3.1. The algorithm for adaptive steps is introduced in Section 3.2, and the computational complexity is explained in Section 3.3. More implementation details, such as evaluation metrics and choice of thresholds, are given in Appendix B.1.
### 3.1 Neural ODE for RNN model
To be able to model the observation \(x(t)\) as a function of a hidden value \(h(t)\), we follow the previous continuous-time RNN neural-ODE approach [1, 6] to model the hidden dynamics of \(h(t)\) as
\[h^{\prime}(t)=f(h(t),x(t);\theta_{h}), \tag{2}\]
where \(f\) is a neural network parameterized by \(\theta_{h}\). If one directly adopts the neural ODE model [2] to the hidden state \(h(t)\), the ODE model would be \(h^{\prime}(t)=f(h(t),t;\theta)\) without the observed time series data \(x(t)\). In contrast, the model (2) incorporates the observed incoming time series data \(x(t)\) as an input to \(f\), which is important for modeling the time series data especially when the underlying dynamics is non-stationary. The time evolution of the observed series \(x(t)\) is modeled by an output neural network \(g\) that maps the hidden value \(h(t)\) to \(x(t)\) as
\[\hat{x}(t)=g(h(t);\theta_{d}), \tag{3}\]
where \(g\) is called the output function parameterized by \(\theta_{d}\).
Given the neural network functions \(f\) and \(g\) (which generally can adopt any architecture) and the observed time series data \(x(t)\), from any initial input \(h(0)\), we can numerically solve the RNN neural ODE model (2) to obtain the \(h\) values at any time \(t\in(0,T)\) as \(h(t)=h(0)+\int_{0}^{t}f(h(s),x(s);\theta_{h})ds\), and then predict the value of \(x(t)\) by \(\hat{x}(t)=g(h(t);\theta_{d})\). The neural ODE integration can be solved
by existing first-order or higher-order schemes, and the back-propagation can be computed by the adjoint method [24, 2]. If one uses the forward Euler scheme, the discrete-time dynamic of \(h(t)\) (after incorporating the time step into the network function \(f\)) becomes \(h_{i+1}=h_{i}+f(h_{i},\theta_{i})\), which recovers the structure of Residual networks [28, 14]. In this work, we adopt the forward Euler scheme in experiments due to its better stability than higher-order schemes when the dynamic has steep changes. Our methodology of adaptive time grids can potentially be extended to higher-order differential schemes.
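For concreteness, the following is a minimal PyTorch sketch of the model (2)-(3) with a forward Euler solver. The one-layer parametrizations of \(f\) and \(g\), the zero initial hidden state, and feeding the last observation at each step are illustrative assumptions, not the exact implementation used in our experiments.

```
import torch
import torch.nn as nn

class RNNODE(nn.Module):
    """RNN neural ODE: h'(t) = f(h(t), x(t); theta_h), x_hat(t) = g(h(t); theta_d)."""

    def __init__(self, data_dim: int, hidden_dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(hidden_dim + data_dim, hidden_dim), nn.Tanh())
        self.g = nn.Linear(hidden_dim, data_dim)
        self.hidden_dim = hidden_dim

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, data_dim) observations at (possibly non-uniform) stamps t: (N,)
        h = x.new_zeros(x.shape[0], self.hidden_dim)
        preds = []
        for i in range(1, x.shape[1]):
            dt = t[i] - t[i - 1]
            # one forward Euler step of h'(t) = f(h, x), driven by the last observation
            h = h + dt * self.f(torch.cat([h, x[:, i - 1]], dim=-1))
            preds.append(self.g(h))  # prediction x_hat(t_i)
        return torch.stack(preds, dim=1)
```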
### 3.2 Adaptive Time Steps
We propose to learn a neural-ODE RNN model using adaptive time stamps, and thus the method is called _RNN-ODE-Adap_. The construction of adaptive time steps is summarized in Algorithm 1. The intuition behind the proposed algorithm is to assign longer (coarse) time intervals in regions where the time series varies slowly (such as "flat" curves), while assigning shorter (fine) time intervals in regions with "spikes" (highly non-stationary and fast-varying regimes). For constructing the adaptive time stamps, we assume the initial time grid is sufficiently fine and adopt a dyadic-partition-type algorithm, detailed as follows.
Consider a raw (discrete-time) training window \(x(t_{0}),x(t_{1}),\ldots,x(t_{N})\) sampled at the finest level of time stamps \(0\leq t_{0}<\cdots<t_{N}\leq T\). For simplicity, below we write it as \(x_{0},x_{1},x_{2},\ldots,x_{N}\). Without loss of generality, we assume \(N\) is a power of two. We first define a monitor function \(M(\cdot)\) that measures the variation of the sub-sequence \(\{x_{i},\ldots,x_{j}\}\), \(i<j\). In this paper, we mainly adopt the _maximum variation_ defined as
\[M(\{x_{i},\ldots,x_{j}\}):=\max_{i+1\leq k\leq j}\frac{\|x_{k}-x_{k-1}\|_{2}} {|t_{k}-t_{k-1}|}, \tag{4}\]
which captures the maximum variation among any two adjacent time stamps. Here we may also choose \(\ell_{p}\) norms for any \(p\geq 1\).
We then screen from the finest level of time grids and adaptively merge neighboring time grids if their maximum variation is below a pre-specified threshold \(\epsilon>0\). In detail, for the first level \(l=1\), we group the original \(N\) time intervals into \(N/2\) sub-intervals (as demonstrated in Figure 2) and each
Figure 2: Illustration of adaptive time steps resulting from Algorithm 1. In this example, \(N=8\) and \(L=2\); three samples are removed in phase \(l=1\), and one sample is removed in phase \(l=2\).
sub-interval contains three consecutive time stamps: \(\{x_{0},x_{1},x_{2}\},\{x_{2},x_{3},x_{4}\},\ldots,\{x_{N-2},x_{N-1},x_{N}\}\). Then we calculate the maximum variation \(M(x_{0},x_{1},x_{2}),\ldots,M(x_{N-2},x_{N-1},x_{N})\) for each sub-interval. We then merge the two consecutive time intervals into one, i.e., remove the middle time stamp \(x_{2n+1}\), for \(n=0,1,\ldots,N/2-1\), if
\[M(x_{2n},x_{2n+1},x_{2n+2})<\epsilon.\]
In other words, we only keep the time stamps on which the maximum variation exceeds \(\epsilon\).
The above selection procedure is repeated similarly for \(l=2,3,\ldots\) until a pre-specified maximum integer \(L\). The value \(L\) corresponds to the coarsest time interval. The procedure is detailed in Algorithm 1, in which we maintain a set \(\mathcal{D}\) that records which time stamps are to be removed. Meanwhile, we also keep a Flag vector in each round indicating whether the midpoint time stamp was removed in the _last_ round, and Flag\({}_{\texttt{new}}\) indicating whether the middle time stamp will be removed in the _current_ round. An element of the Flag vector equals \(0\) for an interval with slight variation (\(M(\cdot)\leq\epsilon\)) and \(1\) otherwise. The primary use of the Flag vector is that, for two consecutive intervals in round \(l^{\prime}\), we only merge the intervals if both of them are slow-varying (i.e., merged in a previous round \(l<l^{\prime}\)).
```
1:  Input: Data series \(\{x_{0},x_{1},x_{2},\ldots,x_{N}\}\); threshold \(\epsilon>0\); \(L\in\mathbb{Z}_{+}\).
2:  Initialize: \(\mathcal{D}=\emptyset\). A flag vector Flag = \(\{0,0,\ldots,0\}\) of length \(N\).
3:  for \(l=1\) to \(L\) do
4:    Define a new flag: Flag\({}_{\texttt{new}}=\{0,0,\ldots,0\}\) of length \(\lfloor N/2^{l}\rfloor\).
5:    for \(i=1\) to \(\lfloor N/2^{l}\rfloor\) do
6:      if \(\texttt{Flag}[2(i-1)+1]=\texttt{Flag}[2i]=0\) then
7:        Compute the monitoring function \(M(\{x_{2^{l}(i-1)},x_{2^{l}(i-1)+2^{l-1}},x_{2^{l}i}\})\).
8:        if \(M<\epsilon\) then
9:          \(\mathcal{D}=\mathcal{D}\cup\{2^{l}(i-1)+2^{l-1}\}\).
10:       else
11:         Mark Flag\({}_{\texttt{new}}[i]=1\).
12:       endif
13:     else
14:       Mark Flag\({}_{\texttt{new}}[i]=1\).
15:     endif
16:   endfor
17:   Update Flag = Flag\({}_{\texttt{new}}\).
18: endfor
19: Output: Indexes of removed time steps \(\mathcal{D}\).
```
**Algorithm 1** A dyadic algorithm for selecting adaptive time steps.
The output of Algorithm 1 is the set \(\mathcal{D}\) of time stamps to be removed. The model is trained on the remaining time steps only. We provide an illustration in Figure 2 of the algorithm for selecting adaptive time steps. From the final results in Figure 2, it can be seen that the output of the adaptive time steps uses longer time steps to model stationary periods (from \(i=0\) to \(4\) and from \(i=6\) to \(8\)), and uses _shorter_ time steps to model _spikes_ (from \(i=4\) to \(6\)) in the sequence.
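For reference, a NumPy sketch of Algorithm 1 follows; it translates the pseudocode to 0-based indexing and assumes, as above, that \(N\) is a power of two.

```
import numpy as np

def max_variation(x, t, i, mid, j):
    """Monitor function (4) over the three surviving stamps i < mid < j."""
    return max(np.linalg.norm(x[mid] - x[i]) / abs(t[mid] - t[i]),
               np.linalg.norm(x[j] - x[mid]) / abs(t[j] - t[mid]))

def adaptive_steps(x, t, eps, L):
    """Return the set D of indices of removed time stamps (Algorithm 1)."""
    N = len(x) - 1
    D, flag = set(), np.zeros(N, dtype=int)
    for l in range(1, L + 1):
        flag_new = np.zeros(N // 2 ** l, dtype=int)
        for i in range(1, N // 2 ** l + 1):
            # merge only if both child intervals were slow-varying in the last round
            if flag[2 * (i - 1)] == 0 and flag[2 * i - 1] == 0:
                a = 2 ** l * (i - 1)
                mid, b = a + 2 ** (l - 1), 2 ** l * i
                if max_variation(x, t, a, mid, b) < eps:
                    D.add(mid)           # slow-varying: drop the middle stamp
                else:
                    flag_new[i - 1] = 1  # fast-varying: keep it
            else:
                flag_new[i - 1] = 1
        flag = flag_new
    return D
```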
### 3.3 Computational Complexity
The computational complexity of applying Algorithm 1 in the preprocessing stage to \(K^{\rm(Tr)}\) training windows is \(O(K^{\rm(Tr)}ND)\), where \(D\) is the data dimension. For the neural ODE model described as in
(2)-(3), when \(f\) possesses the same network structure as a vanilla RNN with \(d_{h}\) hidden units and \(g\) is a one-layer fully connected network, the complexity in the training process is \(O(n_{e}K^{\rm(Tr)}\bar{N}_{a}d_{h}(d_{h}+D))\), where \(n_{e}\) and \(\bar{N}_{a}\) represent the number of training epochs and the average length of the adaptive windows, respectively.
Since the computational cost in the training process usually dominates that in the preprocessing step (which happens as long as \(n_{e}d_{h}\geq 2^{L}\)), the overall complexity of the _RNN-ODE-Adap_ model is \(O(n_{e}K^{\rm(Tr)}\bar{N}_{a}d_{h}(d_{h}+D))\). This is of the same order as the complexity of training a vanilla RNN (we refer to Appendix B.2 for the specific structure) with \(d_{h}\) hidden units in \(n_{e}\) epochs, using \(K^{\rm(Tr)}\) training windows with the same length \(\bar{N}_{a}\). Therefore, compared with the complexity when training with the original finest \(N\) time grids, the complexity associated with the adaptive method will be reduced by a factor of \(\bar{N}_{a}/N\). The smallest achievable complexity will be reduced by a factor of \(1/2^{L}\) when choosing a sufficiently large threshold \(\epsilon\).
## 4 Theory
In this section, we provide the recovery consistency of the training objective (Section 4.1) and an approximation error guarantee for the RNN-ODE model that reveals the benefit of adaptive step sizes (Section 4.2). All proofs can be found in Appendix A.
### 4.1 Function Estimation for Event-type Data
We present the theoretical analysis for function estimation based on the proposed model under counting-type time series. It is worth noting that counting-type time series represent a special class of continuous-time models, since they exemplify the _extreme_ case of "spike-like" data, where we have discontinuities from zero to one, as shown at the right end of Figure 1.
For event-type sequences, the raw data contain a list of event times \(0<t_{1}<t_{2}<\ldots<t_{n}<T\) on the time horizon \([0,T]\). Each time stamp is the time when an event happens. In practice, the estimation is performed on _discrete-time grids_. Define the counting process \(N(t):=\sum_{i=1}^{n}\mathbf{1}(t_{i}\leq t)\) as the total number of events that happened before time \(t\). We convert such continuous-time data into discrete observations by discretizing the time interval \([0,T]\) into \(M\) intervals of equal length \(\Delta t=T/M\), and then let \(x_{m}=N(m\Delta t)-N((m-1)\Delta t)\), \(m=0,1,\ldots,M\) (by convention \(x_{0}=0\)). When \(\Delta t\) is chosen sufficiently small, \(\{x_{m}\}\) becomes a Bernoulli process with \(x_{m}\in\{0,1\}\).
We consider temporal Hawkes processes [25], in which the values \(x_{m}\) are mostly zero under mild assumptions, corresponding to sparse "spikes". A temporal Hawkes process is characterized by its _conditional intensity_ function defined as
\[\lambda^{*}(t)=\lim_{\Delta\to 0}\Delta^{-1}\mathbb{E}[N(t+\Delta)-N(t)| \mathcal{F}_{t}],\]
where the filtration \(\mathcal{F}_{t}\) stands for the information available up to time \(t\). In the case of Hawkes processes, \(\lambda^{*}(t)=\mu+\alpha\int_{0}^{t}\phi(t-s)\,dN(s)\) is simply a linear function of the past jumps of the process, where \(\phi(\cdot)\) is the influence kernel. For example, under the special case of exponential kernels, the intensity function becomes \(\lambda^{*}(t)=\mu+\alpha\beta\int_{0}^{t}e^{-\beta(t-\tau)}dN(\tau)\).
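As an illustration of how such event data can be generated and discretized, below is a sketch based on Ogata's thinning for the exponential-kernel case; the parameter values are arbitrary examples (the branching ratio \(\alpha\) is kept below one for stability), not those used in our experiments.

```
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Ogata thinning for lambda*(t) = mu + alpha*beta*sum_i exp(-beta (t - t_i))."""
    events, t = [], 0.0
    while True:
        # between events the intensity decays, so its current value is an upper bound
        lam_bar = mu + alpha * beta * np.exp(-beta * (t - np.array(events))).sum()
        t += rng.exponential(1.0 / lam_bar)       # candidate event time
        if t >= T:
            break
        lam_t = mu + alpha * beta * np.exp(-beta * (t - np.array(events))).sum()
        if rng.uniform() * lam_bar <= lam_t:      # accept with probability lam_t / lam_bar
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(0)
ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, T=50.0, rng=rng)
# discretize into M = 500 bins of length Delta t = T / M, giving near-binary counts x_m
x, _ = np.histogram(ev, bins=np.linspace(0.0, 50.0, 501))
```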
The recovery consistency of the intensity function by minimizing the least-squares population loss is proved in Theorem 4.2 under a memory constraint. We parameterize the function by a neural-network (NN) based structure characterized as in (2)-(3). We define the prototypical network architecture below.
**Definition 4.1**.: _Define the function class NN-ODE\((d_{\rm out},L_{h},p_{h},L_{d},p_{d})\) as_
\[\begin{split}\text{NN-ODE}(d_{\rm out},L_{h},p_{h},L_{d},p_{d})&\coloneqq\{F:\mathbb{R}\mapsto\mathbb{R}^{d_{\rm out}}\,|\,F(t)=g(h(t)),\ h^{\prime}(t)=f(h(t),x(t)),\\ &\quad g\text{ is an NN with }L_{d}\text{ layers and max width }p_{d},\ h\text{ is an NN with }L_{h}\text{ layers and max width }p_{h}\}.\end{split} \tag{5}\]
**Theorem 4.2**.: _Assume there exist \(d\) buffer time steps with samples \(x_{-d},\ldots,x_{-1}\) prior to the Hawkes count data \(\{x_{0},\ldots,x_{M}\}\) and each time step has duration \(\Delta t=T/M\). We further assume NN-ODE\((d_{\mathrm{out}},L_{h},p_{h},L_{d},p_{d})\) is rich enough to model the true intensity function. Then the minimizer \(F^{*}\) to the population loss function_
\[\Psi(F)\coloneqq\sum_{m=1}^{M}\mathbb{E}\big{[}(x_{m}-F(m\Delta t)\Delta t)^{2 }|x_{m-d}\ldots x_{m-1}\big{]},\]
_optimized within the neural network class \(F\in\text{NN-ODE}(D,L_{h},p_{h},L_{d},p_{d})\), satisfies \(F^{*}(m\Delta t)=\tilde{\lambda}(m):=\frac{1}{\Delta t}\int_{(m-1)\Delta t}^{m \Delta t}\lambda^{*}(t)dt\), which is the discretized intensity._
_Remark 4.3_.: The recovered intensity function \(F^{*}(m\Delta t)=\tilde{\lambda}(m)\) is extendable to the entire time horizon as \(F^{*}(t)=F^{*}(m\Delta t)\mathbf{1}\{(m-1)\Delta t<t\leq m\Delta t\}\) for any \(t\in[0,T]\). In Appendix A.1 we extend the analysis to show that in the asymptotic scenario \(M\to\infty\), we have \(\int_{0}^{T}|F^{*}(t)-\lambda^{*}(t)|dt\to 0\).
The above argument is made under the population loss, showing that the least-squares loss function can recover the actual intensity function of the Hawkes process. This is mainly due to the generality of the ODE model (2)-(3), which is consistent with the Hawkes process and most time series models. The argument may be extended to the empirical loss by utilizing the empirical concentration of the process.
### 4.2 Approximation Analysis of RNN-ODE-Adap
For theoretical generality, in this subsection, we consider the continuous-time process \(y(t)\in\mathbb{R}^{D^{\prime}}\) satisfying
\[h^{\prime}(t)=f(h(t),x(t)),\quad y(t)=g(h(t)),\quad h(0)=h_{0},\quad t\in[0,T], \tag{6}\]
where \(x(t)\in\mathbb{R}^{D}\) is the observable input data and \(h(t)\in\mathbb{R}^{d_{h}}\) is the underlying hidden process from some initial value \(h_{0}\). Taking \(y(t)\) to be \(x(t)\) reduces the model to the case (2)-(3) considered in the other parts of this work. We provide two theorems: Theorem 4.6 proves the uniform approximation to \(y(t)\) by the continuous-time RNN-ODE model without time discretization; Theorem 4.9 further takes into account the discrete-time scheme and obtains the approximation on a time grid.
Approximation of the continuous-time model. We will use neural network functions \(f_{\theta}\) and \(g_{\phi}\) to approximate the functions \(f\) and \(g\), respectively; see Lemma 4.5. Given \(x(t)\) on \([0,T]\), let \(h_{\mathrm{NN}}(t)\) be the solution to the hidden-process ODE \(h^{\prime}_{\mathrm{NN}}(t)=f_{\theta}(h_{\mathrm{NN}}(t),x(t))\) from \(h_{\mathrm{NN}}(0)=h_{0}\). This leads to the output process \(y_{\mathrm{NN}}(t)\) defined by
\[h^{\prime}_{\mathrm{NN}}(t)=f_{\theta}(h_{\mathrm{NN}}(t),x(t)),\quad y_{ \mathrm{NN}}(t)=g_{\phi}(h_{\mathrm{NN}}(t)),\quad h_{\mathrm{NN}}(0)=h_{0}, \quad t\in[0,T]. \tag{7}\]
The approximation of \(y_{\mathrm{NN}}(t)\) to \(y(t)\) will be based on the approximation of \(f_{\theta}\) and \(g_{\phi}\), which calls for the regularity condition of the system (6). We take the following technical conditions.
**Assumption 4.4**.:
1. _The observed process_ \(x:[0,T]\to[-1,1]^{D}\) _is Lipschitz continuous over_ \(t\)_; the hidden process satisfies_ \(h:[0,T]\to[-1,1]^{d_{h}}\)_._
2. \(f:[-1.1,1.1]^{d_{h}}\times[-1,1]^{D}\to\mathbb{R}^{d_{h}},(\eta,x)\mapsto f( \eta,x)\)_, and is Lipschitz continuous with respect to both_ \(\eta\) _and_ \(x\)_._
3. \(g:[-1.1,1.1]^{d_{h}}\to[-1,1]^{D^{\prime}},\eta\mapsto g(\eta)\) _is Lipschitz continuous._
We let \(L_{g}\) denote the global Lipschitz constant of \(g\) on \([-1.1,1.1]^{d_{h}}\). For \(f\), both global and local Lipschitz constants on the domain \([-1.1,1.1]^{d_{h}}\times[-1,1]^{D}\) are used. More detailed definitions of these constants will be introduced in Lemma 4.5 (for the global constant) and Theorem 4.6 (for the local constant).
The next lemma directly follows by applying [36] to the case where \(f\) and \(g\) have first-order regularity (Lipschitz continuity). The proof is given in Appendix A.2.
**Lemma 4.5**.: _For any \(\epsilon_{f},\epsilon_{g}>0\), there exist neural networks \(f_{\theta},g_{\phi}\) such that_
\[\max_{\eta\in[-1.1,1.1]^{d_{h}},x\in[-1,1]^{D}}\left\|f(\eta,x)-f_{\theta}( \eta,x)\right\|_{2}<\epsilon_{f},\max_{\eta\in[-1.1,1.1]^{d_{h}}}\left\|g( \eta)-g_{\phi}(\eta)\right\|_{2}<\epsilon_{g}, \tag{8}\]
_and_
* \(f_{\theta}\) _has_ \(O(\ln\frac{C_{f}}{\epsilon_{f}}+\ln d_{h}+1)\) _layers and_ \(O((C_{f}/\epsilon_{f})^{d_{h}+D}(\ln\frac{C_{f}}{\epsilon_{f}}+\ln d_{h}+1))\) _trainable parameters._
* \(g_{\phi}\) _has_ \(O(\ln\frac{C_{g}}{\epsilon_{g}}+\ln D^{\prime}+1)\) _layers and_ \(O((C_{g}/\epsilon_{g})^{d_{h}}(\ln\frac{C_{g}}{\epsilon_{g}}+\ln D^{\prime}+1))\) _trainable parameters._
_The constants in big-\(O\) may depend on \(D,D^{\prime}\), and \(d_{h}\). Here \(C_{f}\coloneqq\max\{L^{f,h},L^{f,x},M_{f}\}\), where \(M_{f}=\sup_{(\eta,x)\in[-1.1,1.1]^{d_{h}}\times[-1,1]^{D}}\left\|f(\eta,x)\right\|\) and \(L^{f,h},L^{f,x}\) denote the Lipschitz constants of \(f\) on \([-1.1,1.1]^{d_{h}}\times[-1,1]^{D}\) (see formal definitions in (A1) in the proof of Lemma 4.5 in Appendix A.2). \(C_{g}\coloneqq\max\{L_{g},M_{g}\}\), and \(M_{g}=\sup_{\eta\in[-1.1,1.1]^{d_{h}}}\left\|g(\eta)\right\|\)._
For the spike-like data, the majority of the regions are slow-varying, with the spikes occupying only a minor part of the whole interval \([0,T]\). Thus, the whole interval \([0,T]\) may be partitioned into two disjoint sets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), each of which consisting of unions of disjoint intervals in \([0,T]\). To characterize this partition more precisely, we define the constants related to an interval in \([0,T]\) as follows:
* For an interval \([s,t]\subset[0,T]\), we define the domains \(B^{h},B^{x}\) as \[B^{h}:=(h([s,t])+B_{r}^{d_{h}})\subset[-1.1,1.1]^{d_{h}},\quad B^{x}:=(x([s,t])+B_{r}^{D})\cap[-1,1]^{D},\] (9) with \(r=0.1\), where \(B_{r}^{d_{h}},B_{r}^{D}\) denote balls of radius \(r\) in \(\mathbb{R}^{d_{h}},\mathbb{R}^{D}\) respectively (see Figure 3 for an illustration). Here, \(h([s,t])+B_{r}^{d_{h}}\) denotes the Minkowski sum, namely \(\{h_{1}+h_{2},h_{1}\in h([s,t]),h_{2}\in B_{r}^{d_{h}}\}\), and \(x([s,t])+B_{r}^{D}\) is defined in the same way. Then, we denote \[L_{[s,t]}^{f,h}\coloneqq\sup_{x\in B^{x}}\sup_{\eta_{1},\eta_{2}\in B^{h}}\frac{\left\|f(\eta_{1},x)-f(\eta_{2},x)\right\|}{\|\eta_{1}-\eta_{2}\|},\qquad L_{[s,t]}^{f,x}\coloneqq\sup_{\eta\in B^{h}}\sup_{x_{1},x_{2}\in B^{x}}\frac{\left\|f(\eta,x_{1})-f(\eta,x_{2})\right\|}{\|x_{1}-x_{2}\|},\] (10) as the local Lipschitz constants of \(f\) within the domain \(B^{h}\times B^{x}\), and \[M_{[s,t]}^{f}\coloneqq\sup_{(\eta,x)\in B^{h}\times B^{x}}\|f(\eta,x)\|_{2}.\] (11)
With the local Lipschitz constants defined as above, we suppose that any time grid \([s_{1},t_{1}]\) in \(\mathcal{D}_{1}\) corresponds to a local Lipschitz constant \(L_{[s_{1},t_{1}]}^{f,h}\leq L_{\mathrm{low}}\). In contrast, if \([s_{2},t_{2}]\) belongs to \(\mathcal{D}_{2}\), the local Lipschitz constant satisfies \(L_{\mathrm{low}}<L_{[s_{2},t_{2}]}^{f,h}\leq L_{\mathrm{high}}(\leq L^{f,h})\). Here, \(\mathcal{D}_{1}\) comprises regions with slow variations, while \(\mathcal{D}_{2}\) encompasses regions with sharp changes, as demonstrated in Figure 3. It may often be the case that \(|\mathcal{D}_{1}|\) is greater than \(|\mathcal{D}_{2}|\). Then, we define
\[L^{\rm(avg)}\coloneqq\frac{1}{T}(L_{\rm low}|\mathcal{D}_{1}|+L_{\rm high}|\mathcal{D}_{2}|). \tag{12}\]
Following Lemma 4.5 and the partition described above, Theorem 4.6 below provides the approximation results for the continuous-time process \(y(t)\) using (7).
**Theorem 4.6**.: _Under Assumption 4.4 and for \(L^{\rm(avg)}\) defined as in (12), suppose \(\epsilon_{f},\epsilon_{g}>0\) and \(\epsilon_{f}\) satisfies_
\[Te^{L^{\rm(avg)}T}\epsilon_{f}<0.1, \tag{13}\]
_and let \(f_{\theta},g_{\phi}\) be the neural networks satisfying (8) (the model complexity is bounded as in Lemma 4.5), then_
\[\max_{t\in[0,T]}\|y(t)-y_{\text{NN}}(t)\|<\epsilon_{g}+L_{g}Te^{L^{\rm(avg)}T} \epsilon_{f}. \tag{14}\]
_Remark 4.7_ (Interpretation of \(L^{\rm(avg)}\) and local Lipschitz constants).: (14) can provide an improved bound because, when the data have sharp changes, \(L_{\rm high}\) (as the \(\infty\)-norm of the Lipschitz constant over time) can be large while \(L^{\rm(avg)}=(L_{\rm low}|\mathcal{D}_{1}|+L_{\rm high}|\mathcal{D}_{2}|)/T\) (as a certain \(L^{1}\)-norm of the Lipschitz constant over time) may stay at a smaller value. The partition \(\mathcal{D}_{1}\cup\mathcal{D}_{2}\) reflects how adaptively choosing grids may help improve the theoretical results, and this will be further explored in the next subsection. Therein \(x(t)\) will only be observed at a discrete time grid, which can be adaptively chosen according to the local Lipschitz constants (see Theorem 4.9 for more details).
_Remark 4.8_ (Arbitrary desired accuracy in (14)).: For any \(\varepsilon>0\), we can choose
\[\epsilon_{f}<\frac{1}{T\exp(L^{\rm(avg)}T)}\min\{0.1,\frac{\varepsilon}{2L_{g }}\},\quad\epsilon_{g}<\frac{\varepsilon}{2},\]
then the right-hand side of (14) is bounded by \(\varepsilon\).
Approximation under time discretization. We assume that \(x(t)\) is only observed at discrete time grids \(\{t_{i}\}_{i=1}^{N}\) instead of on the whole interval \([0,T]\). Below, the time grids can be chosen adaptively, which will be detailed in Remark 4.10. Given the time grids \(\{t_{i}\}_{i=1}^{N}\), we define the following constants that will be used in the theorems later:
Figure 3: Demonstration of the domains \(B^{x}\) and \(B^{h}\) defined as in (9) for the time interval \([s,t]\) (here \(d_{h}=2,D=1\)). The domains \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), corresponding to slowly and rapidly varying regions, are colored in orange and blue respectively.
* By (A2), for each \(i\), let \(L_{i}^{f,h},L_{i}^{f,x}\) and \(M_{i}^{f}\) be defined as in (10) and (11) respectively, where we take the interval \([s,t]\) as \([t_{i-1},t_{i}]\).
* By (A1), for each \(i\), let \(L_{i}^{x}\) be the Lipschitz constant of \(x(t)\) on \(t\in[t_{i-1},t_{i}]\), for \(i=1,\ldots,N+1\) (we follow the convention that \(t_{0}=0,t_{N+1}=T\)).
For \(\Delta t_{i}\coloneqq t_{i}-t_{i-1}\) and \(\hat{h}_{\text{NN}}(0)=h_{0}\), the forward Euler scheme is applied on (7) as follows:
\[\hat{h}_{\text{NN}}(t_{i})=\hat{h}_{\text{NN}}(t_{i-1})+\Delta t_{i}f_{\theta} (\hat{h}_{\text{NN}}(t_{i-1}),x(t_{i-1})),\quad\hat{y}_{\text{NN}}(t_{i})=g_{ \phi}(\hat{h}_{\text{NN}}(t_{i})),\quad i=1,\ldots,N. \tag{15}\]
Compared to Theorem 4.6, Theorem 4.9 below additionally accounts for the discretization error from the numerical integration, providing an upper bound of the approximation error using \(x(t)\) observed at discrete time grids. Theorem 4.9 focuses on the forward Euler method, and we refer to Remark 4.11 for its extension to higher-order schemes.
**Theorem 4.9**.: _Under Assumption 4.4 and given a time grid \(\{t_{i}\}_{i=1}^{N}\) on \([0,T]\) at which \(x(t)\) is observed. Suppose \(\epsilon_{f},\epsilon_{g}>0\) and \(\epsilon_{f},\Delta t_{j}\) satisfy_
\[T\exp(\sum_{i=1}^{N}L_{i}^{f,h}\Delta t_{i})\left(\epsilon_{f}+\max_{j}\{\mu_{ j}\Delta t_{j}\}\right)<0.1, \tag{16}\]
_where_
\[\mu_{j}\coloneqq L_{j}^{f,h}M_{j}^{f}+L_{j}^{f,x}L_{j}^{x},\]
_and let \(f_{\theta},g_{\phi}\) be the neural networks satisfying (8) (the model complexity is bounded as in Lemma 4.5), then_
\[\max_{i}\|y(t_{i})-\hat{y}_{\text{NN}}(t_{i})\|\leq\epsilon_{g}+L_{g}T\exp( \sum_{i=1}^{N}L_{i}^{f,h}\Delta t_{i})\left(\epsilon_{f}+\max_{j}\{\mu_{j} \Delta t_{j}\}\right). \tag{17}\]
The condition (16) in Theorem 4.9 is imposed to guarantee that the numerically integrated hidden states \(\{\hat{h}_{\text{NN}}(t_{i})\}\) belong to \([-1.1,1.1]^{d_{h}}\), so that the approximation results in Lemma 4.5 are applicable.
_Remark 4.10_ (Arbitrary desired accuracy in (17)).: For any \(\varepsilon>0\), suppose the time grids satisfy that
\[\max_{j}\{\mu_{j}\Delta t_{j}\}<\frac{1}{T\exp(\sum_{i=1}^{N}L_{i}^{f,h}\Delta t_{i})}\min\{0.05,\ \frac{\varepsilon}{3L_{g}}\}, \tag{18}\]
then we can choose
\[\epsilon_{f}<\frac{1}{T\exp(\sum_{i=1}^{N}L_{i}^{f,h}\Delta t_{i})}\min\{0.05,\ \frac{\varepsilon}{3L_{g}}\},\quad\epsilon_{g}<\frac{\varepsilon}{3},\]
to make the right-hand side of (17) bounded by \(\varepsilon\).
_Remark 4.11_ (Extension to higher-order integration schemes).: The numerical integration scheme (15) can be extended to the multi-step explicit methods of higher orders (e.g. Runge-Kutta methods), given that the time grid selection appropriately fulfills the requirements of the integration scheme. For example, we may choose \(t_{i+1}-t_{i}=t_{i}-t_{i-1}\) for adjacent sub-intervals \([t_{i-1},t_{i}],[t_{i},t_{i+1}]\) to apply the commonly used RK4 method.
Theorem 4.9 provides insights into the utility of the adaptive steps for improving the model fitting performance, which is reflected in the last term, involving \(\max_{j}\{\mu_{j}\Delta t_{j}\}\), in Eq. (17). Specifically, time grids may be selected such that \(\Delta t_{i}\) is small if \(L_{i}^{x}\) is large, indicating a steep change in \(x(t)\) for \(t\in[t_{i-1},t_{i}]\). Conversely, when the variation in \(x(t)\) is smaller, we employ larger \(\Delta t_{i}\) to reduce the total number of required time grids.
## 5 Numerical Experiments
We validate the performance of the proposed method using three types of datasets: the simulated spiral series, the simulated event data, and a real ECG dataset. We present the complexity vs. accuracy tradeoff curves for different methods, and demonstrate the advantage of the _RNN-ODE-Adap_ method. The code can be found at [https://github.com/Yixuan-Tan/RNN_ODE_Adap](https://github.com/Yixuan-Tan/RNN_ODE_Adap).
### 5.1 Trained Models
In this section, we examine and report the performance of two models, _RNN-ODE_ and _RNN-ODE-Adap_. Both models are trained to minimize the loss function as in (1), and the difference lies in the choice of the time grid. Specifically, _RNN-ODE_ is trained using regular (non-adaptive) time steps, and _RNN-ODE-Adap_ is trained with time steps adaptively selected by Algorithm 1. The architecture of both models is the same as that of a vanilla RNN; see more details in Appendix B.2.
As explained in Section 3.3, the computational cost of training the models is proportional to the average length of the training windows. Hence, in this section, when we compare the performance under varying complexities, the "complexities" are discussed in terms of the averaged "numbers of grids" of the training data. More details of the experiment settings are in Appendix B, with boxplots of the errors and additional results provided in Appendix B.5.
### 5.2 Simulated Spiral Data
We first investigate the capability of our method to fit and capture the underlying dynamics of the simulated spiral data. For a given matrix \(A\in\mathbb{R}^{2\times 2}\), one spiral is generated by integrating the ODE
\[x^{\prime}(t)=f(x(t))=\|x(t)\|^{-2}Ax(t), \tag{19}\]
over the time span \([0,T]\), with the initial value \(x(0)=x_{0}\in\mathbb{R}^{2}\). The initial training and testing windows are of length \(N=64\), corresponding to the largest complexity shown in Figure 4.
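A sketch of this data generation by forward Euler integration is shown below; the specific matrix \(A\), initial value, and step count are illustrative choices.

```
import numpy as np

def make_spiral(A, x0, T, n_steps):
    """Integrate x'(t) = ||x(t)||^{-2} A x(t) (Eq. 19) with forward Euler."""
    dt = T / n_steps
    x = np.zeros((n_steps + 1, 2))
    x[0] = x0
    for i in range(n_steps):
        x[i + 1] = x[i] + dt * (A @ x[i]) / np.dot(x[i], x[i])
    return x

A = np.array([[-0.1, 2.0], [-2.0, -0.1]])  # rotation with slow decay -> spiral
traj = make_spiral(A, x0=np.array([2.0, 0.0]), T=10.0, n_steps=640)
```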
We first compare the on-sample prediction performance with RNN and LSTM [15] under different computational complexities. For each testing window, we use the first half as available historical
Figure 4: Comparison of the MSE prediction errors on the simulated spiral data generated from Eq. (19) for RNN, LSTM, RNN-ODE, RNN-ODE-Adap. The \(x\)-axis represents the average length of the training windows, which reflects the complexity of the models (see Section 3.3).
data and perform predictions for the second half. Figure 4 shows the averaged Mean Squared Errors (MSEs) computed as in Eq. (A6) for the models, with varying complexities. From the trade-off curves in Figure 4, we observe that when the training cost is relatively low, namely the training windows are short, all the models underfit and do not learn the dynamics well, resulting in poor prediction performance. When the training cost increases and the training windows consist of more time steps, the models overfit, and prediction accuracy worsens.
Moreover, we observe from Figure 4 that for the same complexity, RNN-ODE significantly improves the forecasting performance compared to the vanilla RNN, especially when the number of grids is not too small so that the models begin to learn the dynamics well. Additionally, RNN-ODE-Adap further achieves smaller prediction errors than RNN-ODE since it selects data points more informatively with the same number of grids. Finally, we note that while LSTM performs best in most cases, it possesses a more complex network structure. We refer to Appendix B.5 for additional results about the LSTM and Lipschitz-RNN [6] variants of the adaptive model.
Figure 5 shows examples of spiral reconstructions using about 30% of the data. The time steps are either regular, obtained by interpolation (the upper half of Figure 5), or irregular, chosen adaptively by Algorithm 1 (the lower half of Figure 5). We can see that there are mismatches between the shapes reconstructed by RNN and LSTM and the ground-truth spiral shape. In contrast, RNN-ODE and RNN-ODE-Adap are consistent with the underlying spirals.
### 5.3 Simulated Point-Process Data
We further apply our method to a simulated example of event-time data generated from temporal Hawkes processes [25] as described in Section 4. We train the model (2)-(3) and estimate the true intensity function \(\lambda(t)\) from the model output \(\hat{x}(t)\). The mean squared loss \(L=\int_{0}^{T}(dN(t)/dt-\lambda(t))^{2}dt\) is used when fitting the neural ODE model.
The fitting errors of the four models versus the complexity are shown below in Figure 6 (left). It can be observed that for all the models, the fitting errors decrease as the training complexity
Figure 5: Comparison of RNN, LSTM, RNN-ODE, RNN-ODE-Adap on the reconstruction of simulated spiral data generated from Eq. (19), using regular (upper) v.s. irregular (lower) time series.
increases. Furthermore, RNN-ODE-Adap achieves the best fitting performance. The right panel shows the log-log plot of RNN-ODE and RNN-ODE-Adap, from which we can see more clearly that for fixed model complexity (network structure), the proposed model approaches the true intensity function.
Figure 6 (right) shows two examples of fitting performance. In this example, all models use 33 grids on average; the complexity is thus 50% of the largest one. It can be observed that RNN fails to capture the smooth decay of the kernel. Furthermore, we can see that with its more refined, adaptively chosen time steps, RNN-ODE-Adap can learn the dynamics of the intensity function much better: it estimates the "jump" in the intensity accurately.
### 5.4 Real Data: ECG Time Series
We validate the proposed RNN-ODE-Adap on one public electrocardiography (ECG) dataset, PTB-XL [34, 9]. We focus on learning the underlying dynamics of ECG signals using the RNN-ODE model and use adaptive time steps for "spikes" in the data series. We remark that windows at the highest sampling rate are chosen to have \(N=96\) time grids, which usually contain two cycles. In this way, the prediction of the second half given the first half is more meaningful.
Figure 7 (left) shows the on-sample prediction MSEs of the four methods for two different prediction lengths, 24 and 48, which are 1/4 and 1/2 of the whole window. Here, the prediction is performed on the original finest grids by integrating the ODE function. It can be seen that RNN-ODE has smaller prediction errors than RNN on average, and adding adaptive steps helps achieve slightly better performance. Furthermore, LSTM still achieves the smallest error most of the time; the reason for this, as discussed in Section 5.2, may be its more complex network structure.
Figure 7 (right) and Figure A11 in Appendix B.5 present examples of prediction on the testing windows for prediction lengths 48 and 24, respectively. These examples demonstrate that RNN-ODE-Adap models capture the cycles and trends of the ECG more effectively than RNN. The good performance suggests that the proposed algorithm can fit and predict ECG-type signals well, pointing to potential future applications of RNN-ODE-Adap in real healthcare problems.
Figure 6: Left: Comparison of the fitting errors of the underlying intensity function of the simulated event-type data generated from the Hawkes process for RNN, LSTM, RNN-ODE, RNN-ODE-Adap. \(x\)-axis represents computational complexity, \(y\)-axis is the fitting error computed as in Eq. (A7). Right: Examples of fitted intensity function of the simulated event times data generated from the Hawkes process using RNN, LSTM, RNN-ODE, RNN-ODE-Adap.
## 6 Discussion
In this paper we propose a general framework for constructing adaptive time steps when using a neural ODE that combines the observed data to model time series. We demonstrate that it tends to be more efficient for modeling "spike-like" time series. The proposed algorithm for adaptive time steps is widely applicable to other types of models, not limited to neural ODE and RNN models. Moreover, the selection of adaptive time steps can be generalized to a broad class of non-stationary time series with different kinds of non-stationarities.
This highlights the potential for further research in this field, which may be approached from several angles. Firstly, more theoretical analysis of the proposed RNN-ODE-Adap framework is needed under a broad spectrum of non-stationary time series, ranging from continuous to discontinuous data sequences. Additionally, it would be worthwhile to investigate more flexible adaptive schemes that could freely add intermediate steps and merge time grids without being restricted to dyadic partitioning.
## Acknowledgement
Y.T. and X.C. are partially supported by Simons Foundation (ID 814643) and NSF (DMS-2007040).
|
2310.12688 | Compression of Recurrent Neural Networks using Matrix Factorization | Compressing neural networks is a key step when deploying models for real-time
or embedded applications. Factorizing the model's matrices using low-rank
approximations is a promising method for achieving compression. While it is
possible to set the rank before training, this approach is neither flexible nor
optimal. In this work, we propose a post-training rank-selection method called
Rank-Tuning that selects a different rank for each matrix. Used in combination
with training adaptations, our method achieves high compression rates with no
or little performance degradation. Our numerical experiments on signal
processing tasks show that we can compress recurrent neural networks up to 14x
with at most 1.4% relative performance reduction. | Lucas Maison, Hélion du Mas des Bourboux, Thomas Courtat | 2023-10-19T12:35:30Z | http://arxiv.org/abs/2310.12688v1 | # Compression of Recurrent Neural Networks using Matrix Factorization
###### Abstract
Compressing neural networks is a key step when deploying models for real-time or embedded applications. Factorizing the model's matrices using low-rank approximations is a promising method for achieving compression. While it is possible to set the rank before training, this approach is neither flexible nor optimal. In this work, we propose a post-training rank-selection method called Rank-Tuning that selects a different rank for each matrix. Used in combination with training adaptations, our method achieves high compression rates with no or little performance degradation. Our numerical experiments on signal processing tasks show that we can compress recurrent neural networks up to \(14\times\) with at most \(1.4\%\) relative performance reduction.1
Footnote 1: Code and models are publicly available at [https://github.com/Deathekirl/low-rank-approximation](https://github.com/Deathekirl/low-rank-approximation), together with instructions for downloading the datasets.
_Keywords_ -- model compression, low-rank approximation, singular value decomposition, rank-tuning, recurrent neural networks
## 1 Introduction and Related Work
We observe a clear trend toward ever-larger neural networks [1]. New architectures using hundreds of millions or even billions of parameters are elaborated, leading to state-of-the-art performance but also to increasing difficulties: slow inference, increased storage demand, and growing electricity consumption and pollution. These issues inhibit the deployment of high-performance neural network models in real-time and embedded applications.
Several general strategies for reducing networks' complexity have been explored with success, including distillation [2], pruning [3], quantization [4], low-rank approximations [5]. The interested reader can refer to [6] for a survey of network compression methods. In this work we focus on low-rank approximations which are a promising technique not yet included in deep learning libraries at the time of writing.
There exist several methods for approximating matrices, for instance the Low-Rank (LR) decomposition [5] where Singular Value Decomposition (SVD) is used to factorize matrices, reducing the number of operations to carry out for the matrix product. Another example is the Tensor-Train (TT) decomposition [7] where a matrix is represented as a product of several tensors.
The LR decomposition can be exploited in a number of ways. The most common approach is to learn the weights of pre-factorized matrices by fixing their rank before training [5, 7, 8, 9, 10]. However, this strategy requires a costly2 search for the optimal ranks, or a priori knowledge of them. Moreover, it is not very adaptable since ranks cannot be changed after training.
Footnote 2: It is necessary to re-train the model from scratch for each combination of ranks.
Another approach consists in learning whole matrices and factorizing them after training. There exist several training adaptations that can greatly enhance the quality of the decomposition. In [11, 12, 13], the authors apply a regularization based on matrix norms (see section 2.2.1). Trained Rank Pruning (TRP, [11]) combines regularization and training-time factorization (see section 2.2.2) and uses _energy_ as a criterion for choosing ranks. The Learning-Compression (LC, [14]) algorithm enables learning weights and ranks alternately.
These methods have various application areas. One can factorize deep neural networks for vision tasks [5, 9] or ASR tasks [10]. In [11, 12, 13, 5, 14], authors factorize convolutional networks (LeNet, VGG, ResNet) evaluated on image recognition tasks (MNIST, CIFAR, ImageNet, etc.). In [7, 8], authors factorize recurrent networks (RNN, LSTM) evaluated on ASR (Automatic Speech Recognition) or linguistic modeling tasks.
Regularization and training-time factorization have been used in [11] to compress convolutional networks. Recurrent networks have also been compressed using ranks fixed prior to training [8].
The present paper aims at compressing recurrent neural networks through training-time factorization and regularization. To the best of our knowledge, it is the first work implementing both of these strategies while also selecting the best ranks post-training.
The rest of this article is organized as follows. In section 2 we first introduce the mathematical background of our work, then present the two training adaptation methods we used, and finally describe post-training rank selection strategies. In section 3 we present the different evaluation tasks and our model's architecture. We present our results and discuss our findings in section 4. Section 5 concludes this study.
## 2 Low-Rank Approximations for Neural Networks
This section introduces the key concept of SVD. Then we present two techniques used at training time to obtain matrices of low-rank and achieve better compression. Finally, several post-training rank selection strategies are presented.
### 2.1 Using SVD to factorize a model
A neural network \(\mathcal{M}\) with parameters \(\theta_{\mathcal{M}}\) is composed of an ensemble of \(M\) weight matrices and bias vectors. For instance, a linear layer uses a matrix \(W\) and a vector \(b\) to perform the operation \(y=Wx+b\). The same reasoning applies to more complex units like a GRU (Gated Recurrent Unit [15]) or an attention layer used in the Transformer [16] architecture, which are decomposable into simple matrix-vector operations. The size of the vectors being negligible compared to the size of the matrices, we focus on compressing the latter.
Consider a weight matrix \(W\in M(n,m)\) and a data matrix \(A\in M(m,k)\). It costs \(nm\) memory units to store \(W\), and \(nmk\) operations to compute \(W\times A\). It is possible to decompose \(W\) as a product of three matrices
\[W=U\Sigma V^{T}=U\mathrm{diag}(\sigma_{1},...,\sigma_{m})V^{T}, \tag{1}\]
with \(U\in M(n,n)\), \(\Sigma\in M(n,m)\) and \(V\in M(m,m)\). Note that \(\Sigma\) contains the _singular values_ of \(W\), sorted in descending order. According to the Eckart-Young theorem [17], this is an accurate decomposition that always exists.
However, it is also possible to approximate this factorization if we truncate the \(\Sigma\) matrix, keeping only its largest values. The factorization is said to be of _rank_\(r\) if we keep the \(r\) first singular values and drop the rest. In this case we have
\[W\simeq\hat{W}=U_{r}\Sigma_{r}V_{r}^{T}=(U_{r}\Sigma_{r})V_{r}^{T}=\hat{U}_{r} V_{r}^{T}, \tag{2}\]
with \(r\leq\min(n,m)\) and \(U_{r}\), \(\Sigma_{r}\), \(V_{r}^{T}=U[:,:r]\), \(\Sigma[:r]\), \(V^{T}[:r,:]\). As we increase the rank \(r\), the quality of the approximation improves [17], as illustrated by the formula
\[\left\|W-\hat{U}_{r}V_{r}^{T}\right\|_{F}=\sqrt{\sum_{i=r+1}^{m}\sigma_{i}^{2 }}. \tag{3}\]
The storage cost of \(\hat{W}\) is \(r(n+m)\), and the matrix multiplication \(W\times A\) becomes
\[WA\simeq\hat{W}A=\hat{U}_{r}V_{r}^{T}A=\hat{U}_{r}(V_{r}^{T}A), \tag{4}\]
which costs \(r(n+m)k\) operations. In the end, we obtain both a memory and complexity gain if the following condition is met:
\[r<\frac{nm}{n+m}. \tag{5}\]
It is therefore possible to both compress the model and accelerate its inference by decomposing the weight matrices using low ranks.3 The lower the ranks, the better the compression. However, this can degrade accuracy if the matrices are approximated too aggressively. We will explore this trade-off in detail in section 4.3.
Footnote 3: In theory, there is a speed gain as soon as equation 5 is verified. In practice, the gain depends on the compute library implementation.
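The following NumPy snippet illustrates equations (2)-(5) on a random matrix; the sizes and rank are arbitrary examples.

```
import numpy as np

n, m, r = 300, 200, 40
W = np.random.randn(n, m)                 # stands for any trained weight matrix
U, S, Vt = np.linalg.svd(W, full_matrices=False)

U_hat = U[:, :r] * S[:r]                  # \hat{U}_r = U_r Sigma_r, shape (n, r)
V_r = Vt[:r, :]                           # V_r^T, shape (r, m)

# approximation error matches Eq. (3): only the discarded singular values remain
err = np.linalg.norm(W - U_hat @ V_r, ord="fro")
assert np.isclose(err, np.sqrt((S[r:] ** 2).sum()))

# compression condition of Eq. (5): storage nm versus r(n + m)
print(r < n * m / (n + m), n * m, r * (n + m))
```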
### 2.2 Training adaptations for neural networks compression
SVD can be leveraged to compress any model, including those already trained. However, the compression rate may be poor if no adaptation has been used during training, as we will see in section 4. Here, we present two complementary ways of adapting the training which will increase the compression potential of the model without hurting much its performance.
#### 2.2.1 Nuclear Regularization
The rank of a matrix \(W\) is
\[\text{rank}(W)=\sum_{i=1}^{m}\mathbf{1}_{\sigma_{i}>0}. \tag{6}\]
Minimizing the rank of \(W\) is therefore equivalent to maximizing the number of null singular values. Note that if some singular values are sufficiently close to zero, setting them to zero will reduce the rank of the matrix without changing much its action on vectors. It follows that diminishing the singular values during training will reduce the rank of the matrices, making it easier to compress the model.
One method to do so is to add regularization to the loss function, effectively constraining the rank of the matrices during training. In this work we use the _nuclear norm4_ as a regularizer:
Footnote 4: Also known as _trace norm_ or _Schatten 1-norm_.
\[\left\|W\right\|_{*}=\sum_{i=1}^{m}\sigma_{i}. \tag{7}\]
The loss at epoch \(t\) then becomes
\[\mathscr{L}=\sum_{(X,y)}\mathscr{L}_{task}(\mathcal{M}(X,\theta_{\mathcal{M} }),y)+\lambda(t)\sum_{W\in\theta_{\mathcal{M}}}\left\|W\right\|_{*}, \tag{8}\]
where the sum is evaluated over all of the network's weight matrices.5 \(\mathscr{L}_{task}\) is the task's objective loss (e.g. cross-entropy for classification tasks and Mean Squared Error (MSE) for regression tasks), and \(\lambda(t)\) is an ad-hoc time-dependent function defined as
Footnote 5: However, we decided to only take into account the GRU matrices, since they represent more than 99% of the network's weights.
\[\lambda(t)=\bar{\lambda}\times\begin{cases}0&\text{if }t<T_{1}\\ \frac{t-T_{1}}{T_{2}-T_{1}}&\text{if }T_{1}\leq t<T_{2}\\ 1&\text{otherwise}\end{cases}, \tag{9}\]
where \(\bar{\lambda}\) is the final weight of the regularization and \(T_{1}\) and \(T_{2}\) are two pivot epoch numbers. This function allows for a progressive addition of the regularization during training. Indeed, we find that a model cannot learn if regularization is added from the first epoch. We believe that it needs some time to adjust its weights (which are randomly initialized), after which we can add regularization.
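A possible PyTorch implementation of the regularized loss (8) with the ramp (9) is sketched below; the helper names are ours, and restricting the penalty to the GRU matrices (as we do in practice) would amount to filtering on parameter names.

```
import torch

def reg_weight(t, lam_bar, T1, T2):
    """Ramp lambda(t) of equation (9)."""
    if t < T1:
        return 0.0
    if t < T2:
        return lam_bar * (t - T1) / (T2 - T1)
    return lam_bar

def total_loss(task_loss, model, t, lam_bar, T1, T2):
    """Task loss plus nuclear-norm penalty over all 2-D weight matrices, as in (8)."""
    nuc = sum(torch.linalg.svdvals(W).sum()          # differentiable nuclear norm
              for W in model.parameters() if W.dim() == 2)
    return task_loss + reg_weight(t, lam_bar, T1, T2) * nuc
```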
#### 2.2.2 Hard Low-Rank Approximation
Another method for reducing the ranks during training consists in factorizing the matrices every \(N\) epochs with target rank \(R\) (see algorithm 1 for details). We call this method _Hard Low-Rank Approximation_ (HLRA) because it imposes a brutal constraint on the network, effectively reducing the ranks at the cost of performance (i.e. the optimized metric). During training, the model eventually regains the lost performance between the factorization steps. In our experiments, we obtain a model with low-rank matrices with no or little performance loss.
```
0: Target rank \(R\), period \(N\)
if \(t\equiv 0\mod N\) then
  for \(W_{i}\in\theta_{\mathcal{M}}\) do
    \(U,\Sigma,V^{T}=\text{SVD}(W_{i})\)
    \(U_{R},\Sigma_{R},V_{R}^{T}=U[:,:R],\Sigma[:R],V^{T}[:R,:]\)
    \(\hat{W_{i}}=U_{R}\times\Sigma_{R}\times V_{R}^{T}\)
  endfor
  \(\theta_{\mathcal{M}}\leftarrow\{\hat{W_{1}},...,\hat{W_{M}}\}\)
endif
```
**Algorithm 1** HLRA method
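In practice, one HLRA step can be written in a few lines of PyTorch, as sketched below; for simplicity the projection is applied to every 2-D parameter rather than to the GRU matrices only.

```
import torch

@torch.no_grad()
def hlra_step(model, R):
    """Project every 2-D weight matrix onto rank R (one step of the HLRA method)."""
    for W in model.parameters():
        if W.dim() == 2 and R < min(W.shape):
            U, S, Vt = torch.linalg.svd(W, full_matrices=False)
            W.copy_(U[:, :R] @ torch.diag(S[:R]) @ Vt[:R, :])

# inside the training loop:
# if epoch % N == 0:
#     hlra_step(model, R)
```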
### 2.3 Post-training rank selection strategies
After training, the next step is to factorize each matrix to its optimal rank6 with respect to the model's performance (evaluated on the validation split). The
optimal rank of each weight matrix is unknown. We could brute-force search the ranks, but this is intractable in practice. It is therefore necessary to use heuristics.
#### 2.3.1 Simple strategies
A naive approach for choosing ranks consists in setting all matrices to the same rank \(R_{0}\) and then evaluating the model's performance.7 We can choose the smallest \(R_{0}\) such that the performance stays greater than some threshold.
Footnote 7: Since matrices differ in size, some ranks may be too large for some of them. It is for instance impossible to decompose a \((300,200)\)-matrix at rank \(250\), since the rank cannot exceed the smallest dimension. In this case, the matrix is not decomposed.
This strategy, however, is likely suboptimal, since some matrices in the network are harder to factorize than others. A small number of matrices will be responsible for a large part of the performance degradation when \(R_{0}\) is low. One can easily spot these in figure 1: the matrices of the deepest layers have slowly decreasing singular values. When ignoring hard-to-factorize matrices, we can obtain better compression results. This second approach is said to be _adaptive_ since, for a given rank, matrices are factorized only if the associated decomposition error \(\left\|W-\hat{U}_{r}V_{r}^{T}\right\|_{F}\) is less than a given threshold, effectively filtering out hard-to-factorize matrices.
#### 2.3.2 Rank-Tuning
We introduce Rank-Tuning, a more sophisticated method for selecting the ranks, described in algorithm 2. We factorize each matrix \(W_{i}\) at rank \(r_{i}\) while leaving all the others unfactorized. We then run the model with the compressed matrix and evaluate its performance \(p\) on the validation dataset. If \(p\) is within \(\Delta_{p}\) of the topline performance \(p^{*}\), we stop and move on to the next matrix. Otherwise, we keep increasing \(r_{i}\) (thereby diminishing the approximation error) until the condition is met. Finally, when all the ranks \(\hat{r_{1}},...,\hat{r_{M}}\) have been found, we factorize each matrix \(W_{i}\) at rank \(\hat{r_{i}}\) to obtain the compressed network.
```
0: Topline \(p^{*}\), degradation tolerance \(\Delta_{p}\)
for \(W_{i}\in\theta_{\mathcal{M}}\) do
  \(n,m=\text{shape}(W_{i})\)
  \(U,\Sigma,V^{T}=\text{SVD}(W_{i})\)
  for \(1\leq r<\frac{nm}{n+m}\) do
    \(U_{r},\Sigma_{r},V_{r}^{T}=U[:,:r],\Sigma[:r],V^{T}[:r,:]\)
    \(\hat{W_{i}}=U_{r}\times\Sigma_{r}\times V_{r}^{T}\)
    \(\hat{\theta}_{\mathcal{M}}\leftarrow\{W_{1},...,\hat{W_{i}},...,W_{M}\}\)
    \(p=\text{performance}(\mathcal{M},\hat{\theta}_{\mathcal{M}})\)
    if \(p>p^{*}-\Delta_{p}\) then
      \(\hat{r_{i}}\gets r\)
      break
    endif
  endfor
endfor
return \(\hat{r_{1}},...,\hat{r_{M}}\)
```
**Algorithm 2** Rank-Tuning
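A compact Python sketch of Algorithm 2 follows; the `evaluate` callback, which runs inference on the validation set with the trial weights and returns a performance score (higher is better), is assumed to be supplied by the user.

```
import numpy as np

def rank_tuning(weights, evaluate, p_star, delta_p):
    """Greedy per-matrix rank selection (Algorithm 2).

    weights:  list of 2-D NumPy arrays (trained weight matrices)
    evaluate: callable(list of matrices) -> validation performance
    """
    best_ranks = []
    for i, W in enumerate(weights):
        n, m = W.shape
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        chosen = min(n, m)  # fallback: full rank, i.e. no compression
        for r in range(1, int(np.ceil(n * m / (n + m)))):
            W_hat = (U[:, :r] * S[:r]) @ Vt[:r, :]
            trial = [W_hat if j == i else Wj for j, Wj in enumerate(weights)]
            if evaluate(trial) > p_star - delta_p:
                chosen = r
                break
        best_ranks.append(chosen)
    return best_ranks
```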
#### 2.3.3 Complexity considerations
The exact computational cost of the procedures defined in sections 2.3.1 and 2.3.2 is quite difficult to evaluate, for several reasons:
* they act on matrices of various shapes;
* they rely on highly optimized operations (SVD, matrix operations) which are implemented by linear algebra libraries and whose complexities are not naive.
However, we remark that the network's inference is the most costly operation performed. To get an idea of the computational cost of selecting the ranks, we can simply count the number of times we run inference on the validation set.
When using the simple strategies, we run one inference for each possible value of \(R_{0}\). The number of inferences is of the order of \(\mathcal{O}(\underset{n,m=\text{shape}(W_{i})}{\max}(\min(n,m)))\). In our setting, this simplifies to \(\mathcal{O}(h)\), where \(h\) is the hidden size of the GRUs. See section 3.2 for more details on our models' architecture.
When using Rank-Tuning, we run one inference for each possible rank for each matrix. The number of inferences is of the order of \(\mathcal{O}(\underset{n,m=\text{shape}(W_{i})}{\sum}\frac{nm}{n+m})\) which simplifies to \(\mathcal{O}(M\times h)\). The cost of running Rank-Tuning is noticeably higher than those of naive strategies, but it is likely to lead to better compression rates. Moreover, it is still far more efficient than a brute-force search, which would require \(\mathcal{O}(h^{M})\) inferences to run.
## 3 Experimental Setup
In this section we first introduce the three tasks on public datasets we used for evaluating our method. We also describe the architecture of our model.
### 3.1 Introducing the tasks
**SequentialMNIST** is a toy dataset based on the MNIST database (Modified National Institute of Standards and Technology database [18]). It consists of a sequential representation of MNIST images: each \(28\times 28\) image is flattened, leading to a sequence of length \(784\), which is then divided into two sequences of equal length, representing the top and the bottom of the image.
Given the first sequence (top), the task is to reconstruct the second (bottom). This is a sequence-to-sequence problem operating on vectors of size \(392\). We use the original train/test splits, and \(20\%\) of the training set for validation during training. The MSE between the output sequence and the target sequence is used as the learning loss and as the performance metric at test time.
**DOCC10** (Dyni Odontocete Click Classification, 10 species [19]) is a dataset of marine mammals echolocation clicks. Each sample is a sequence of length \(8192\) centered on a _click_ of interest. The task is to classify each sample in one of the \(10\) classes of marine mammals. The dataset is balanced and composed of \(134\,080\) samples, with the train/test splits available on the online challenge website8. We use \(20\%\) of the training set for validation during training. Cross-Entropy is used as the learning loss, while the test performance metric is classification accuracy.
Footnote 8: [https://challengedata.ens.fr/participants/challenges/32/](https://challengedata.ens.fr/participants/challenges/32/)
**AugMod** [20], which stands for _Augmented Modulation_, is a synthetic dataset of radio signals. It is publicly available.10 The authors used seven different linear modulations and five bins of SNR (Signal to Noise Ratio). They generated \(5000\) examples per (SNR, modulation) pair, leading to a total of \(175\,000\) examples, each of sequence length \(1024\). The task is to classify each sample into one of the seven modulation classes. We use \(10\%\) of the dataset for validation and \(10\%\) for testing. Cross-Entropy is used as the learning loss, while the test performance metric is classification accuracy.
Footnote 10: [https://www.kaggle.com/datasets/hdumasde/pythgoremodreco](https://www.kaggle.com/datasets/hdumasde/pythgoremodreco)
The hyperparameters used for each of these tasks are grouped in Table 1. Note that for AugMod, we trained two models of different capacity, labelled _AugMod - Large_ and _AugMod - Small_. We do not apply regularization to the small version of the model, as this results in a larger accuracy loss.
### 3.2 Model's architecture
Our model consists of an \(l\)-layered GRU with hidden size \(h\) and dropout applied after each layer, followed by a ReLU (Rectified Linear Unit [21]) activation. At the end, a max pooling layer collapses the time dimension, producing a one-dimensional vector which is finally fed to a fully connected layer. Applying a pooling layer over the time dimension has the advantage of making the model invariant to the number of samples in the input signal [20]. Table 2 summarizes the hyperparameters characterizing the architectures of the models.
Note that for the SequentialMNIST task the model is slightly different. We remove the pooling layer, as a result of which we obtain a sequence as output.
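A PyTorch sketch of this architecture is given below; the dropout rate is an illustrative value, and nn.GRU applies dropout between stacked layers, which approximates the per-layer dropout described above (the GRUs are bidirectional, cf. Figure 1).

```
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """GRU encoder, ReLU, max pooling over time, fully connected head."""

    def __init__(self, input_size: int, output_size: int, l: int, h: int,
                 dropout: float = 0.2):
        super().__init__()
        self.gru = nn.GRU(input_size, h, num_layers=l, dropout=dropout,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * h, output_size)  # 2*h: forward + backward directions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y, _ = self.gru(x)     # x: (batch, time, input_size) -> (batch, time, 2*h)
        y = torch.relu(y)
        y, _ = y.max(dim=1)    # collapse the time dimension (length invariance)
        return self.fc(y)
```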
| Task | \(\lambda\) | \(R\) | \(T_{1}\) | \(T_{2}\) | \(N\) | Epochs |
| --- | --- | --- | --- | --- | --- | --- |
| SeqMNIST | \(10^{-4}\) | 40 | 10 | 120 | 20 | 150 |
| DOCC10 | \(10^{-3}\) | 20 | 5 | 25 | 10 | 50 |
| AugMod - L | \(10^{-4}\) | 20 | 10 | 30 | 15 | 50 |
| AugMod - S | 0 | 10 | n.a. | n.a. | 15 | 100 |

Table 1: Hyperparameters for the different tasks. See equation 9 and algorithm 1 for details.
| Task | Input | Output | \(l\) | \(h\) |
| --- | --- | --- | --- | --- |
| SeqMNIST | 1 | 1 | 2 | 100 |
| DOCC10 | 1 | 10 | 3 | 150 |
| AugMod - Large | 2 | 7 | 3 | 150 |
| AugMod - Small | 2 | 7 | 2 | 62 |

Table 2: Details of the model's hyperparameters characterizing the architecture.
## 4 Results and Discussion
For each model architecture listed in Table 2, we train two models: one baseline trained without any training adaptation (standard training), and another trained using nuclear regularization and HLRA (LRA-aware training). We denote these models _Base_ and _LRA_ respectively.
In this section we first present a qualitative analysis of the impact of training adaptation on the models' singular values. Then we show how much we can compress models using our method. Finally we discuss the size-accuracy trade-off permitted by Rank-Tuning.
### Impact of training adaptations on singular values
Both nuclear regularization and HLRA are supposed to act on the singular values of the matrices. We can check that this is indeed the case by plotting the singular values in decreasing order, that is, the list of values \(\sigma_{1},...,\sigma_{m}\) forming the diagonal of the matrix \(\Sigma\) (see equation 1). Figure 1 shows such a plot for the DOCC10 task, using a logarithmic scale.
We observe that for the _Base_ model, the singular values decrease slowly from \(\approx 20\) to \(\approx 0.1\). This is in sharp contrast with the singular values of the _LRA_ model, which decrease steeply from \(\approx 7\) to \(\approx 10^{-4}\). This clearly shows the impact of the training adaptations.
We can see that some matrices have singular values that decrease less rapidly than others. Indeed, the deeper the layer, the slower its singular values decrease. We believe that matrices farther from the input need more independent parameters to process the data they receive; their rank is therefore higher, and this is reflected in their singular values. As a matter of fact, when we run the Rank-Tuning algorithm on the models, we observe that for each matrix the selected rank is correlated with the decrease of the associated curve: the lower the rank, the faster the decrease. Thus, looking at this figure, one can quickly identify which matrices will be hard to compress (meaning that they have a high rank).
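For illustration, a plot like Figure 1 can be produced with a few lines of PyTorch; the heuristic used below for deciding which parameters count as matrices is our own assumption.

```python
import torch
import matplotlib.pyplot as plt

def plot_singular_values(model):
    """Plot the sorted singular values of every 2-D weight matrix, log scale."""
    for name, p in model.named_parameters():
        if p.ndim == 2 and "weight" in name:      # skip biases and 1-D parameters
            s = torch.linalg.svdvals(p.detach())  # sigma_1 >= ... >= sigma_m
            plt.semilogy(s.cpu().numpy(), label=name)
    plt.xlabel("index $i$")
    plt.ylabel("singular value $\\sigma_i$")
    plt.legend(fontsize=6)
    plt.show()
```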
### Compression of neural networks
As we mentioned in section 3, each dataset is split into three sets (train, validation and test). We use these to define the following compression procedure:
1. train the model using the training set. We use the validation set to do early-stopping;
2. after training, use the validation set to search for optimal factorization ranks (sketched below);
3. use the test set to measure performance of the uncompressed model;
4. use the test set to measure performance of the compressed model.
Figure 1: Evolution of the singular values, with and without training adaptations, for the different matrices. Curves above and under the black dashed line correspond to a _Base_ model and an _LRA_ model, respectively. GRUs being bidirectional, each curve is present twice, once for each direction.
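The Rank-Tuning algorithm itself (algorithm 2) is not reproduced here, but step 2 can be sketched as a greedy per-matrix rank search against a validation tolerance \(\Delta_{p}\). The helpers `evaluate` (validation metric) and `set_rank` (rank-\(r\) SVD truncation of one matrix), as well as the linear search order, are our own assumptions rather than the exact procedure.

```python
def tune_ranks(model, matrix_names, delta_p, evaluate, set_rank):
    """For each matrix, keep the smallest rank whose validation score
    stays within delta_p of the uncompressed reference score p*."""
    p_star = evaluate(model)                 # uncompressed reference score
    ranks = {}
    for name, full_rank in matrix_names:     # e.g. one entry per GRU matrix
        best = full_rank
        for r in range(1, full_rank + 1):    # a bisection search would be faster
            set_rank(model, name, r)         # replace W by its rank-r truncation
            if abs(p_star - evaluate(model)) <= delta_p:
                best = r
                break
        set_rank(model, name, best)
        ranks[name] = best
    return ranks
```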
It turns out that the Rank-Tuning algorithm significantly outperforms10 the simpler rank-selection strategies that we exposed in section 2.3.1, albeit at a higher computational cost. In the rest of this section, all models are compressed using Rank-Tuning unless stated otherwise.
Footnote 10: Meaning that models can be compressed further for an equivalent performance degradation.
Results on the different tasks are grouped in Table 3. For a given task, we report the size (number of parameters), FLOP (Floating-point Operations), and performance of the _Base_ and _LRA_ models, before and after applying the compression step (i.e. the Rank-Tuning algorithm). As we can see, our training adaptation method significantly improves the compressibility of the _LRA_ models. With a performance degradation of roughly 1%, we achieve very high compression rates, from 69% to 93% depending on the task, whereas for the _Base_ models (standard training), the compression rates range from 10% to 78%. Note that we systematically obtain better compression rates when using the training adaptations.
We observe that compressing the models almost always leads to a small performance reduction. This is expected: since compression is not lossless, we lose some information (parameters) when factorizing matrices. However, by design these are unimportant parameters, which is why we are able to achieve high compression rates without significantly hurting performance. We also observe in Table 3 that the use of training adaptations can lead to a performance reduction as high as one accuracy point _before_ compression. Were this statistically significant, it would mean that LRA-aware training slightly hurts the model's performance in exchange for improved compressibility. To disentangle this observation from the natural variance due to weight initialization, we trained 10 _Base_ and 10 _LRA_ models on the SequentialMNIST task and compared their MSE. We obtained a mean MSE of \(0.321(\sigma=0.022)\) for the _Base_ models and of \(0.317(\sigma=0.014)\) for the _LRA_ models, where \(\sigma\) denotes the standard deviation. We thus observe no significant difference between the two training methods. Though we did not run the same experiment on the other tasks due to the high computational cost, we believe a similar trend would hold.
Note that we do not report inference speed because we were unable to measure it in a satisfactory manner. Although in theory network compression via matrix factorization decreases both the model's size and its inference time (as explained in section 2.1), in practice inference relies heavily on matrix multiplications optimized by linear algebra libraries. Thus, without an implementation of the two-step multiplication described in equation 4, it is not possible to take advantage of the factorization at inference time. Nonetheless, we report the FLOP metric in Table 3 to give an indication of the speed gain that an adequate implementation could achieve.
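As a sketch of the two-step multiplication described in equation 4: once \(W\) is factorized as \(U_{r}V_{r}\), computing \(Wx\) costs roughly \(mn\) multiply-adds while \(U_{r}(V_{r}x)\) costs \(r(m+n)\). The dimensions below are illustrative only.

```python
import torch

m, n, r = 600, 600, 40                 # illustrative sizes with r*(m+n) << m*n
W = torch.randn(m, n)
U, S, Vh = torch.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * S[:r]                 # (m, r), singular values folded into U
V_r = Vh[:r, :]                        # (r, n)

x = torch.randn(n)
y_full = W @ x                         # one big matmul: m*n multiply-adds
y_fact = U_r @ (V_r @ x)               # two small matmuls: r*(m+n) multiply-adds
# the approximation error is governed by the discarded singular values S[r:]
```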
### Size-accuracy trade-off
The Rank-Tuning algorithm (algorithm 2) is parameterized by the precision tolerance \(\Delta_{p}\). By varying \(\Delta_{p}\), we can produce several compressed models with different compression rates and performance degradations. Note that this can be done without additional computations, by slightly adapting the Rank-Tuning algorithm to use a range \([\Delta_{1},\Delta_{2},...]\).
Figure 2: A representation of the size-accuracy trade-off for the AugMod task. Uncompressed and compressed models are represented by stars and dots, respectively.
We trained several models (Large and Small architectures, with and without LRA-aware training) on the AugMod dataset, then compressed them with \(\Delta_{p}=\frac{i}{1000}p^{*}\) for \(1\leq i\leq 10\). For each of these factorized models, we plot its size and performance relative to the baseline (Large, uncompressed). The results are shown in figure 2. As we can see, LRA-aware training significantly improves over standard training both in terms of compression rate and performance. Moreover, this graph clearly shows one of the key advantages of our method: one can choose the desired size-accuracy trade-off according to what is needed for the model's deployment.
Finally, we discuss one key question in model optimisation: in terms of performance, is it better to use a large compressed model or a small uncompressed one? According to our experiments on the AugMod task, a large model trained with LRA-aware training can be compressed to the size of a small model while retaining superior accuracy (see figure 2, sky-blue curve). However, if we want to obtain a model as small as possible, it is better to compress a small model: on the AugMod dataset, we were able to compress the small model to nearly \(50\times\) smaller than the large model, while retaining \(97\%\) of its accuracy.
## 5 Conclusion
Compressing a neural network while maintaining its performance remains challenging. In this work, we used regularization and training-time factorization to train recurrent neural networks that can be compressed more effectively. Using these training adaptations in conjunction with a post-training rank selection, we were able to reach very high compression rates with little or no performance loss. Our experiments show that the proposed method can be used to select a good trade-off between compression and performance. We leave as future work the application of this method to other architectures, as well as the exploration of combining low-rank approximations with other compression methods such as quantization or distillation.
2310.10909 | Heterogenous Memory Augmented Neural Networks | Zihan Qiu, Zhen Liu, Shuicheng Yan, Shanghang Zhang, Jie Fu | 2023-10-17T01:05:28Z | http://arxiv.org/abs/2310.10909v1

# Heterogenous Memory Augmented Neural Networks
###### Abstract
It has been shown that semi-parametric methods, which combine standard neural networks with non-parametric components such as external memory modules and data retrieval, are particularly helpful in data scarcity and out-of-distribution (OOD) scenarios. However, existing semi-parametric methods mostly depend on independent raw data points - this strategy is difficult to scale up due to both high computational costs and the incapacity of current attention mechanisms with a large number of tokens. In this paper, we introduce a novel heterogeneous memory augmentation approach for neural networks which, by introducing learnable memory tokens with attention mechanism, can effectively boost performance without huge computational overhead. Our general-purpose method can be seamlessly combined with various backbones (MLP, CNN, GNN, and Transformer) in a plug-and-play manner. We extensively evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and OOD conditions and show its competitive performance against task-specific state-of-the-art methods. Code is available at [https://github.com/qiuzh20/HMA](https://github.com/qiuzh20/HMA).
## 1 Introduction
Semi-parametric methods, which parametrize the mapping from an input domain \(\mathcal{X}\) to an output domain \(\mathcal{Y}\) with both neural net parameters and non-parametric data, are widely used in a variety of tasks including but not limited to meta-learning [42, 53], energy-based models [2] and planning [27]. With a non-parametric component in the neural net architecture, the model may better incorporate priors like distances between data points and characterize attributes such as data uncertainty. A typical design is to retrieve data associated with the current input with k-nearest neighbors (kNN). For instance, kNN-LM [29] proposes to perform inference with data retrieval from a small batch of inputs, similar to the concept of support set used in few-shot learning [54]. In practice, the retrieval strategies and parameters, such as the number of nearest neighbors, must be carefully designed for performance and low computational overhead [16, 47].
Instead of using full datasets, one may augment the neural net architecture with _external_ memory. With an external memory module, one can store a tiny amount of data points or latent features obtained through dataset distillation [60], random selection [31], or learned policies [47]. Especially with the attention mechanism employed by NPT [31], a model with external memory can utilize non-parametric components more flexibly than traditional methods such as Gaussian processes [12] due to the learnable dependencies instead of a fixed kernel function. Still, it suffers from the same scalability problem as the attention has to operate directly on a potentially large data set or features.
Inspired by recent work that effectively learns a tiny amount of synthetic data for teaching student networks [60, 77], we propose Heterogeneous Memory Augmentation (HMA), a general approach for learning dependencies between data and augmenting networks for various downstream tasks without high computational overhead. HMA sequentially applies, on the feature space of a backbone network, (a) a real memory augmentation module (RMA), following classical memory augmentation methods [42, 22], and (b) a synthetic memory augmentation module (SMA). Specifically, SMA learns compressed memory entries that encode _dataset-relevant_ information and leverages attention across datapoints (ABD), which considers the cross-relationships within the input batch in a manner similar
to NPT. These learned memory entries assist SMA in working effectively without relying on large batch sizes or additional selection methods. Notably, our HMA is architecture-agnostic and can be plugged into almost any backbone architecture in a few lines of code, without any additional training phases or losses.
In summary, our contributions are:
* We introduce heterogeneous memory augmentation (HMA), a better semi-parametric module that leverages both synthetic (learnable) and real (past data) memory. Compared to previous methods, our HMA can better capture the data dependencies for semi-parametric inference.
* The key component, synthetic memory augmentation (SMA), utilizes attention mechanisms to perform non-parametric inference on top of learned memory slots and inputs. The use of attention across datapoints allows the module to better capture the data dependencies 1) between the input batch and the memory slots as well as 2) between individual datapoints in the input batch, compared to traditional non-parametric techniques like Gaussian processes.
* Our model-agnostic HMA directly operates on the feature space and, therefore, can be incorporated into various backbones in a plug-in-play fashion.
* With learnable memory slots in SMA, HMA is more tailored to downstream tasks. Extensive experiments show its effectiveness in various in-distribution and out-of-distribution tasks.
## 2 Related Work
### Semi-parametric Methods
Retrieval methods leverage the similarity between data points to provide information for prediction. For input \(x\), the prediction probability is \(p=\lambda p_{\text{kNN}}+(1-\lambda)p_{\text{BaseModel}}\) to improve its ability to handle long-tail data and robustness [29]. During testing, the model can directly utilize similar training data for predictions. These characteristics make the retrieval method bring substantial improvements in various tasks of NLP [28, 71, 37] and graph tasks [59]. GNN-LM [40] also builds a graph by combining the samples with their corresponding retrieval data and further enhances the sample features through graph aggregation. Non-Parametric Classifier [44, 69] stores the training data's features and labels, enhancing similarity-based classification. Prototypical Networks [53, 45, 55] employ specifically designed objectives to learn a metric space. Each class's non-parametric "prototype representations" are easily obtained and used for few-shot and zero-shot tasks in this space. Compared to these methods, HMA learns synthetic memory slots directly with gradient descent updates, which can be more tailored for downstream tasks.
Figure 1: The overall framework of HMA. For inputs from some backbone, we concatenate them with label embeddings from the **unknown** token, similar to CLS. In RMA, a momentum memory queue and an attention-based reader provide inputs with cross-batch information. During training, the real memory is updated with features from the momentum encoder and **true** label embeddings. In SMA, attention between datapoints is used to pass information among inputs and synthetic memory slots.
### Attention
Attention [58] can integrate various information and offer a versatile mechanism to uncover and exploit relationships between data. BatchFormer [25] performs attention in batches, allowing the model weights to learn the relationships. NPT [31] deploys Attention between Attributes and Attention between Datapoints to incorporate the learning of representation and relationships between data. However, ABD in NPT leads to a computational complexity of \(O(n^{2})\), which makes it difficult to incorporate a large amount of data and further hinders the feature learning process. To address the efficiency issue, [50] presents inducing points to avoid expensive full self-attention.
### Neural Memory
Memory mechanisms can be used to discover and exploit the relationships between data through contrastive learning [22, 48]. Such relationships are also pervasive in meta-learning. [70] proposes a learnable memory to capture the meta-class information of semantic segmentation during the base-class training procedure. Unlike the works mentioned above that use memory in a non-parametric fashion, [54] utilizes relevant features in the input to recall memory and uses them to modify the network parameters. Meta Networks [42] and Neural Stored-program Memory [35] directly store network parameters from different training stages in memory.
Soft prompts [39, 38, 52] can also be regarded as a form of learnable memory that captures task-relevant information during fine-tuning and aids in leveraging the knowledge within pre-trained models. L2P [64, 63] stores a series of soft prompts and reads them using key-value matching, resulting in remarkable performance during continued learning.
## 3 Preliminaries
**Attention mechanism**[58] is a fundamental component in deep learning models that enables the model to focus on relevant information selectively. It is commonly formulated as follows:
\[\text{Att}(Q,K,V)=\text{softmax}(QK^{T}/\sqrt{d_{k}})V, \tag{1}\]
where \(Q,K,V\) are stacked matrixes of queries, keys, and values. Typically, multi-head attention (MultiHA) is used for large model capacity. Using multiple attention modules in parallel, the model can capture different aspects of the input data and combine their results. The outputs of the attention heads are concatenated to yield the final output:
\[\text{MultiHA}(Q,K,V)=\text{concat}(\text{head}_{1},\cdots,\text{head}_{h}), \text{ where }\text{head}_{i}=\text{Att}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})\]
where \(W_{i}^{Q},W_{i}^{K}\) and \(W_{i}^{V}\) are learnable matrices for the \(d_{k}\)-dimensional queries, keys and values of the \(i\)-th head. By \(AB\) with \(A\in\mathbb{R}^{n\times a\times b}\) and \(B\in\mathbb{R}^{n\times b\times c}\), we mean batched matrix multiplication in Einstein summation notation, einsum("nab,nbc->nac"\(,A,B\)). The output of MHA is computed with additional components, including a feed-forward layer (FF), layer normalization (LN) and a residual connection:
\[\text{MHA}(Q,K,V)=H+\text{FF}(\text{LN}(H)),\text{ where }H=V+\text{ MultiHA}(\text{LN}(Q),\text{LN}(K),\text{LN}(V)) \tag{2}\]
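As a minimal sketch (ours, not the original implementation), equation 1 under the batched einsum convention above can be written as follows; the sizes are illustrative.

```python
import torch

def attention(Q, K, V):
    """Scaled dot-product attention (equation 1), batched over n."""
    d_k = Q.shape[-1]
    scores = torch.einsum("nab,ncb->nac", Q, K) / d_k ** 0.5  # Q K^T / sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V                  # weighted sum of values

n, t, d_k = 4, 10, 16
Q, K, V = (torch.randn(n, t, d_k) for _ in range(3))
out = attention(Q, K, V)               # (n, t, d_k)
```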
**Non-parametric transformer (NPT)**[31] extends the standard transformer in a semi-parametric way by leveraging the dependencies between datapoints with attention among large batched inputs. Specifically, NPT first encodes the input batch \(H\in\mathbb{R}^{bs\times d}\) and reshapes the output into \(H^{\prime}\in\mathbb{R}^{1\times bs\times(d*h)}\), and then applies attention across datapoints (ABD) by treating the input batch dimension as the token dimension:
\[H^{\prime}_{\text{ABD}}=\text{ABD}(H)=\text{MHA}(H^{\prime},H^{\prime},H^{\prime})\in \mathbb{R}^{1\times bs\times(d*h)} \tag{3}\]
Once information has been exchanged between datapoints in the input batch, the ABD output \(H^{\prime}_{\text{ABD}}\in\mathbb{R}^{1\times bs\times(d*h)}\) is reshaped back into \(H_{\text{ABD}}\in\mathbb{R}^{bs\times d\times h}\) to apply a self-attention operation on the feature dimension \(d\), which we term attention across attributes (ABA) to distinguish it from ABD:
\[\text{ABA}(H_{\text{ABD}})=\text{MHA}(H_{\text{ABD}},H_{\text{ABD}},H_{\text {ABD}})\in\mathbb{R}^{bs\times d\times h} \tag{4}\]
ABD and ABA can be applied iteratively several times before feeding the resulting (and reshaped) feature into a classifier or another neural net.
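The reshaping logic of equations 3 and 4 can be sketched as follows. For brevity, `nn.MultiheadAttention` stands in for the full MHA block of equation 2 (which additionally contains layer norms, a residual connection and a feed-forward layer), and all sizes are illustrative.

```python
import torch
import torch.nn as nn

bs, d, h = 32, 20, 8                   # illustrative sizes
H = torch.randn(bs, d, h)              # per-datapoint features

abd = nn.MultiheadAttention(embed_dim=d * h, num_heads=4, batch_first=True)
aba = nn.MultiheadAttention(embed_dim=h, num_heads=2, batch_first=True)

# ABD (eq. 3): fold the feature dims and treat the batch as the token axis,
# so that attention exchanges information between datapoints
x = H.reshape(1, bs, d * h)
H_abd, _ = abd(x, x, x)                # (1, bs, d*h)

# ABA (eq. 4): reshape back and attend over the d attributes of each datapoint
y = H_abd.reshape(bs, d, h)
H_aba, _ = aba(y, y, y)                # (bs, d, h)
```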
## 4 Heterogenous Memory Augmentation
HMA performs two consecutive steps on the extracted feature batch (section 4.1): 1) real memory augmentation (RMA, section 4.2), and 2) synthetic memory augmentation (SMA, section 4.3). As semi-parametric methods heavily rely on the quality of their non-parametric components, we employ RMA to _expand the non-parametric part implicitly_. Subsequently, we combine it with synthetic memory and apply attention between datapoints. The final output feature from SMA is fed into a linear projection head to compute the logits for classification. We summarize HMA in the supplementary material.
### Feature Extraction
Given inputs \(X\) and labels \(Y\) with batch size \(bs\), we compute the features \(h^{1}=E(X)\in\mathbb{R}^{bs\times d_{1}}\), where \(E(\cdot)\) can be any backbone and \(d_{1}\) is the feature embedding dimension. In line with the setup of NPT, we incorporate labels as attributes. However, directly adding the embeddings of the true labels would leak information. To address this, for an \(n\)-class problem, we replace the true labels \(Y\) with an additional label, denoted n+1, and treat it as the CLS token, as is common in the transformer literature [14]. We compute the label embedding \(e=E_{\text{label}}(\textbf{n+1})\in\mathbb{R}^{bs\times d_{2}}\), where \(d_{2}\) is the dimension of the label embedding. After concatenation, we obtain the aggregated features \(C^{1}=[h^{1};e]\in\mathbb{R}^{bs\times d},d=d_{1}+d_{2}\). This concatenated representation \(C^{1}\) is used in the subsequent modules.
### Real Memory Augmentation (RMA)
The properties of non-parametric components greatly influence the effectiveness of semi-parametric methods. NPT uses large batch sizes to reduce the influence of randomly selected non-parametric components. Retrievers ensure reference data quality through top-k selection. We aim to develop a method that effectively provides information beyond individual batches.
Drawing inspiration from MoCo [22], which employs a momentum-updated memory queue to provide stable and extensive contrastive samples, we introduce a real memory buffer \(M_{\text{buffer}}\in\mathbb{R}^{m_{1}\times d}\) with size \(m_{1}\) (this buffer is empty at the beginning) and a momentum encoder \(E_{m}\).
**Reading.** We utilize MHA (equation 2) to extract information from the memory buffer, thereby implicitly providing cross-batch information for the subsequent SMA. To achieve this, we first reshape \(M_{\text{buffer}}\) from \(\mathbb{R}^{m_{1}\times d}\) to \(\mathbb{R}^{1\times m_{1}\times d}\) and \(C^{1}\) from \(\mathbb{R}^{bs\times d}\) to \(\mathbb{R}^{bs\times 1\times d}\). Then, we repeat \(M_{\text{buffer}}\) \(bs\) times along the first dimension to obtain \(M^{\prime}_{\text{buffer}}\in\mathbb{R}^{bs\times m_{1}\times d}\). Subsequently, we concatenate \(M^{\prime}_{\text{buffer}}\) and \(C^{1}\) along the second dimension, resulting in \(C^{1}_{\text{aug}}\in\mathbb{R}^{bs\times(1+m_{1})\times d}\). RMA computes the output feature \(C^{2}\) as:
\[C^{2}=\text{MHA}(C^{1},C^{1}_{\text{aug}},C^{1}_{\text{aug}})\in\mathbb{R}^{ bs\times 1\times d} \tag{5}\]
After RMA, \(C^{2}\) incorporates information from real data beyond a single batch. In section 5.3, we observe that using RMA alone seldom brings improvements; moreover, while using SMA alone can lead to some improvements, adding RMA achieves pronounced improvements.
**Writing.** We insert the new aggregated input features \([E_{m}(X),E_{\text{label}}(Y)]\in\mathbb{R}^{bs\times d}\) into the memory queue, and update the momentum encoder with the momentum update rule as in MoCo: \(E^{t+1}_{m}=\lambda E^{t}_{m}+(1-\lambda)E^{t+1}\), in which \(E^{t+1}\) is the SGD-updated encoder obtained from \(E^{t}\).
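A minimal sketch of the buffer and the momentum update follows; the class and function names, and the value of \(\lambda\), are our own placeholders.

```python
import torch
from collections import deque

class RealMemory:
    """FIFO buffer of [momentum-encoded feature; true-label embedding] rows."""
    def __init__(self, m1):
        self.queue = deque(maxlen=m1)          # oldest entries are evicted

    def write(self, feats, label_emb):         # (bs, d1), (bs, d2)
        for row in torch.cat([feats, label_emb], dim=1).detach():
            self.queue.append(row)

    def read(self):                            # -> (m1, d) tensor, or None if empty
        return torch.stack(list(self.queue)) if self.queue else None

@torch.no_grad()
def momentum_update(enc_m, enc, lam=0.999):
    """E_m^{t+1} = lam * E_m^t + (1 - lam) * E^{t+1}, as in MoCo."""
    for pm, p in zip(enc_m.parameters(), enc.parameters()):
        pm.mul_(lam).add_(p, alpha=1 - lam)
```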
### Synthetic Memory Augmentation (SMA)
In NPT, the ABD mechanism allows models to utilize non-parametric components flexibly. We can also leverage this mechanism to introduce learnable synthetic memory, which captures dataset-relevant information. These synthetic memories also serve as compact non-parametric components for the ABD mechanism, eliminating NPT's reliance on large batch sizes.
Although we can directly initialize a series of learnable parameters as memory slots with dimension \(d_{1}+d_{2}\), we also aim to incorporate label information into the inference process of SMA. Specifically, for each category \(i\in[n]\), we initialize \(m_{2}\) learnable memory slots. Each memory slot lies in \(\mathbb{R}^{d_{1}}\) and has a corresponding label \(y_{i}\). With the label embedder \(E_{\text{label}}\), we map the label \(y_{i}\) to the label embedding \(e_{\text{syn},i}\in\mathbb{R}^{d_{2}}\). By concatenating all learnable memory slots and label embeddings, we obtain a learnable memory \(M_{\text{SMA}}\in\mathbb{R}^{(n*m_{2})\times(d_{1}+d_{2})}\). We reshape \(C^{2}\) and \(M_{\text{SMA}}\) and concatenate them to obtain \(C^{2}_{\text{aug}}\in\mathbb{R}^{1\times(bs+m_{2}*n)\times d}\). The SMA output features are obtained by
\[C^{3}=\text{MHA}(C^{2},C^{2}_{\text{aug}},C^{2}_{\text{aug}})\in\mathbb{R}^{ 1\times bs\times d}. \tag{6}\]
Finally, we use a linear projection head to compute the logits from \(C^{3}\). During the training phase, the learnable memory slots are updated by gradient descent, while the label embeddings are regenerated using the updated embedder \(E_{\text{label}}\).
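A sketch of the synthetic memory and the SMA read of equation 6 is given below, again with `nn.MultiheadAttention` standing in for the MHA block; the dimensions and number of heads are illustrative placeholders.

```python
import torch
import torch.nn as nn

n_cls, m2, d1, d2 = 10, 8, 100, 28                  # illustrative sizes
d = d1 + d2

slots = nn.Parameter(torch.randn(n_cls * m2, d1))   # m2 learnable slots per class
label_embed = nn.Embedding(n_cls + 1, d2)           # class n+1 is the CLS-like label
slot_labels = torch.arange(n_cls).repeat_interleave(m2)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

def sma(C2):                                        # C2: (bs, d), output of RMA
    mem = torch.cat([slots, label_embed(slot_labels)], dim=1)  # (n_cls*m2, d)
    C2_aug = torch.cat([C2, mem], dim=0).unsqueeze(0)          # (1, bs+n_cls*m2, d)
    out, _ = attn(C2.unsqueeze(0), C2_aug, C2_aug)  # queries are the real inputs
    return out.squeeze(0)                           # (bs, d), fed to the linear head

C3 = sma(torch.randn(32, d))
```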
## 5 Experiments
In this section, we evaluate the performance of HMA and explore its characteristics. Section 5.1 demonstrates that HMA can be combined with various backbones and improves image and graph tasks. Following the success of semi-parametric methods on OOD tasks, in section 5.2 we examine HMA's out-of-distribution generalization capabilities under different distributional shifts. Finally, in section 5.3, we demonstrate the characteristics of both the real and synthetic memory slots. The ablation study shows that, while ABD by itself can bring improvements, HMA, with the help of real memory and synthetic memory slots, consistently achieves competitive performance against task-specific state-of-the-art methods.
**Experiment Setup.** Our experiments strictly follow the basic settings (batch size, learning rate schedule, number of epochs, optimizer, evaluation metric, and test model selection) of previous works for a fair comparison. For the CIFAR-10 task, we use the codebase3. Each method is trained for 200 epochs, and the final test accuracies over three random seeds are reported. For the ViT [15] tuning task, we follow [52] to tune the ViT-B/32 model, pretrained on ImageNet-21K [13]. For the Colored MNIST task (Figure 2), we use the codebase from [3]. For the ogb molecular property prediction tasks (Table 3), we use the official codebase. For the real-world graph ID tasks (Table 4) and graph OOD tasks (Tables 6 and 5), we follow the official implementations of G-mixup [21] and CIGA [8]. More details can be found in the supplementary material.
Footnote 3: [https://github.com/kuangliu/pytorch-cifar](https://github.com/kuangliu/pytorch-cifar)
**Table Notations.** We have **bolded** the best results and underlined the second-best results in all tables. To present the effects of the different components of HMA and provide a comprehensive view, we include the results of ablations of each component in the subsequent tables: 1) RMA denotes only real memory augmentation (section 4.2); 2) ABD and 3) ABD+SYN denote the use of only attention between datapoints (section 4.3) without and with learnable synthetic memory, respectively; 4) ABD+RMA denotes using attention between datapoints together with RMA.
### HMA is a General Augmentation Mechanism
We initially test HMA on CIFAR-10, adopting a ResNet-18 [23] encoder similar to the one used in NPT [31]. The original NPT framework reported an accuracy of 93.7% on CIFAR-10, which is even lower than using ResNet alone (93.9%). HMA achieves an accuracy of 95.54\(\pm\)0.11%, slightly outperforming the backbone ResNet (which achieves 95.43\(\pm\)0.01%). We acknowledge that the improvement is minor, which may be attributed to the fact that this setting has already been extensively studied.
To evaluate the performance of HMA on larger models and its adaptability to pre-trained models, we conduct experiments following the setup of ViT [15] on the PETS37 [46], FLOWERS102 [43], CIFAR100 [32], and DTD [10] datasets. We perform a simple hyperparameter search for HMA using \(m_{1}=1,2,4\times bs\) and \(m_{2}=4,8,16\). In addition to directly tuning the ViT backbone, we compare HMA with the SOTA methods [52] that introduce soft prompts (referred to as learnable memory in that paper). "prompt-p" in the table indicates adding soft prompts only at the input layer, similar to p-tuning [39]. "prompt-a" refers to independently adding soft prompts at each layer of ViT, similar to p-tuning v2 [38]. PARAM refers to the ablation where no non-parametric memory is introduced and only the parameters of the HMA module (the MHA of equation 2) are used. We report the mean and standard deviation over 3 seeds. The results are summarized in Table 1.
From Table 1, we can observe that the complete HMA consistently achieves better results than the best prompt tuning. The inferior performance of prompt tuning may be attributed to the interference caused by inserting new randomly initialized parameters. In contrast, HMA operates solely in the feature space of the backbone's output, avoiding such interference and allowing for more effective utilization of the original parameters.
We also conduct experiments following [52] in the scenario where only the CLS token and head of the pretrained model are tuned; the results are shown in Table 2. Since HMA and prompt tuning are orthogonal techniques, we also examine the effect of adding HMA on top of prompt tuning. Although HMA, operating solely in the feature space, does not show as significant an improvement as prompt tuning when most of the model parameters are fixed, it can be combined with prompt tuning to achieve more pronounced improvements.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & DTD & PETS37 & FLOWERS102 & CIFAR100 & AVG \\ \hline \hline ViT & 75.11\(\pm\)0.24 & 90.71\(\pm\)0.07 & 98.48\(\pm\)0.40 & 91.48\(\pm\)0.27 & 88.95 \\ prompt-p & 75.69\(\pm\)0.56 & 90.49\(\pm\)0.22 & 98.59\(\pm\)0.22 & 91.61\(\pm\)0.14 & 89.09 \\ prompt-a & 75.35\(\pm\)0.81 & 90.47\(\pm\)0.28 & 98.42\(\pm\)0.11 & 91.62\(\pm\)0.07 & 88.97 \\ \hline \hline param & 75.12\(\pm\)0.67 & 90.27\(\pm\)0.05 & 98.46\(\pm\)0.29 & 91.69\(\pm\)0.08 & 88.89 \\ rma & 75.67\(\pm\)0.16 & 89.96\(\pm\)0.31 & 98.61\(\pm\)0.05 & 91.63\(\pm\)0.31 & 88.97 \\ abd & 75.89\(\pm\)1.04 & 90.66\(\pm\)0.20 & 98.57\(\pm\)0.05 & 91.65\(\pm\)0.14 & 89.19 \\ abd+rma & 75.69\(\pm\)0.40 & 90.54\(\pm\)0.77 & 98.69\(\pm\)0.14 & 91.63\(\pm\)0.19 & 89.14 \\ abd+syn & 75.69\(\pm\)0.75 & 90.09\(\pm\)1.10 & 98.49\(\pm\)0.15 & 91.72\(\pm\)0.08 & 89.00 \\ HMA & **76.05\(\pm\)**0.39 & **91.08\(\pm\)**0.22 & **98.70\(\pm\)**0.16 & **91.93\(\pm\)**0.19 & **89.44** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy of full fine-tuning pretrained ViT on 4 image datasets. PROMPT-P and PROMPT-A indicate adding soft prompts only at the input layer and at each layer, respectively. PARAM is the parameter ablation. Other notations are introduced in 5.
To further validate the effectiveness of HMA with diverse backbones, we combine HMA with GCN [30], GIN [72], and their variants GCN-v and GIN-v [18, 36] on three molecular property prediction tasks from the ogb benchmark [26]. The results are summarized in Table 3. HMA brings improvements on 10 of the 12 reported AUROCs.
Furthermore, we follow G-mixup [21] and compare HMA with various augmentation methods, including DropEdge [51], DropNode [74], Subgraph [61], and Manifold-Mixup [62], on different datasets. All methods use the same GIN [72] backbone. The results are summarized in Table 4. We can see that the three components of HMA have a similar effect as on CIFAR-10, and the full HMA consistently outperforms the vanilla GIN backbone. By performing a meaningful mixup with the help of an elaborately designed graphon, G-mixup achieves better performance than simply mixing graphs in feature space [62]. Although HMA also operates only in feature space, it significantly outperforms Manifold-Mixup and shows competitive results compared to G-mixup.
### HMA Performs Competitively on OOD Tasks
We test the OOD performance of HMA on the Colored MNIST dataset introduced in IRM [3]. This task is distinct from traditional MNIST as it introduces misleading correlations between labels and colors. Specifically, digits 0-4 and 5-9
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & IMDB-B & IMDB-M & REDD-B & REDD-M5 & REDD-M12 & AVG \\ \hline vanilla & 71.55\(\pm\)3.53 & 48.83\(\pm\)2.75 & 92.59\(\pm\)0.86 & 55.19\(\pm\)1.02 & 50.23\(\pm\)0.83 & 63.68 \\ D-edge & 72.20\(\pm\)1.82 & 48.83\(\pm\)3.02 & 92.00\(\pm\)1.13 & 55.10\(\pm\)0.44 & 49.77\(\pm\)0.76 & 63.58 \\ D-node & 72.16\(\pm\)0.28 & 48.33\(\pm\)0.98 & 90.25\(\pm\)0.98 & 53.26\(\pm\)4.99 & 49.95\(\pm\)1.70 & 62.79 \\ S-graph & 68.50\(\pm\)0.86 & 47.25\(\pm\)3.78 & 90.33\(\pm\)0.87 & 54.60\(\pm\)3.15 & 49.67\(\pm\)0.90 & 62.07 \\ M-mixup & 70.83\(\pm\)1.04 & 49.88\(\pm\)1.34 & 90.75\(\pm\)1.78 & 54.95\(\pm\)0.86 & 49.81\(\pm\)0.80 & 63.24 \\ G-mixup & 71.94\(\pm\)3.00 & **50.46\(\pm\)**1.49 & **92.90\(\pm\)**0.87 & 55.49\(\pm\)0.53 & **50.50\(\pm\)**0.41 & 64.25 \\ \hline RMA & 71.17\(\pm\)4.75 & 49.89\(\pm\)2.50 & 91.92\(\pm\)1.70 & 55.23\(\pm\)0.58 & 49.32\(\pm\)1.45 & 63.51 \\ ABD & 70.67\(\pm\)3.25 & 47.44\(\pm\)1.84 & 92.42\(\pm\)0.14 & 54.87\(\pm\)0.45 & 49.08\(\pm\)0.86 & 62.90 \\ abd+syn & 71.67\(\pm\)3.25 & 48.97\(\pm\)2.97 & 91.08\(\pm\)1.23 & 55.90\(\pm\)0.95 & 49.29\(\pm\)1.00 & 63.38 \\ HMA & **72.62\(\pm\)**0.48 & 49.78\(\pm\)0.74 & 92.80\(\pm\)1.37 & **56.37\(\pm\)**1.65 & 50.27\(\pm\)0.60 & **64.37** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Accuracies of different augmentation methods on real-world graphs. We strictly follow the official implementation and report the baseline results from G-mixup [21].
\begin{table}
\begin{tabular}{l c c} \hline \hline Dataset & DTD & PETS37 \\ \hline \hline vit\(\rightarrow\)+hma & 74.01\(\pm\)0.29\(\rightarrow\)74.91\(\pm\)0.39 & 89.17\(\pm\)0.07\(\rightarrow\)89.72\(\pm\)0.22 \\ prompt-p\(\rightarrow\)+hma & 73.98\(\pm\)0.56\(\rightarrow\)73.95\(\pm\)0.73 & 89.22\(\pm\)0.22\(\rightarrow\)89.58\(\pm\)0.60 \\ prompt-a\(\rightarrow\)+hma & 74.75\(\pm\)0.86\(\rightarrow\)**75.44\(\pm\)**0.06 & 89.88\(\pm\)0.28\(\rightarrow\)**90.71\(\pm\)**0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy of only tuning CLS token and head of ViT. +HMA indicates adding HMA to the original method. The results on the right side of \(\rightarrow\) represent the performance after adding HMA.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Dataset & bace & bbbp & HIV & AVG \\ \hline GCN & 79.15\(\pm\)1.44 & 68.87\(\pm\)1.51 & 76.06\(\pm\)0.97 & 74.69 \\ GCN+hma & 80.59\(\pm\)1.15 & 69.96\(\pm\)0.55 & 77.26\(\pm\)1.42 & 75.85(\(\uparrow\)1.16) \\ \hline GCN-v & 68.93\(\pm\)6.95 & 67.80\(\pm\)2.35 & 75.99\(\pm\)1.19 & 70.91 \\ GCN-v+hma & 72.53\(\pm\)1.36 & 68.26\(\pm\)0.85 & 76.51\(\pm\)1.70 & 72.73(\(\uparrow\)1.82) \\ \hline GIN & 72.97\(\pm\)4.00 & 68.17\(\pm\)1.48 & 75.58\(\pm\)1.40 & 72.24 \\ GIN+hma & 77.18\(\pm\)2.20 & 69.47\(\pm\)1.50 & 76.47\(\pm\)0.17 & 74.38(\(\uparrow\)2.14) \\ \hline GIN-v & 73.46 \(\pm\)5.24 & 69.71\(\pm\)1.92 & 77.07\(\pm\)1.40 & 73.41 \\ GIN-v+hma & 76.56\(\pm\)1.38 & 68.43\(\pm\)1.19 & 75.83\(\pm\)1.23 & 73.61(\(\uparrow\)0.20) \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUROC and standard deviation of backbones and HMA on ogb benchmark. We have marked the relative improvements over the backbone with an upward arrow (\(\uparrow\)) for the average results.
are considered two separate classes. Two training domains, parameterized by \(p_{1}\) and \(p_{2}\), can be accessed during training: in the first domain, green digits have probability \(p_{1}\) of being in 0-4; in the second, the probability is \(p_{2}\). During testing, there are two sets: 1. reversed color: green digits have a 90% chance of being in 0-4; 2. gray: blue and red images are turned into gray. The original image size (28,28) is downsized to (14,14) for computational convenience. HMA is tested with the MLP from IRM [3] and the CNN from [19], under both downsized and original image sizes. We modify \((p_{1},p_{2})\) from \((0.2,0.1)\) to \((0,0.02)\) to more accurately demonstrate the model's performance under distribution shifts.
This task provides a proof of concept for the semi-parametric approach in addressing out-of-distribution (OOD) tasks. While there is a distribution shift, there are still certain data points within the training set that share the same distribution as the test data. The semi-parametric approach effectively reduces the distribution shift by leveraging these data points as a support set. The average accuracy of the two test domains is presented in Figure 2. With different backbones, image sizes, and distribution shifts, we can observe that HMA consistently improves OOD performance.
In the following graph tasks, we follow CIGA [8] to examine HMA on diverse distribution shifts. Our baselines include ERM [57], SOTA interpretable GNNs like ASAP Pooling[49], and DIR[66], and SOTA OOD objectives like IRM[3], v-Rex[33], and IB-IRM[1].
In Table 6, we examine HMA in graph size shift scenarios on datasets converted from the TU benchmarks [41]. Distribution shifts are generated through size-specific dataset splits following [8][73]: the training set comprises graphs with sizes smaller than the 50th percentile, 10% of which are held out for the validation set, and the test set consists of graphs with sizes larger than the 90th percentile. The Matthews correlation coefficient [4] is adopted due to class imbalance. In Table 5, we further test HMA on larger real-world datasets with more complicated node degree biases. We use the Graph-SST datasets following CIGA [8]. Graphs are assigned based on their average degree to generate distribution shifts: the training set includes graphs with average degrees smaller than the 50th percentile, the validation set those with average degrees between the 50th and 80th percentiles, and the test set the rest. Graphs in Twitter are assigned in the inverse order to investigate OOD generalization from large-degree graphs to small ones.
ERM is a strong baseline for these real-world tasks compared with specifically designed algorithms [19]. Our HMA consistently improves ERM baselines without introducing complex OOD objectives or learning strategies.
### Heterogenous Memory Shows Reasonable Properties
We use T-SNE [56] to visualize the relationships between the synthetic memory features and real data features in \(C^{2}\). We excluded the label embeddings to focus on the synthetic memory slots since they would result in identical embeddings for instances with the same label and affect the T-SNE results.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & SST5 & TWITTER & avg \\ \hline ERM & 43.72\(\pm\)0.59 & 63.54\(\pm\)2.10 & 53.63 \\ ASAP & **44.16\(\pm\)**1.36 & 60.68\(\pm\)2.10 & 52.42 \\ DIR & 41.12\(\pm\)1.96 & 59.85\(\pm\)2.98 & 50.49 \\ \hline IRM & 43.69\(\pm\)1.26 & 63.50\(\pm\)1.23 & 53.60 \\ V-REX & 43.28\(\pm\)0.52 & 63.21\(\pm\)1.57 & 53.25 \\ IB-IRM & 40.85\(\pm\)2.08 & 61.26\(\pm\)1.20 & 51.06 \\ CIGAV1 & 43.70\(\pm\)1.98 & 62.02\(\pm\)2.28 & 52.68 \\ CIGAV2 & 43.30\(\pm\)0.90 & 61.80\(\pm\)2.03 & 52.55 \\ \hline HMA & 44.03\(\pm\)1.01 & **65.12\(\pm\)**2.17 & **54.58** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Acc of OOD algorithms on averaged degrees shift.
Figure 2: OOD results for the colored MNIST experiments (left: original size, right: downsized). We change the OOD sample ratio from \((0.2,0.1)\) to \((0,0.02)\).
We present the relationships between the synthetic memory slots in Figure 3 (left). It can be observed that slots belonging to the same class cluster together, indicating that the synthetic memory slots can learn class-specific information with the help of label embeddings. In Figure 3 (right), we show the relationships between three classes of synthetic memory slots and their corresponding real data features. Different classes of real data features form separate clusters, while the three separate clusters from Figure 3 (left) merge into one. This suggests that the learned memory in the synthetic memory slots differs from the real data features. The improvements in sections 5.1 and 5.2 may come from the information beyond the instance level contained in the synthetic memory slots.
We also visualize the synthetic memory slot features learned in the graph tasks in Figure 4. Although the synthetic memory slots of class 1 scatter over the clusters of the other classes (Figure 4, left), interestingly, the features of class 1 correspondingly mix with those of the other three classes in feature space (Figure 4, right). This correspondence indicates that the synthetic memory slots might capture class-level information, similar to prototype representations [53].
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & NCI1 & NCI109 & proteins & DD & avg \\ \hline ERM & 0.15\(\pm\)0.03 & 0.16\(\pm\)0.07 & 0.19\(\pm\)0.07 & 0.29\(\pm\)0.1 & 0.198 \\ ASAP & 0.16\(\pm\)0.10 & 0.15\(\pm\)0.07 & 0.22\(\pm\)0.16 & 0.21\(\pm\)0.08 & 0.185 \\ DIR & 0.21\(\pm\)0.06 & 0.13\(\pm\)0.05 & 0.25\(\pm\)0.14 & 0.20\(\pm\)0.10 & 0.198 \\ \hline IRM & 0.17\(\pm\)0.02 & 0.14\(\pm\)0.01 & 0.21\(\pm\)0.09 & 0.22\(\pm\)0.08 & 0.185 \\ V-REX & 0.15\(\pm\)0.04 & 0.15\(\pm\)0.04 & 0.22\(\pm\)0.06 & 0.21\(\pm\)0.07 & 0.183 \\ IB-IRM & 0.12\(\pm\)0.04 & 0.15\(\pm\)0.06 & 0.21\(\pm\)0.06 & 0.15\(\pm\)0.13 & 0.158 \\ CIGAV1 & 0.25\(\pm\)0.08 & 0.26\(\pm\)0.09 & **0.35\(\pm\)**0.17 & 0.21\(\pm\)0.12 & **0.268** \\ CIGAV2 & **0.28\(\pm\)**0.10 & **0.27\(\pm\)**0.10 & 0.30\(\pm\)0.09 & 0.19\(\pm\)0.12 & 0.260 \\ \hline HMA & 0.19\(\pm\)0.05 & 0.18\(\pm\)0.04 & 0.34\(\pm\)0.07 & **0.32\(\pm\)**0.03 & 0.258 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Matthews correlation coefficients of different OOD algorithms on graph size shifts for real-world graphs. We strictly follow CIGA [8] and reproduce the results of ERM, CIGAv1&v2.
Figure 4: T-SNE on REDD-M5 for synthetic memory features (left) and input features (right)
Figure 3: T-SNE on CIFAR-10 for synthetic memory features (left) and real data features with synthetic memory features (right). ’0-train’ and ’0-mem’ refer to real and memory data features, respectively. The preceding label corresponds to the class from CIFAR10.
**Ablation Study.** The in-distribution performance on REDD-M5 is depicted in Figure 5(a) for different memory configurations, while the out-of-distribution performances on Twitter and SST are illustrated in Figures 5(b) and (c), respectively. It can be seen that while a suitable increase in the synthetic memory size \(m_{2}\) can bring improvement, a larger \(m_{2}\) will hurt HMA's performance. One potential explanation is that a large \(m_{2}\) increases the optimization difficulty of ABD. Furthermore, it can be seen that with the addition of real memory augmentation, an increase in \(m_{2}\) provides more improvement.
From Table 1 and Table 4, we find that using ABD or SMA alone is beneficial in certain scenarios, but not significantly so. A possible reason for this observation is the sensitivity of semi-parametric approaches: by relying solely on SMA without any data filtering, the quality of the non-parametric elements is not assured. However, once RMA is incorporated, the effectiveness of HMA surpasses that of the individual deployments of RMA and SMA. This aligns with our conjecture that RMA explicitly offers more information to the synthetic memory learning process.
## 6 Discussions, Conclusions, Limitations and Future work
**Two-stage Augmentation.** While it may seem more natural to perform both RMA and SMA in one layer, _i.e._, to compute MHA with features aggregated from the input features, the buffer features, and the learnable features, the overall complexity would be quadratic due to the SMA operations. While it is possible to use linear approximations to standard attention instead (_e.g._, Performer [9]), this is beyond our scope and we leave it to future work.
We propose a universal memory augmentation method that can be seamlessly integrated with various backbone architectures. SMA leverages parametric and non-parametric approaches in the complementary combination of attention between datapoints and learnable memory, thereby enabling the capture of class-specific and task-aware information, which is beneficial for ID and OOD downstream tasks. Additionally, based on a momentum memory queue and attention-based reading, RMA enhances SMA with cross-batch information. Our experimental results demonstrate that HMA can consistently improve the performance of backbones and performs competitively with task-specific best-performing designs.
2304.06038 | Knowledge-Distilled Graph Neural Networks for Personalized Epileptic Seizure Detection | Qinyue Zheng, Arun Venkitaraman, Simona Petravic, Pascal Frossard | 2023-04-03T15:37:40Z | http://arxiv.org/abs/2304.06038v1

# Knowledge-Distilled Graph Neural Networks for Personalized Epileptic Seizure Detection
###### Abstract
Wearable devices for seizure monitoring detection could significantly improve the quality of life of epileptic patients. However, existing solutions that mostly rely on full electrode set of electroencephalogram (EEG) measurements could be inconvenient for every day use. In this paper, we propose a novel knowledge distillation approach to transfer the knowledge from a sophisticated seizure detector (called the teacher) trained on data from the full set of electrodes to learn new detectors (called the student). They are both providing lightweight implementations and significantly reducing the number of electrodes needed for recording the EEG. We consider the case where the teacher and the student seizure detectors are graph neural networks (GNN), since these architectures actively use the connectivity information. We consider two cases (a) when a single student is learnt for all the patients using pre-selected channels; and (b) when personalized students are learnt for every individual patient, with personalized channel selection using a Gumbel-softmax approach. Our experiments on the publicly available Temple University Hospital EEG Seizure Data Corpus (TUSZ) show that both knowledge-distillation and personalization play significant roles in improving performance of seizure detection, particularly for patients with scarce EEG data. We observe that using as few as two channels, we are able to obtain competitive seizure detection performance. This, in turn, shows the potential of our approach in more realistic scenario of wearable devices for personalized monitoring of seizures, even with few recordings.
Keywords: Personalized seizure detection, Graph neural networks, Knowledge distillation.
## 1 Introduction
Epilepsy is a neurological disorder that is characterized by recurring, unprovoked seizures caused by surges of electrical activity in the brain and affects
nearly three million people [26]. About one third of patients do not respond to treatment by drugs [17]. Hence, real-time seizure monitoring is crucial for improving the patients' quality of life, for example by alerting caregivers that their assistance is needed once a seizure occurs. Continuous monitoring of the electroencephalogram (EEG) is useful in identifying and even predicting seizures in critically ill patients [19], particularly with the use of deep-learning approaches [27, 21, 12, 1, 23]. The monitoring is usually performed in a hospital environment over the course of several days, which makes it infeasible to monitor patients long-term in non-ambulatory settings. Wearable devices could overcome the need for specialized intrusive medical equipment and a hospital environment, and enable real-time seizure monitoring on a daily basis. Existing measurement devices [3] that use EEG head caps with over 20 wired electrodes are, however, uncomfortable and difficult to wear over prolonged intervals, and lighter, more discreet wearables are desirable for patients. Previous studies have attempted to reduce the number of EEG electrodes needed for seizure detection [8, 28, 9] with promising results. However, these solutions typically involve training detection systems from scratch for the new setting, and fail to incorporate the already existing historical EEG data of the patient recorded with many electrodes. Due to the nature of the disorder itself, seizure data is sparse in the number of available seizures and difficult to collect, and it is thus important to meaningfully use previous data. Further, it is known that the signals from the different regions of the brain (captured through the EEG electrodes) are not independent and exhibit strong inter-channel dependencies that can be viewed as a brain graph or network. Hence, we ask the question:
_How to transfer information gained from a full set of channels/graph to settings with a reduced number of channels/subgraph while actively using the connectivity information?_
In this paper, we address this question by developing a novel approach for knowledge distillation (KD) with graph neural networks (GNNs) applied to seizure detection. Our motivation for the use of GNNs comes from the observation that they have been used extensively in applications with graph-structured data, and more recently have shown to result in promising seizure detection performance [22, 30]. More specifically, we propose a seizure detection model that consists of three interconnected blocks. Firstly, we have the knowledge distillation block, whereby we transfer the knowledge from a pre-trained seizure detection model to obtain a model that is light-weight and uses only a reduced set of input channels and the corresponding subgraph. Secondly, a channel selection block, which takes the full multi-channel input and retains the signal only on a reduced set of channels that are either pre-selected or learnt in a fully data-driven manner. Lastly, we have the GNN based seizure detection model that classifies the input in the form of the multi-channel signal from a reduced set of channels/electrodes and the corresponding subgraph, into seizure or non-seizure segments.
Our goal is to also investigate the influence of two important aspects in seizure detection performance with reduced channels: (i) prior knowledge (through the
use of the teacher model), and (ii) personalization, i.e., patient-specific detection. The specific contributions of our paper are as follows:
* We propose new GNN models for epileptic seizure detection that build on knowledge distillation to generate models that are both light-weight and work on subgraphs of reduced nodes/channels. To the best of our knowledge, this is the first KD approach dedicated to obtaining subgraph GNNs with reduced channels.
* We propose two different models for seizure detection with reduced channels, namely one with pre-selected (clinically motivated) channels and one with data-driven channels obtained from Gumbel softmax channel selection.
* By applying our approach to a pre-trained GNN that uses the full electrode set, we obtain personalized (patient-specific) and global (non patient-specific) GNN models that are both lightweight (using only \(\approx 3\%\) of the parameters of the teacher) and require only a reduced subset of electrodes (as few as \(10\%\) of the original electrodes).
* We demonstrate the results of our approach on the TUH Seizure Corpus, which is one of the most popular and diverse datasets for epileptic seizures.
* We show empirically that the combination of personalization and KD could significantly improve seizure detection in cases of very scarce data, and in cases when the measurements are made from the relatively 'non-informative' electrodes.
Finally, it could be noted that epileptic seizure detection is a very active research problem. In particular, there has been a steady increase in the number of graph-based approaches, and particularly GNNs, applied to the problem of seizure detection and classification [30, 22, 5]. However, to the best of our knowledge, no prior works exist that tackle the problem of channel reduction with GNNs and KD, particularly for seizure detection. While KD has been used in multiple settings related to GNNs [6, 7, 15, 4, 31, 32, 33], it has not been employed for the task of data-driven subgraph identification, which is the main objective of this paper.
## 2 Preliminaries
We now briefly review some of the basic concepts from GNNs and KD.
**Graph Neural Networks** Graph neural networks (GNNs) refer to a class of deep learning models designed for graph-structured data [24]. GNNs learn representations of the nodes/channels in a graph and predict labels or properties of nodes/edges by actively using the underlying graph structure. Due to the graph structure, GNNs naturally provide an aspect of interpretability or explainability. GNNs have been shown to significantly outperform CNNs and other non-graph approaches in many applications. While the study and development of GNNs is an active research area, we consider the specific case of graph convolutional networks (GCNs) in our work, since they form one of the simplest and most popular GNNs that directly generalize the convolution
operation from CNNs to a graph setting [16]. A multi-layer GCN has the layer-wise propagation rule in the hidden layers:
\[H^{(l+1)}=\sigma(AH^{(l)}\Theta^{(l)}) \tag{1}\]
where \(H^{(l)}\in\mathbb{R}^{N\times D}\) denotes the hidden node features at the \(l\)-th layer, with \(H^{(0)}\) denoting the input; \(\sigma\) is a non-linear activation function such as ReLU or sigmoid, \(A\) the adjacency matrix, and \(\Theta^{(l)}\) the weight matrix in the \(l\)-th layer that is learnt from the data for a given task. Put simply, the graph convolution operation takes a weighted sum of the features of the neighbors of a node and applies a non-linear activation function to produce the updated features for the node. This operation is repeated for each layer, allowing the model to learn increasingly complex representations of the graph structure and node features. The final output of a GCN is typically obtained by applying a linear layer to the features of the nodes in the final layer. Finally, the parameters of the GNN are learned by minimizing a loss function suited to the task, i.e., regression or classification.
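As an illustrative sketch, a single GCN layer implementing equation 1 can be written as follows; the usual self-loops and degree normalization of \(A\) are omitted here.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One propagation step H^{(l+1)} = sigma(A H^{(l)} Theta^{(l)})."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out, bias=False)  # Theta^{(l)}

    def forward(self, H, A):       # H: (N, d_in) node features, A: (N, N) adjacency
        return torch.relu(self.theta(A @ H))  # aggregate neighbors, then transform
```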
**Knowledge Distillation** Knowledge distillation (KD) [11] refers to transferring knowledge from a large/sophisticated pre-trained neural network (known as the _teacher_ network) to a smaller network (known as the _student_ network). The student represents a light-weight model derived from the teacher while enforcing its performance to be similar to that of the teacher. A distillation loss is used during training to guide the student to replicate the teacher's behavior as closely as possible. Different types of knowledge can be transferred, but the most straightforward one is response-based KD, which refers to the response of the output layer of the teacher. A widely used example of this is the class probability, called _soft targets_, defined using a softmax function as
\[p(z_{i},T)=\exp(z_{i}/T)/\sum_{j}\,\exp(z_{j}/T), \tag{2}\]
where \(p(z_{i},T)\) is the probability of belonging to class \(i\) and \(z\) is the vector of logits (the outputs of the last layer of the teacher for a given input). The temperature \(T\) controls the contribution of each soft target to the knowledge. When \(T\) is equal to 1, we get the standard softmax function, but as \(T\) increases, the probability distribution is softened. The distillation loss can be seen as comparing the class probabilities obtained from the teacher and the student. It enforces the distribution of the outputs produced by the student to be close to that of the teacher. The Kullback-Leibler (KL) divergence is therefore often used as the distillation loss function, and minimizing this loss during training brings the logits of the student closer to the logits of the teacher [10]. Let \(z_{t}\) and \(z_{s}\) denote the representations produced by the teacher and student models, respectively, for the same input. Then, the final loss function used to train the student is a weighted average of two terms and is defined as
\[L_{S}=(1-\delta)L_{D}(p(z_{t},T),p(z_{s},T))+\,\delta L_{CE}(y,p(z_{s},1)), \tag{3}\]
where \(L_{D}\) is the distillation loss function, \(p(z_{t},T)\) are the teacher soft targets, \(p(z_{s},T)\) are the student soft targets, \(L_{CE}\) is the cross-entropy loss function, \(y\) are the ground-truth labels, and \(\delta\) is the weighting factor. The parameter \(\delta\) controls the balance between the teacher's knowledge and the new training data used to train the student: the higher \(\delta\), the less the model relies on the teacher. We shall use KD as part of our approach later in Section 3.
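For illustration, a minimal PyTorch sketch of the loss in Eq. (3) follows; the function name and default hyperparameter values are assumptions for demonstration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=5.0, delta=0.8):
    """Student objective of Eq. (3): (1 - delta) * L_D + delta * L_CE."""
    teacher_logits = teacher_logits.detach()            # teacher is pre-trained, frozen
    log_p_s = F.log_softmax(student_logits / T, dim=1)  # student soft targets, Eq. (2)
    p_t = F.softmax(teacher_logits / T, dim=1)          # teacher soft targets
    distill = F.kl_div(log_p_s, p_t, reduction="batchmean")  # KL(p_t || p_s)
    ce = F.cross_entropy(student_logits, labels)        # hard-label term at T = 1
    return (1.0 - delta) * distill + delta * ce
```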
## 3 KD with GNNs for Seizure Detection
### Proposed Model
We first propose our approach to design a global seizure detection student GNN that works on data with reduced nodes/channels and the corresponding subgraph, obtained using KD from a teacher GNN that operates on the complete node set. Let \(D\) denote the number of nodes/channels in the full measurement. Let \(A\) denote the adjacency matrix of the graph describing the connections between the different channels. The adjacency matrix could be obtained in different ways like a correlation matrix, functional connectivity, or simply the matrix that captures the physical proximity of the electrodes on the scalp. In our paper, we use the latter.
Let \(\mathbf{x}\in\mathbb{R}^{D\times T}\) denote the input signal consisting of the recordings/measurements from all the \(D\) channels for \(T\) time samples. Let us consider a GNN with parameters \(\theta\) and let \(z_{\theta}(\mathbf{x},A)\) denote the output of the last layer, or the logits, learnt by the GNN, where \(A\in\mathbb{R}^{D\times D}\) denotes the graph between the channels. Further, let us use subscripts \(t\) and \(s\) for the teacher and student GNNs, respectively: \(z_{\theta_{t}}(\cdot,A)\) and \(z_{\theta_{s}}(\cdot,A)\) denote the output layers of the teacher and student GNNs, respectively. The teacher network is learnt by minimizing the binary cross-entropy function \(BCE(\cdot,\cdot)\) between the class label \(y\) and the model prediction \(z_{\theta_{t}}(\mathbf{x},A)\)
\[\mathcal{L}_{CE}(\theta_{t})=\mathbb{E}_{\mathbf{x}}\left(BCE(y,z_{\theta_{t }}(\mathbf{x},A))\right), \tag{4}\]
with respect to \(\theta_{t}\), where \(\mathbb{E}\) denotes the expected value obtained by averaging over all training samples \(\mathbf{x}\). We use the BCE function since we consider here only the seizure versus non-seizure classification problem. In order to train the student GNN from the pre-trained teacher, we minimize a regularized BCE cost, where the regularization term is given by the distillation loss that minimizes the KL divergence between the soft-output of the teacher and student GNNs:
\[\mathcal{L}_{D}(\theta_{t}^{*},\theta_{s})=\mathbb{E}_{\mathbf{x}}\left(KL\left(p(z_{\theta_{t}^{*}}(\mathbf{x},A),T),\,p(z_{\theta_{s}}(\mathbf{x},A),T)\right)\right), \tag{5}\]
where \(\theta_{t}^{*}\) denotes the parameters of the pre-trained teacher. Then, the student network is trained by minimizing the total loss function:
\[L_{S}(\theta_{s})\triangleq(1-\delta)\mathcal{L}_{D}(\theta_{t}^{*},\theta_{s})+\delta\,\mathcal{L}_{CE}(\theta_{s}). \tag{6}\]
Our formulation so far uses the same input for both the student and the teacher, and hence, the same number of input channels. This is because the KD formulation assumes that the inputs to the student and the teacher have the same form, as we discussed in the Preliminaries. However, our ultimate goal is to transfer knowledge to a student that uses the measurements from a reduced set of nodes/channels \(\mathbf{x}^{d}\) with \(d<D\), and not \(\mathbf{x}\). In other words, we wish to train a student model that works on a subgraph \(A^{\prime}\) of the original graph \(A\). We achieve this by modifying the graph used by the student, deleting edges from the full graph with adjacency matrix \(A\) as follows:
\[A^{\prime}=W^{\top}\,A\,W, \tag{7}\]
where \(W\in\mathbb{R}^{D\times d}\) denotes the selection matrix, a row permutation of the matrix obtained by stacking the \(d\times d\) identity matrix on top of an all-zero matrix of size \((D-d)\times d\); it retains only the subgraph on a \(d\)-sized subset of the channels.3 The input \(\mathbf{x}_{d}\) is then given by \(\mathbf{x}_{d}=W^{\top}\mathbf{x}\in\mathbb{R}^{d\times T}\), corresponding to the nodes of the subgraph defined by \(W\). This in turn means that we must use \(z_{\theta_{s}}(\mathbf{x}_{d},A^{\prime})\) and not \(z_{\theta_{s}}(\mathbf{x},A)\) in the total loss function in (6). Further, so that the hidden nodes corresponding to the deleted channels are not pooled in the GNN, we multiply the output of each hidden layer of the GNN by \(W\) as well. In practice, the student GNN working on \(D\) nodes can thus be fed zeroes on the discarded channels at test time, corresponding to having only the reduced set of measurement channels as input for seizure detection. We note that, while the specific application setting used in this work is that of scalp EEG channels, our proposed approach can also be applied to other multi-channel settings such as fMRI, where there is knowledge of connectivity across channels/measurements. The use of GNNs also makes our approach inherently interpretable in terms of connectivity of the brain regions.
Footnote 3: In general, \(A^{\prime}\) may not necessarily be a connected graph, unless specifically regularized to be so.
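A small NumPy sketch of the selection in Eq. (7); the channel indices standing in for T3, T4, T5, and T6 are hypothetical placeholders, since the montage ordering is not specified here.

```python
import numpy as np

def selection_matrix(keep, D):
    """W in {0,1}^{D x d}: column j selects channel keep[j], cf. Eq. (7)."""
    W = np.zeros((D, len(keep)))
    W[keep, np.arange(len(keep))] = 1.0
    return W

D = 19
keep = [12, 13, 16, 17]          # hypothetical indices of T3, T4, T5, T6
W = selection_matrix(keep, D)

A = np.random.rand(D, D); A = (A + A.T) / 2   # stand-in adjacency matrix
A_sub = W.T @ A @ W              # A' = W^T A W, the 4 x 4 subgraph
x = np.random.randn(D, 250)      # D channels x T time samples
x_d = W.T @ x                    # reduced input, shape (4, 250)
```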
We consider three different instances of our model in this work: (a) **G**lobal **S**tudent GNN with **P**re-**S**elected channel reduction (GS-PS) model, (b) **G**lobal **S**tudent GNN with **d**ata-**d**riven channel reduction (GS-DD) model, and (c) **P**ersonalized **S**tudent with **D**ata-**D**riven channel reduction (PS-DD) model. We describe them next.
### GS-PS Model
We first consider the case when the reduced electrodes are pre-selected, or known already. In particular, we chose the four channels T3, T4, T5, and T6 out of the 19 channels of the 10-20 montage [14] as the reduced electrode set. This is motivated by input from neuroscientists indicating that these temporal channels can be relatively more indicative of seizures in general [9]. In this case, the \(W\) matrix from Eq. (7) corresponds to a diagonal matrix with ones only at the indices corresponding to T3, T4, T5, and T6. We also validate this choice of channels through the following experiment: a new model with the same architecture as the teacher (keeping the full electrode channels) is trained to learn relevance weights \(w\) for each electrode. This was simply achieved by applying a learnable diagonal matrix \(M\in\mathbb{R}^{D\times D}\) to the input before the GNN, such that the effective input to the GNN was \(\mathbf{x}_{M}^{\prime}=M\mathbf{x}\in\mathbb{R}^{D\times T}\). We noticed that the weights assigned to the temporal and some of the occipital electrodes were the highest; in particular, T2, T3, T4, and T5 were given large weights. A more practical reason for the choice of temporal channels is the development of wearable sensors: many state-of-the-art wearable sensors are of the behind-the-ear type, corresponding to these four temporal channels [9, 28]. We apply the proposed GS-PS model for seizure detection by training it on the data from the training patients and applying it to detect seizures on new test patients. In this case, the subgraph is pre-determined.
### GS-DD Model
We next consider the case of learning a student with channel reduction achieved in a completely data-driven manner. We propose to use a Gumbel-softmax (GS) channel selection block akin to the approach pursued in [29]. Our proposed GS-DD model consists of two connected blocks: first, the GS block that selects the subset of channels/electrodes, followed by the GNN block that produces a label, as shown in Figure 1. The details of the GS block are given next.
The Gumbel-Softmax EEG channel selection block was first proposed by Strypsteen and Bertrand [29], where channel selection was achieved through a learnable layer in the Deep Neural Network (DNN) in an end-to-end differentiable manner. The Gumbel-softmax layer represents a relaxation of the discrete selection operation that allows for differentiation [29, 13, 18]. Let \(x_{n}\) denote the feature vector derived from channel \(n\), and \(x_{new_{i}}\) the \(i\)th channel in the reduced set of channels. During training, the output of each selection neuron \(k\) is given by \(x_{new_{k}}=w_{k}^{T}X\), with \(w_{k}\) sampled from the concrete distribution given by [18]:
\[w_{nk}=\frac{\exp((\log\alpha_{nk}+G_{nk})/\beta)}{\sum_{j=1}^{N}\exp((\log\alpha_{jk}+G_{jk})/\beta)}, \tag{8}\]
with \(G_{nk}\) independent and identically distributed samples from the Gumbel distribution and \(\beta\in(0,+\infty)\) the temperature parameter of the concrete distribution. The effective subset of input node features is computed as \(X_{new}=w^{T}X\). The temperature parameter \(\beta\) controls the extent of this relaxation from the one-hot selection: as \(\beta\) approaches 0, the distribution becomes more discrete and the sampled weights converge to one-hot vectors. The continuous relaxation allows \(w\) to be jointly optimized with the model parameters, and to match the channel selection to the target model. The most pertinent EEG channels are thereby selected without prior expert knowledge or the need for manual feature selection. The learnable parameters \(\alpha\) of this distribution are jointly optimized with the other network weights. At the end of training, the selection layer is made to select discrete channels by hard-thresholding the entries of \(w_{k}\) so that they select only \(K\) channels as \(w_{nk}=\begin{cases}1&\text{if }n=\arg\max_{j}\alpha_{jk}^{*}\\ 0&\text{otherwise},\end{cases}\) where \(\alpha^{*}\) is the learned matrix after training. We note that during test time, the GS block takes the
form of a fixed linear matrix multiplication \(W\) that acts to select the electrode channels. We also note that, unlike the pre-selected case presented in Section 3.2, the GS-DD model learns a _data-driven subgraph_.
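A minimal sketch of the GS selection block of Figure 1, following Eq. (8); the class and variable names are ours, and initialization details are assumptions rather than the exact implementation of [29].

```python
import torch
import torch.nn as nn

class GumbelChannelSelect(nn.Module):
    """GS block: samples W from the concrete distribution of Eq. (8)."""
    def __init__(self, n_channels, k_keep):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_channels, k_keep))

    def forward(self, X, beta):
        # X: (N, F) per-channel features; beta: concrete-distribution temperature.
        if self.training:
            u = torch.rand_like(self.log_alpha)
            gumbel = -torch.log(-torch.log(u))   # G_{nk} ~ Gumbel(0, 1)
            W = torch.softmax((self.log_alpha + gumbel) / beta, dim=0)
        else:
            # Test time: hard one-hot selection by argmax over the learned alphas.
            W = torch.zeros_like(self.log_alpha)
            W[self.log_alpha.argmax(dim=0), torch.arange(W.shape[1])] = 1.0
        return W.T @ X, W                        # X_new = W^T X
```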
In order to obtain a data-driven channel selection, we use the Gumbel softmax channel selection block as part of our GNN pipeline shown in Figure 1. In particular, we apply the GNN on the reduced subgraph obtained by selecting only a subset of input EEG channel signals \(X_{new}\), using the adjacency matrix \(A_{new}\) corresponding to the selected channels. As discussed above, the GS block is parameterized by a learnable matrix \(\alpha\in\mathbb{R}^{N\times K}\), where \(N\) is the total number of electrodes, and \(K\) is the number of electrodes we wish to keep after reduction. When fed a sample \(X\), the selection block samples a weight matrix \(W\in\mathbb{R}^{N\times K}\) from the concrete distribution following Equation (8). This can be viewed as a softmax operation, which produces a weight matrix whose columns sum to one as continuous relaxations of one-hot vectors. In our experiments, we use a similar method as in [29]. During training, we set \(\beta(t)=\beta_{s}(\beta_{e}/\beta_{s})^{t/B}\), decreasing in an exponential manner, where \(B\) is the total number of training epochs, \(\beta(t)\) is the temperature parameter at epoch \(t\), and \(\beta_{s}\) and \(\beta_{e}\) are the starting and ending temperatures, respectively. In our settings, \(\beta_{s}=100\) and \(\beta_{e}=0.001\). As noted before, while the complete set of electrodes is indeed used during training of the student GNN, this is not the case at test time, as the \(W\) matrix will be set to ones and zeros, thereby not requiring any measurements from the non-selected electrodes.
**Channel consolidation** We note that, though we force the weight matrix to select a reduced set of channels, a given channel may be chosen multiple times, since we have not actively enforced that there is no duplication. In order to discourage duplicate channels, we minimize the total loss regularized with the penalty given by [29]: \(\Omega(P)=\lambda\sum_{n=1}^{N}\text{ReLU}(\sum_{k=1}^{K}p_{nk}-\tau)\), where \(\text{ReLU}(\cdot)\) is the rectified linear unit, \(\lambda\) is the weight of the regularization loss, and \(\tau\) the threshold parameter. During training, we set \(\tau(t)=\tau_{s}(\tau_{e}/\tau_{s})^{t/B}\), decreasing in an exponential manner, with \(\tau_{s}=3\) and \(\tau_{e}=1.1\) in our settings. \(\lambda\) is set to 5 to control the strength of the regularization.
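The annealing schedules for \(\beta\) and \(\tau\) and the duplication penalty \(\Omega(P)\) can be sketched as follows; the helper names and the loop are purely illustrative.

```python
import torch

def anneal(start, end, t, B):
    """Exponential schedule start * (end / start)^(t / B), used for beta and tau."""
    return start * (end / start) ** (t / B)

def duplication_penalty(W, tau, lam=5.0):
    """Omega(P) = lam * sum_n ReLU(sum_k p_nk - tau): discourages duplicate channels."""
    return lam * torch.relu(W.sum(dim=1) - tau).sum()

B = 100                                   # total training epochs
for t in range(B):
    beta_t = anneal(100.0, 0.001, t, B)   # beta_s = 100 -> beta_e = 0.001
    tau_t = anneal(3.0, 1.1, t, B)        # tau_s = 3 -> tau_e = 1.1
```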
Then, we learn the GS-DD model with the regularized student loss, to obtain a seizure detection model that is global and applicable to any patient.
Figure 1: Proposed approach
### PS-DD model
Epileptic seizures vary significantly between individuals, and personalized models could be beneficial in taking into account their unique patterns and characteristics. This motivates us to extend our previous model to a personalized setting for simultaneous electrode reduction and seizure detection for every single patient. As with the GS-PS and GS-DD models proposed in Sections 3.2 and 3.3, our aim here is to arrive at light-weight models for seizure detection that use only a subset of electrode channels using KD, but personalized to the patient. As with the GS-DD model, we let the channels be selected in a data-driven manner. Our hypothesis is that _both knowledge distillation and personalized models have an important role_ to play in improving the seizure detection performance, _particularly in the cases when the available data is scarce_. The PS-DD model is in essence the same as the GS-DD model in architecture, with the crucial difference that the model is now trained in a patient-specific manner. This means that the PS-DD model also learns a _data-driven subgraph for every patient_.
## 4 Numerical Experiments
### Settings
**Dataset** We apply our models to the task of seizure detection on data from the Temple University Hospital EEG Seizure Data Corpus (TUSZ) [20], which is one of the most popular and diverse datasets, consisting of over 100 patients of a wide range of ages (8-80) with different seizure types, e.g., focal seizure, tonic seizure, generalized non-specific seizure, and myoclonic seizure, recorded over long durations. The data is in the form of 19-channel EEG recordings in the 10-20 electrode placement system. As our work deals with the problem of seizure detection, no distinction is made between seizure types and all seizures were grouped into one class, resulting in a binary classification problem. The selected seizure (ictal) segments ranged between 5 and 60 seconds in length. Corresponding interictal segments of the same length were selected that ended one minute before seizure onset, following the methodology pursued in [9]. This resulted in a balanced dataset of 50% seizure and 50% non-seizure segments. The segments are taken sequentially without overlap, and every segment is then divided into five-second windows for both classes. The TUH dataset has two separate sets of recordings, train and dev, which correspond to different sets of patients for training and testing, respectively. Similarly to the literature, we use only the patients from train for training models and the patients from dev for testing, on which the performance is reported. Finally, we have a total of 14382 samples for training and 4529 samples for testing, each sample being a 5-second window of multi-channel EEG signal.
**Data preprocessing** As is customary in EEG signal processing, each sample is filtered with a Butterworth bandpass filter of order 5 between 0.5 and 50 Hz to remove artifacts and noise. Similarly to [25], the features were
calculated for each EEG channel: energy of the signal filtered in frequency bands (from 0.5 Hz to 30 Hz with a bandwidth of 3 Hz and from 30 Hz to 50 Hz with a bandwidth of 10 Hz), Hjorth complexity and mobility, decorrelation time, L2-norm of the approximation and detail coefficients obtained from a six-level wavelet decomposition using a Daubechies db4 wavelet, log amplitudes of the non-negative frequency components. This results in 647 features in total for each sample/window. The features are then normalized component-wise and taken as input \(\mathbf{x}\) to the GNN along with the distance based adjacency matrix.
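A sketch of the filtering and the first feature group (sub-band energies) using SciPy; the sampling rate is an assumption, and the remaining features (Hjorth parameters, wavelet norms, log spectra) are omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed sampling rate, for illustration only

def bandpass(x, lo, hi):
    """Order-5 Butterworth bandpass, applied along the time axis."""
    sos = butter(5, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def band_energies(x):
    """Energies of the signal filtered in 3 Hz sub-bands between 0.5 and 30 Hz."""
    edges = np.arange(0.5, 30.0, 3.0)
    return np.stack(
        [(bandpass(x, lo, lo + 3.0) ** 2).sum(axis=-1) for lo in edges[:-1]],
        axis=-1,
    )

x = bandpass(np.random.randn(19, 5 * FS), 0.5, 50.0)  # one denoised 5 s window
feats = band_energies(x)                              # (19 channels, n_bands)
```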
**Training data** In order to train the teacher, no distinction is made between patients or segments and the entire training data is used to train the teacher. All the samples from all the test patients are used as test data. For the training the global models of GS-PS, and GS-DD, we use the data of all training patients during training and data from all test patients for testing. On the other hand, since the PS-DD model is trained for each patient separately, the training and test data segments are obtained by splitting the _segments_ of the given patient randomly. Further, in order to understand the effect of personalization, we divide the patients into three bands based on the amount of data segments they possess as shown in Table 1.
**Model Training** We use a two-layer GCN network with 32 hidden nodes in each hidden layer as the teacher model. It is trained with a batch size of 64 and a learning rate of \(10^{-5}\). The student network in all three cases of GS-PS, GS-DD, and PS-DD is a light-weight model with just a one-layer GCN and a single hidden node. We note that the number of parameters to learn in the student is _just_ 3% of that of the teacher. Each of the three models is trained and tested both with and without KD in order to determine the contribution of the teacher's knowledge. As described in Equation (6), the KL divergence loss is used as the distillation loss function and the binary cross-entropy loss is used as the student loss function. The hyperparameters in the total loss are obtained by performing 5-fold cross-validation. We set \(T=5\), and the \(\delta\) values are set to \(0.1,0.5,0.8\). For GS-DD, we consider the case of \(K=4\) channels to compare the performance with that of GS-PS using the four temporal channels. For the PS-DD model, we use \(K=2\) electrodes for every patient.
**Evaluation metrics** Following [5, 2], we evaluate the performance of the three models using two standard metrics: f1-score and the Area Under the curve of the Receiver Operating Characteristic (AUROC), which are used regularly in evaluating seizure detection. In all the cases, the performance is averaged over the different test patients.
| **Data Bands** | **Number of Segments \(N\)** | **Batch Size** | **Epochs** |
| --- | --- | --- | --- |
| Rare Data | \(4\leq N<20\) | 2 | 20 |
| Mid Data | \(20\leq N<100\) | 16 | 100 |
| Rich Data | \(N\geq 100\) | 64 | 100 |

Table 1: Three bands of patients.
### Detection Performance results
We now report the performance of the different approaches.
**GS-PS model** The performance of the teacher and the global student with the pre-selected temporal channels is presented in Table 2. For the pre-selected student, we observe that KD significantly improves the performance in terms of the f1-score, which becomes comparable to that of the teacher.
**GS-DD model** Unlike in the temporal channel pre-selection case, we see that the performance remains relatively constant across the different levels of KD. This is probably because the GS selection already results in a high performance, and the teacher does not offer a notable improvement.
**PS-DD model** In the case of a personalized student GNN with only two electrodes (which we call PS-DD 2), we observe that the performance improves as \(\delta\) is increased, meaning more emphasis is given to the patient's data over the teacher's knowledge, with the highest performance obtained at \(\delta=0.8\). On the other hand, completely relying on the patient's data and not using the teacher (\(\delta=1\)) reduces the performance. Further, we note that the performance of the student even without the teacher's knowledge (\(\delta=1\)) is generally much better than that of the teacher or the global student. This in turn supports our intuition and hypothesis that personalization also plays a significant role in improving seizure detection performance. In the two plots in Figure 2, we depict the distributions of test f1 and AUROC over all test patients, with and without KD, for the PS-DD model. The averaged performances are indicated in numbers in the figures. The dashed red/green lines show the general performance of models without personalization. When trained on the general population, the test f1 of models with and without KD is 0.7 and 0.4, respectively. After personalization, the average test f1 is improved by 16% and 50%, respectively, to around 0.8. This shows that by tackling the diversity in EEG seizure data on a large population, personalization has the potential to improve seizure detection. The average test AUROC is improved by 8% to above 0.8. The detailed results are reported in Table 2. However, the average performance with KD is only slightly higher than without KD on both metrics. This in turn motivated us to look into the performance in the three data bands individually.
### Performance analysis
To better understand the effectiveness of our models, we perform a detailed performance analysis by further dividing patients into three bands based on the number of seizure segments (rare-data, mid-data, and rich-data bands), as shown in Table 1, and examine the performance in each. In Table 3, we report the seizure detection results when the model training relies on the new patient data to different levels, given by \(\delta=0.1\), \(\delta=0.5\) and \(\delta=0.8\), respectively, in (6). The setting of \(\delta=0.8\) corresponds to the case where the student training relies more heavily on unseen patient-specific data than on the teacher. Figure 3 shows the differences in the percentages of cases in each band where KD boosted the model performance (in terms of test f1 and test AUROC). Overall, KD helps 72% (47 out of 65) of patients in the rare-data band improve their test performance, but only 49% (26 out of 53) of patients in the mid-data band and 54% (18 out of 33) of patients in the rich-data band benefit from the teacher. In general, we observe the tendency that patients with scarce data benefit the most from KD. This gives us the motivation to further delve into the rare-data band case.
In the rare-data band, we notice that we consistently encounter four patients with the lowest performance, who bias the overall performance significantly. It turns out that these four cases correspond to the patients with the least training data. We refer to these cases as the four "extremes" in our experiments. Since the TUSZ dataset is rather diverse and we wish to see the averaged performance without a strong bias, we chose to exclude the extremes and recompute the performance metrics. We notice from Table 4 that the performance improves overall by excluding the extremes, and the best performance is obtained when \(\delta=0.8\). This indicates that the effectiveness of KD in personalized settings varies widely with the amount of data each patient possesses, potentially across patient types (since the dataset includes different types of seizures that we do not currently account for), and also with the weight of the student loss \(\delta\). In our experiments, \(\delta=0.8\) gave the best scores on average. A more exhaustive approach would be to compute personalized models with personalized \(\delta\), but that is beyond the scope of the current work.

| **Model** | **Personalization** | **w/o KD** | **w/ KD, \(\delta=0.1\)** | **w/ KD, \(\delta=0.5\)** | **w/ KD, \(\delta=0.8\)** |
| --- | --- | --- | --- | --- | --- |
| Teacher | – | 0.689 / 0.781 | – | – | – |
| GS-PS | \(\times\) | 0.401 / 0.755 | 0.683 / 0.766 | – | – |
| GS-DD | \(\times\) | 0.690 / 0.763 | 0.695 / 0.761 | 0.697 / 0.763 | 0.693 / 0.764 |
| PS-DD | \(\checkmark\) | 0.788 / 0.814 | 0.755 / 0.777 | 0.784 / 0.829 | **0.795 / 0.829** |

Table 2: Test Results with Different Models (each cell shows f1 / AUROC).
**Effectiveness of Knowledge Distillation when lacking informative channels/signals** To further test the effectiveness of both personalization and KD, we arbitrarily select to keep only signals from channels FP1 and FP2, which belong to the frontal region, suggested to be among the less informative regions for epileptic seizure detection. The Gumbel-Softmax channel selection block is not involved in this experiment. The experiment is conducted on the rare-data band, with the hypothesis that the combination of personalization and KD can help compensate for the adverse situation brought by (a) lack of data, and (b) lack of informative channels. With only personalization but no KD, the test f1 and AUROC scores of 53.8% (35 out of 65) of patients still exceed 0.65, yielding fairly good performance. In the remaining, less favorable personalized situations, 90% (27 out of 30) of patients benefit from the teacher. Even with the allegedly least informative channels, we thus obtain rather promising results in 53.8% of the cases; for the rest, the integrated application of personalization and KD proves effective for detecting epileptic seizures. We thus see that the combination leverages the strengths of both techniques to provide highly accurate results in scarce-data scenarios.
| **Data Bands** | **Personalization** | **w/o KD** | **w/ KD, \(\delta=0.1\)** | **w/ KD, \(\delta=0.5\)** | **w/ KD, \(\delta=0.8\)** |
| --- | --- | --- | --- | --- | --- |
| Rare Data | \(\checkmark\) | 0.786 / 0.791 | 0.783 / 0.783 | 0.790\({}^{*}\) / 0.816\({}^{*}\) | **0.798\({}^{*}\) / 0.827\({}^{*}\)** |
| Mid Data | \(\checkmark\) | **0.791 / 0.837** | 0.726 / 0.756 | 0.774 / 0.819 | 0.786 / 0.833 |
| Rich Data | \(\checkmark\) | 0.790 / 0.821 | 0.749 / 0.800 | 0.786 / **0.829** | **0.801\({}^{*}\)** / 0.828\({}^{*}\) |

Table 3: PS-DD Test Results on Different Bands (each cell shows f1 / AUROC).
Figure 4: PS-DD Test Results (\(\delta=0.8\)) on the Rare-Data Band. The right column shows the results with the 4 patients with extremely sparse data and poor performance excluded (Patient IDs: "00005672", "00008706", "000006535", "00004596").
**Hierarchical Clustering of Patients** We now investigate whether the patients naturally form clusters when the learnt electrode channels are used to group them. We apply hierarchical clustering to the learnt selection matrices \(W\). Hierarchical clustering is a method of cluster analysis that builds a hierarchy of clusters by successively splitting or merging smaller clusters based on the Euclidean distance between clusters. The result is shown as a dendrogram depicting the hierarchy of clusters in Figure 5. We observe that no clearly significant clusters emerge except for one large cluster and outliers, which could be because the patients and seizure signals in the TUSZ dataset are quite diverse. We also note that we have made no distinction between seizure types (about six types, with widely varying numbers of samples per type) in our analysis, which might explain the single big cluster. While some of the outliers corresponded to patients with a rare disease (Rasmussen's syndrome), it is unclear whether the outliers show specific signature characteristics that separate them clinically from the main cluster. Further, we see that the main cluster diameter is relatively large, indicating significant variability in the selected channels across the different patients. In future work, we plan to pursue alternative clustering strategies and features, and also to mitigate the diversity by, for example, filtering out only the focal seizure signals.
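A minimal SciPy sketch of this clustering step; the per-patient selection matrices below are random stand-ins for the learned \(W\) matrices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

n_patients, D, K = 151, 19, 2
Ws = np.random.rand(n_patients, D * K)   # flattened per-patient selection matrices

Z = linkage(Ws, method="ward", metric="euclidean")
dendrogram(Z, no_plot=True)  # set no_plot=False to render a Figure 5-style dendrogram
```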
| \(\delta\) | **Personalization** | **Extremes excluded** | **Extremes included** |
| --- | --- | --- | --- |
| 0.1 | \(\checkmark\) | 0.794 / 0.806 | 0.783 / 0.783 |
| 0.5 | \(\checkmark\) | 0.813 / 0.860 | 0.790 / 0.816 |
| 0.8 | \(\checkmark\) | **0.832 / 0.871** | 0.798 / 0.827 |
| 1 (no KD) | \(\checkmark\) | 0.816 / 0.833 | 0.786 / 0.791 |

Table 4: PS-DD Test Results on the Rare-Data Band (each cell shows f1 / AUROC).
Figure 5: Hierarchical clustering of patients based on their Gumbel-Softmax channel selection patterns
## 5 Conclusions and future work
We proposed an approach to transfer the knowledge from a pre-trained GNN-based seizure detection model to the case where the number of measurement electrodes is reduced. We showed that it is possible to obtain models that are (i) light-weight (requiring just 3% of the parameters of the sophisticated network), and (ii) work on reduced electrodes (requiring as low as only 10% of the original electrodes), yet offer superior performance in seizure detection, particularly in the personalized setting. The approach resulted in a patient-specific choice of the reduced set of electrodes. Our experiments demonstrated the merit of both knowledge distillation and personalization, particularly when dealing with patients with scarce data. We observe that there is a trade-off between the use of prior information (teacher) and patient-specific data: although teacher knowledge is necessary, the relative importance should be higher on the patient-specific data for maximum performance. We believe that these results show that our approach can provide meaningful insights and guidelines in the practical setting where there is a need to move from full scalp electrode measurements to reduced form-factor measurements, such as personalized wearable devices. We have currently restricted our analysis to a relatively simple GNN teacher model and used the graph given by the physical placement of electrodes. The quality of the teacher and of the graph used both translate into the quality of the student model, and hence, we believe that a more sophisticated GNN could be employed to further improve overall performance. In the future, it would also be interesting to look into multi-class seizure classification and identify the different types of seizures.
2305.16102 | Demystifying Oversmoothing in Attention-Based Graph Neural Networks | Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where
increasing network depth leads to homogeneous node representations. While
previous work has established that Graph Convolutional Networks (GCNs)
exponentially lose expressive power, it remains controversial whether the graph
attention mechanism can mitigate oversmoothing. In this work, we provide a
definitive answer to this question through a rigorous mathematical analysis, by
viewing attention-based GNNs as nonlinear time-varying dynamical systems and
incorporating tools and techniques from the theory of products of inhomogeneous
matrices and the joint spectral radius. We establish that, contrary to popular
belief, the graph attention mechanism cannot prevent oversmoothing and loses
expressive power exponentially. The proposed framework extends the existing
results on oversmoothing for symmetric GCNs to a significantly broader class of
GNN models, including random walk GCNs, Graph Attention Networks (GATs) and
(graph) transformers. In particular, our analysis accounts for asymmetric,
state-dependent and time-varying aggregation operators and a wide range of
common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU. | Xinyi Wu, Amir Ajorlou, Zihui Wu, Ali Jadbabaie | 2023-05-25T14:31:59Z | http://arxiv.org/abs/2305.16102v4 | # Demystifying Oversmoothing in
###### Abstract
Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where increasing network depth leads to homogeneous node representations. While previous work has established that Graph Convolutional Networks (GCNs) exponentially lose expressive power, it remains controversial whether the graph attention mechanism can mitigate oversmoothing. In this work, we provide a definitive answer to this question through a rigorous mathematical analysis, by viewing attention-based GNNs as nonlinear time-varying dynamical systems and incorporating tools and techniques from the theory of products of inhomogeneous matrices and the joint spectral radius. We establish that, contrary to popular belief, the graph attention mechanism cannot prevent oversmoothing and loses expressive power exponentially. The proposed framework extends the existing results on oversmoothing for symmetric GCNs to a significantly broader class of GNN models. In particular, our analysis accounts for asymmetric, state-dependent and time-varying aggregation operators and a wide range of common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU.
## 1 Introduction
Graph neural networks (GNNs) have emerged as a powerful framework for learning with graph-structured data [4; 8; 9; 13; 19; 32; 36] and have shown great promise in diverse domains such as molecular biology [43], physics [1] and recommender systems [38]. Most GNN models follow the _message-passing_ paradigm [12], where the representation of each node is computed by recursively aggregating and transforming the representations of its neighboring nodes.
One notable drawback of repeated message-passing is _oversmoothing_, which refers to the phenomenon that stacking message-passing GNN layers makes node representations of the same connected component converge to the same vector [5; 18; 19; 24; 26; 31; 39]. As a result, whereas depth has been considered crucial for the success of deep learning in many fields such as computer vision [16], most GNNs used in practice remain relatively shallow and often only have few layers [19; 36; 40]. On the theory side, while previous works have shown that the symmetric Graph Convolution Networks (GCNs) with ReLU and LeakyReLU nonlinearities exponentially lose expressive power, analyzing the oversmoothing phenomenon in other types of GNNs is still an open question [5; 26]. In particular, the question of whether the graph attention mechanism can prevent oversmoothing has not been settled yet. Motivated by the capacity of graph attention to distinguish the importance of different edges in the graph, some works claim that oversmoothing is alleviated in Graph Attention Networks (GATs), heuristically crediting to GATs' ability to learn adaptive node-wise aggregation operators via the attention mechanism [25]. On the other hand, it has been empirically observed that similar to the case of GCNs, oversmoothing seems inevitable for GATs [30; 31].
In this paper, we provide a definitive answer to this question -- attention-based GNNs also lose expressive power exponentially, albeit potentially at a slower exponential rate compared to GCNs. Given that attention-based GNNs can be viewed as nonlinear time-varying dynamical systems, our analysis is built on the theory of products of inhomogeneous matrices [14; 31] and the concept of joint spectral radius [29], as these methods have been long proved effective in the analysis of time-inhomogeneous markov chains and ergodicity of dynamical systems [2; 14; 33]. While classical results only apply to generic one-dimensional linear time-varying systems, we address four major challenges arising in analyzing attention-based GNNs: (1) the aggregation operators computed by attention are state-dependent, in contrast to conventional fixed graph convolutions; (2) the systems are multi-dimensional, which involves the coupling across feature dimensions; (3) the dynamics are nonlinear due to the nonlinear activation function in each layer; (4) the learnable weights and aggregation operators across different layers result in time-varying dynamical systems.
**Below, we highlight our key contributions:**
* As our main contribution, we establish that oversmoothing happens exponentially as model depth increases for attention-based GNNs, resolving the long-standing debate about whether attention-based GNNs can prevent oversmoothing.
* We analyze attention-based GNNs through the lens of nonlinear, time-varying dynamical systems. The strength of our analysis stems from its ability to exploit the inherently common connectivity structure among the typically asymmetric state-dependent aggregation operators at different attentional layers. This enables us to derive rigorous theoretical results on the ergodicity of infinite products of matrices associated with the evolution of node representations across layers. Incorporating results from the theory of products of inhomogeneous matrices and their joint spectral radius, we then establish that oversmoothing happens at an exponential rate for attention-based GNNs from our ergodicity results.
* Our analysis generalizes the existing results on oversmoothing for symmetric GCNs to a significantly broader class of GNN models with asymmetric, state-dependent and time-varying aggregation operators and nonlinear activation functions under general conditions. In particular, our analysis can accommodate a wide range of common nonlinearities such as ReLU, LeakyReLU, and even non-monotone ones like GELU and SiLU. We validate our theoretical results on three real-world datasets with two attention-based GNN architectures and five common nonlinearities.
## 2 Related Work
**Oversmoothing problem in GNNs** Oversmoothing is a well-known problem in deep GNNs, and many techniques have been proposed in order to mitigate it practically [6; 15; 20; 21; 28; 41; 44]. On the theory side, analysis of oversmoothing has only been carried out for the graph convolution case [5; 18; 26; 39]. In particular, by viewing graph convolutions as a form of Laplacian filter, prior works have shown that for GCNs, the node representations within each connected component of a graph will converge to the same value exponentially [5; 26]. However, oversmoothing is also empirically observed in attention-based GNNs such as GATs [30; 31]. Although some people hypothesize based on heuristics that attention can alleviate oversmoothing [25], a rigorous analysis of oversmoothing in attention-based GNNs remains open [5].
**Theoretical analysis of attention-based GNNs** Existing theoretical results on attention-based GNNs are limited to one-layer graph attention. Recent works in this line include Brody et al. [3] showing that the ranking of attention scores computed by a GAT layer is unconditioned on the query node, and Fountoulakis et al. [11] studying node classification performance of one-layer GATs on a random graph model. More relevantly, Wang et al. [37] made a claim that oversmoothing is asymptotically inevitable in GATs. Aside from excluding nonlinearities in the analysis, there are several flaws in the proof of their main result (Theorem 2). In particular, their analysis assumes the same stationary distribution for all the stochastic matrices output by attention at different layers. This is typically not the case given the state-dependent and time-varying nature of these matrices. In fact, the main challenge in analyzing multi-layer attention lies in the state-dependent and time-varying nature of these input-output mappings. Our paper offers novel contributions to the research on attention-based GNNs by developing a rich set of tools and techniques for analyzing multi-layer
graph attention. This addresses a notable gap in the existing theory, which has primarily focused on one-layer graph attention, and paves the way for future research to study other aspects of multi-layer graph attention.
## 3 Problem Setup
### Notations
Let \(\mathbb{R}\) be the set of real numbers and \(\mathbb{N}\) be the set of natural numbers. We use the shorthands \([n]:=\{1,\ldots,n\}\) and \(\mathbb{N}_{\geq 0}:=\mathbb{N}\cup\{0\}\). We denote the zero-vector of length \(N\) by \(\mathbf{0}\in\mathbb{R}^{N}\) and the all-one vector of length \(N\) by \(\mathbf{1}\in\mathbb{R}^{N}\). We represent an undirected graph with \(N\) nodes by \(\mathcal{G}=(A,X)\), where \(A\in\{0,1\}^{N\times N}\) is the _adjacency matrix_ and \(X\in\mathbb{R}^{N\times d}\) are the _node feature vectors_ of dimension \(d\). Let \(E(\mathcal{G})\) be the set of edges of \(\mathcal{G}\). For nodes \(i,j\in[N]\), \(A_{ij}=1\) if and only if \(i\) and \(j\) are connected with an edge in \(\mathcal{G}\), i.e., \((i,j)\in E(\mathcal{G})\). For each \(i\in[N]\), \(X_{i}\in\mathbb{R}^{d}\) represents the feature vector for node \(i\). We denote the _degree matrix_ of \(\mathcal{G}\) by \(D_{\mathrm{deg}}=\mathrm{diag}(A\mathbf{1})\) and the set of all neighbors of node \(i\) by \(\mathcal{N}_{i}\).
Let \(\|\cdot\|_{2}\), \(\|\cdot\|_{\infty}\), \(\|\cdot\|_{F}\) be the \(2\)-norm, \(\infty\)-norm and Frobenius norm, respectively. We use \(\|\cdot\|_{\max}\) to denote the matrix max norm, i.e., for a matrix \(M\in\mathbb{R}^{m\times n}\), \(\|M\|_{\max}:=\max\limits_{ij}|M_{ij}|\). We use \(\leq_{\mathrm{ew}}\) to denote element-wise inequality. Lastly, for a matrix \(M\), we denote its \(i^{th}\) row by \(M_{i\cdot}\) and its \(j^{th}\) column by \(M_{\cdot j}\).
### Graph attention mechanism
We adopt the following definition of graph attention mechanism. Given node representation vectors \(X_{i}\) and \(X_{j}\), we first apply a shared learnable linear transformation \(W\in\mathbb{R}^{d\times d^{\prime}}\) to each node, and then use an attention function \(\Psi:\mathbb{R}^{d^{\prime}}\times\mathbb{R}^{d^{\prime}}\rightarrow\mathbb{R}\) to compute a raw attention coefficient
\[e_{ij}=\Psi(W^{\top}X_{i},W^{\top}X_{j})\]
that indicates the importance of node \(j\)'s features to node \(i\). Then the graph structure is injected into the mechanism by performing masked attention, where for each node \(i\), we only compute its attention to its neighbors. To make coefficients easily comparable across different nodes, we normalize \(e_{ij}\) among all neighboring nodes \(j\) of node \(i\) using the softmax function to get the normalized attention coefficients:
\[P_{ij}=\mathrm{softmax}_{j}(e_{ij})=\frac{\exp(e_{ij})}{\sum_{k\in\mathcal{N} _{i}}\exp(e_{ik})}\,.\]
The matrix \(P\), where the entry in the \(i^{th}\) row and the \(j^{th}\) column is \(P_{ij}\), is a row stochastic matrix. We refer to \(P\) as an _aggregation operator_ in message-passing.
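As a concrete instance, the sketch below computes the masked-attention aggregation operator \(P\) using the LeakyReLU scoring function of GAT [36] as \(\Psi\); all names and the random graph are illustrative.

```python
import torch

def attention_operator(X, W, a, A):
    """Row-stochastic P from masked graph attention with a GAT-style Psi."""
    H = X @ W                                   # transformed features W^T X_i
    d = H.shape[1]
    e = torch.nn.functional.leaky_relu(
        (H @ a[:d])[:, None] + (H @ a[d:])[None, :]   # e_ij = Psi(h_i, h_j)
    )
    e = e.masked_fill(A == 0, float("-inf"))    # attend only over neighbors N_i
    return torch.softmax(e, dim=1)              # each row of P sums to one

N, d = 6, 4
A = (torch.rand(N, N) < 0.5).float()
A = ((A + A.T + torch.eye(N)) > 0).float()      # undirected graph with self-loops
P = attention_operator(torch.randn(N, d), torch.randn(d, d), torch.randn(2 * d), A)
```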
### Attention-based GNNs
Having defined the graph attention mechanism, we can now write the update rule of a single graph attentional layer as
\[X^{\prime}=\sigma(PXW)\,,\]
where \(X\) and \(X^{\prime}\) are the input and output node representations, respectively, \(\sigma(\cdot)\) is a pointwise nonlinearity function, and the aggregation operator \(P\) is a function of \(XW\).

As a result, the output of the \(t^{th}\) graph attentional layer can be written as
\[X^{(t+1)}=\sigma(P^{(t)}X^{(t)}W^{(t)})\qquad t\in\mathbb{N}_{\geq 0}, \tag{1}\]
where \(X^{(0)}=X\) denotes the input node features, \(W^{(t)}\in\mathbb{R}^{d^{\prime}\times d^{\prime}}\) for \(t\in\mathbb{N}\) and \(W^{(0)}\in\mathbb{R}^{d\times d^{\prime}}\). For the rest of this work, without loss of generality, we assume that \(d=d^{\prime}\).
The above definition is based on _single-head_ graph attention. _Multi-head_ graph attention uses \(K\in\mathbb{N}\) weight matrices \(W_{1},\ldots,W_{K}\) in each layer and averages their individual single-head outputs [11; 36]. Without loss of generality, we consider single graph attention in our analysis in Section 4, but we note that our results automatically apply to the multi-head graph attention setting since \(K\) is finite.
### Measure of oversmoothing
We adopt the following notion of oversmoothing proposed in Rusch et al. [31]:
**Definition 1**.: _For an undirected and connected graph \(\mathcal{G}\), \(\mu:\mathbb{R}^{N\times d}\to\mathbb{R}_{\geq 0}\) is called a node similarity measure if it satisfies the following axioms:_
1. \(\exists c\in\mathbb{R}^{d}\) _such that_ \(X_{i}=c\) _for all node_ \(i\) _if and only if_ \(\mu(X)=0\)_, for_ \(X\in\mathbb{R}^{N\times d}\)_;_
2. \(\mu(X+Y)\leq\mu(X)+\mu(Y)\)_, for all_ \(X,Y\in\mathbb{R}^{N\times d}\)_._
_Then oversmoothing with respect to \(\mu\) is defined as the layer-wise exponential convergence of the node-similarity measure \(\mu\) to zero, i.e. for \(t\in\mathbb{N}\), with constants \(C_{1}\), \(C_{2}>0\),_
\[\mu(X^{(t)})\leq C_{1}e^{-C_{2}t}. \tag{2}\]
We establish our results on oversmoothing for attention-based GNNs using the following node similarity measure:
\[\mu(X):=\|X-\mathbf{1}\gamma_{X}\|_{F},\text{ where }\gamma_{X}=\frac{\mathbf{1} ^{\top}X}{N}\,. \tag{3}\]
**Proposition 1**.: \(\|X-\mathbf{1}\gamma_{X}\|_{F}\) _is a node similarity measure._
The proof of the above proposition is provided in Appendix B. Other common node similarity measures include the Dirichlet energy [5; 30; 31]. Our measure is mathematically equivalent to the measure \(\inf\limits_{Y=\mathbf{1}c^{\top},c\in\mathbb{R}^{d}}\{\|X-Y\|_{F}\}\) defined in Oono and Suzuki [26], but our form is more direct to compute. One way to see the equivalence is to consider the orthogonal projection into the space perpendicular to \(\mathrm{span}\{\mathbf{1}\}\), denoted by \(B\in\mathbb{R}^{(N-1)\times N}\). Then our definition of \(\mu\) satisfies \(\|X-\mathbf{1}\gamma_{X}\|_{F}=\|BX\|_{F}\), where the latter quantity is exactly the measure defined in [26].
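The measure in (3) and its equivalence with \(\|BX\|_{F}\) can be checked numerically; building \(B\) from a complete QR decomposition of \(\mathbf{1}\) is one convenient (assumed) construction.

```python
import numpy as np

def mu(X):
    """mu(X) = ||X - 1 gamma_X||_F with gamma_X = 1^T X / N, cf. Eq. (3)."""
    return np.linalg.norm(X - X.mean(axis=0, keepdims=True), "fro")

N, d = 5, 3
X = np.random.randn(N, d)
Q, _ = np.linalg.qr(np.ones((N, 1)), mode="complete")
B = Q[:, 1:].T                       # (N-1) x N, orthonormal rows, B @ 1 = 0
assert np.allclose(mu(X), np.linalg.norm(B @ X, "fro"))
```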
### Assumptions
We make the following assumptions (in fact, quite minimal) in deriving our results:
1. The graph \(\mathcal{G}\) is connected and has a self-loop at each node.
2. The attention function \(\Psi(\cdot,\cdot)\) is continuous.
3. The sequence \(\{\|\prod_{t=0}^{k}|W^{(t)}|\|_{\max}\}_{k=0}^{\infty}\) is bounded.
4. The point-wise nonlinear activation function \(\sigma(\cdot)\) satisfies \(0\leq\frac{\sigma(x)}{x}\leq 1\) for \(x\neq 0\) and \(\sigma(0)=0\).
We note that all of these assumptions are either standard or quite general. Specifically, \(\mathbf{A1}\) is a standard assumption for theoretical analysis on graphs. For graphs with more than one connected components, the same results apply to each connected component. \(\mathbf{A1}\) can also be replaced with requiring the graph \(\mathcal{G}\) to be connected and non-bipartite. Self-loops and non-bipartiteness both ensure that long products of stochastic matrices corresponding to aggregation operators in different graph attentional layers will eventually become strictly positive.
The assumptions on the GNN architecture \(\mathbf{A2}\) and \(\mathbf{A4}\) can be easily verified for commonly used GNN designs. For example, the attention function \(\mathrm{LeakyReLU}(a^{\top}[W^{\top}X_{i}||W^{\top}X_{j}]),a\in\mathbb{R}^{2d^ {\prime}}\) used in the GAT [36], where \([\cdot||\cdot]\) denotes concatenation, is a specific case that satisfies \(\mathbf{A2}\). As for \(\mathbf{A4}\), one way to satisfy it is to have \(\sigma\) be \(1\)-Lipschitz and \(\sigma(x)\leq 0\) for \(x<0\) and \(\sigma(x)\geq 0\) for \(x>0\). Then it is easy to verify that most of the commonly used nonlinear activation functions such as ReLU, LeakyReLU, GELU, SiLU, ELU, tanh all satisfy \(\mathbf{A4}\).
Lastly, \(\mathbf{A3}\) is to ensure boundedness of the node representations' trajectories \(X^{(t)}\) for all \(t\in\mathbb{N}_{\geq 0}\). Such regularity assumptions are quite common in the asymptotic analysis of dynamical systems, as is also the case for the prior works analyzing oversmoothing in symmetric GCNs [5; 26].
## 4 Main Results
In this section, we lay out a road-map for deriving our main results, highlighting the key ideas of the proofs. The complete proofs are provided in the Appendices.
We start by discussing the dynamical system formulation of attention-based GNNs in Section 4.1. By showing the boundedness of the node representations' trajectories, we prove the existence of a common connectivity structure among aggregation operators across different graph attentional layers in Section 4.2. This implies that graph attention cannot fundamentally change the graph connectivity, a crucial property that will eventually lead to oversmoothing. In Section 4.3, we develop a framework for investigating the asymptotic behavior of attention-based GNNs by introducing the notion of ergodicity and its connections to oversmoothing. Then utilizing our result on common connectivity structure among aggregation operators, we establish ergodicity results for the systems associated with attention-based GNNs. In Section 4.4, we introduce the concept of the joint spectral radius for a set of matrices [29] and employ it to deduce exponential convergence of node representations to a common vector from our ergodicity results. Finally, we present our main result on oversmoothing in attention-based GNNs in Section 4.5 and comment on oversmoothing in GCNs in comparison with attention-based GNNs in Section 4.6.
### Attention-based GNNs as nonlinear time-varying dynamical systems
The theory of dynamical systems concerns the evolution of some state of interest over time. By viewing the model depth \(t\) as the time variable, the input-output mapping at each graph attentional layer \(X^{(t+1)}=\sigma(P^{(t)}X^{(t)}W^{(t)})\) describes a nonlinear time-varying dynamical system. The attention-based aggregation operator \(P^{(t)}\) is state-dependent as it is a function of \(X^{(t)}W^{(t)}\). Given the notion of oversmoothing defined in Section 3.4, we are interested in characterizing behavior of \(X^{(t)}\) as \(t\to\infty\).
If the activation function \(\sigma(\cdot)\) is the identity map, then repeated application of (1) gives
\[X^{(t+1)}=P^{(t)}\dots P^{(0)}XW^{(0)}\dots W^{(t)}\,.\]
The above linear form would enable us to leverage the rich literature on the asymptotic behavior of the products of inhomogeneous row-stochastic matrices (see, e.g., [14; 33]) in analyzing the long-term behavior of attention-based GNNs. Such a neat expansion, however, is not possible when dealing with a nonlinear activation function \(\sigma(\cdot)\). To find a remedy, let us start by observing that element-wise application of \(\sigma\) to a vector \(y\in\mathbb{R}^{d}\) can be written as
\[\sigma(y)=\operatorname{diag}\left(\frac{\sigma(y)}{y}\right)y\,, \tag{4}\]
where \(\operatorname{diag}\left(\frac{\sigma(y)}{y}\right)\) is a diagonal matrix with \(\sigma(y_{i})/y_{i}\) on the \(i^{th}\) diagonal entry. Defining \(\sigma(0)/0:=1\) along with the assumption \(\sigma(0)=0\) in **A4**, it is easy to check that the above identity still holds for vectors with zero entries.
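A quick numerical check of identity (4), using LeakyReLU as one activation satisfying assumption **A4**:

```python
import numpy as np

def sigma(x):
    return np.maximum(0.2 * x, x)            # LeakyReLU satisfies A4

y = np.random.randn(5)
ratio = np.where(y != 0, sigma(y) / y, 1.0)  # sigma(0)/0 := 1 by convention
assert np.allclose(sigma(y), np.diag(ratio) @ y)  # identity (4)
assert np.all((ratio >= 0) & (ratio <= 1))        # so diag(ratio) lies in D
```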
We can use (4) to write the \(i^{th}\) column of \(X^{(t+1)}\) as
\[X^{(t+1)}_{\cdot i}=\sigma(P^{(t)}(X^{(t)}W^{(t)})_{\cdot i})=D^{(t)}_{i}P^{(t )}(X^{(t)}W^{(t)})_{\cdot i}=D^{(t)}_{i}P^{(t)}\sum_{j=1}^{d}W^{(t)}_{ji}X^{(t) }_{\cdot j}\,, \tag{5}\]
where \(D^{(t)}_{i}\) is a diagonal matrix. It follows from the assumption on the nonlinearities **A4** that
\[\operatorname{diag}(\mathbf{0})\leq_{\text{ew}}D^{(t)}_{i}\leq_{\text{ew}} \operatorname{diag}(\mathbf{1})\,.\]
We define \(\mathcal{D}\) to be the set of all possible diagonal matrices \(D^{(t)}_{i}\) satisfying the above inequality:
\[\mathcal{D}:=\{\operatorname{diag}(\mathbf{d}):\mathbf{d}\in\mathbb{R}^{N}, \mathbf{0}\leq_{\text{ew}}\mathbf{d}\leq_{\text{ew}}\mathbf{1}\}.\]
Using (5) recursively, we arrive at the following formulation for \(X^{(t+1)}_{\cdot i}\):
\[X^{(t+1)}_{\cdot i}=\sum_{j_{t+1}=i,\,(j_{t},\dots,j_{0})\in[d]^{t+1}}\left( \prod_{k=0}^{t}W^{(k)}_{j_{t+1}j_{k}}\right)D^{(t)}_{j_{t+1}}P^{(t)}...D^{(0)} _{j_{1}}P^{(0)}X^{(0)}_{j_{0}}\,. \tag{6}\]
### Common connectivity structure among aggregation operators across different layers
We can use the formulation in (6) to show the boundedness of the node representations' trajectories \(X^{(t)}\) for all \(t\in\mathbb{N}_{\geq 0}\), which in turn implies the boundedness of the input to graph attention in each layer, \(X^{(t)}W^{(t)}\).
**Lemma 1**.: _Under assumptions_ **A3**_-_**A4**_, there exists \(C>0\) such that \(\|X^{(t)}\|_{\max}\leq C\) for all \(t\in\mathbb{N}_{\geq 0}\)._
For a continuous \(\Psi(\cdot,\cdot)\)1, the following lemma is a direct consequence of Lemma 1, suggesting that the graph attention mechanism cannot fundamentally change the connectivity pattern of the graph.
Footnote 1: More generally, for \(\Psi(\cdot,\cdot)\) that outputs bounded attention scores for bounded inputs.
**Lemma 2**.: _Under assumptions_ **A2**_-_**A4**_, there exists \(\epsilon>0\) such that for all \(t\in\mathbb{N}_{\geq 0}\) and for any \((i,j)\in E(\mathcal{G})\), we have \(P_{ij}^{(t)}\geq\epsilon\)._
One might argue that Lemma 2 is an artifact of the continuity of the softmax function. The softmax function is, however, often favored in attention mechanisms because of its trainability in back propagation compared to discontinuous alternatives such as hard thresholding. Besides trainability issues, it is unclear on a conceptual level whether it is reasonable to absolutely drop an edge from the graph as is the case for hard thresholding. Lemma 2 is an important step towards the main convergence result of this work, which states that all the nodes will converge to the same representation vector at an exponential rate. We define the family of row-stochastic matrices satisfying Lemma 2 below.
**Definition 2**.: _Let \(\epsilon>0\). We define \(\mathcal{P}_{\mathcal{G},\epsilon}\) to be the set of row-stochastic matrices satisfying the following conditions:_
1. \(\epsilon\leq P_{ij}\leq 1\)_, if_ \((i,j)\in E(\mathcal{G})\)_,_
2. \(P_{ij}=0\), _if_ \((i,j)\notin E(\mathcal{G})\)_._
### Ergodicity of infinite products of matrices
_Ergodicity_, in its most general form, deals with the long-term behavior of dynamical systems. The oversmoothing phenomenon in GNNs defined in the sense of (2) concerns the convergence of all rows of \(X^{(t)}\) to a common vector at an exponential rate. To this end, we define ergodicity in our analysis as the convergence of infinite matrix products to a rank-one matrix with identical rows.
**Definition 3** (Ergodicity).: _Let \(B\in\mathbb{R}^{(N-1)\times N}\) be the orthogonal projection onto the space orthogonal to \(\operatorname{span}\{\mathbf{1}\}\). A sequence of matrices \(\{M^{(n)}\}_{n=0}^{\infty}\) is ergodic if_
\[\lim_{t\to\infty}B\prod_{n=0}^{t}M^{(n)}=0\,.\]
We will take advantage of the following properties of the projection matrix \(B\) already established in Blondel et al. [2]:
1. \(B\mathbf{1}=0\);
2. \(\|Bx\|_{2}=\|x\|_{2}\) for \(x\in\mathbb{R}^{N}\) if \(x^{\top}\mathbf{1}=0\);
3. For \(M\in\mathbb{R}^{N\times N}\), there exists a unique matrix \(\tilde{M}\in\mathbb{R}^{(N-1)\times(N-1)}\) such that \(BM=\tilde{M}B\,.\)
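A minimal sketch verifying these three properties for one concrete choice of \(B\) (rows forming an orthonormal basis of the complement of \(\operatorname{span}\{\mathbf{1}\}\)); for property 3 we restrict to row-stochastic \(M\), for which \(\tilde{M}=BMB^{\top}\) works:

```python
import numpy as np

N = 5
# Rows of B: an orthonormal basis of the subspace orthogonal to span{1},
# obtained here via a QR decomposition (one of many valid choices).
ones = np.ones((N, 1)) / np.sqrt(N)
Q, _ = np.linalg.qr(np.hstack([ones, np.random.default_rng(1).random((N, N - 1))]))
B = Q[:, 1:].T  # shape (N-1, N)

# Property 1: B 1 = 0
assert np.allclose(B @ np.ones(N), 0)

# Property 2: ||Bx|| = ||x|| whenever x is orthogonal to 1
x = np.random.default_rng(2).random(N)
x -= x.mean()  # project out the all-ones direction
assert np.isclose(np.linalg.norm(B @ x), np.linalg.norm(x))

# Property 3: for row-stochastic M (so that M 1 = 1), the matrix
# M~ = B M B^T satisfies B M = M~ B.
M = np.random.default_rng(3).random((N, N))
M /= M.sum(axis=1, keepdims=True)
M_tilde = B @ M @ B.T
assert np.allclose(B @ M, M_tilde @ B)
```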
We can use the existing results on the ergodicity of infinite products of inhomogeneous stochastic matrices [14, 33] to show that any sequence of matrices in \(\mathcal{P}_{\mathcal{G},\epsilon}\) is ergodic.
**Lemma 3**.: _Fix \(\epsilon>0\). Consider a sequence of matrices \(\{P^{(t)}\}_{t=0}^{\infty}\) in \(\mathcal{P}_{\mathcal{G},\epsilon}\). That is, \(P^{(t)}\in\mathcal{P}_{\mathcal{G},\epsilon}\) for all \(t\in\mathbb{N}_{\geq 0}\). Then \(\{P^{(t)}\}_{t=0}^{\infty}\) is ergodic._
The main proof strategy for Lemma 3 is to make use of the Hilbert projective metric and the Birkhoff contraction coefficient. These are standard mathematical tools to prove that an infinite product of inhomogeneous stochastic matrices is ergodic. We refer interested readers to the textbooks [14, 33] for a comprehensive study of these subjects.
Despite the nonlinearity of \(\sigma(\cdot)\), the formulation (6) enables us to express the evolution of the feature vector trajectories as a weighted sum of products of matrices of the form \(DP\) where \(D\in\mathcal{D}\) and \(P\in\mathcal{P}_{\mathcal{G},\epsilon}\). We define the set of such matrices as
\[\mathcal{M}_{\mathcal{G},\epsilon}:=\{DP:D\in\mathcal{D},P\in\mathcal{P}_{ \mathcal{G},\epsilon}\}\,.\]
A key step in proving oversmoothing for attention-based GNNs under our assumptions is to show the ergodicity of the infinite products of matrices in \(\mathcal{M}_{\mathcal{G},\epsilon}\). In what follows, we lay out the main ideas of the proof, and refer readers to Appendix F for the details.
Consider a sequence \(\{D^{(t)}P^{(t)}\}_{t=0}^{\infty}\) in \(\mathcal{M}_{\mathcal{G},\epsilon}\), that is, \(D^{(t)}P^{(t)}\in\mathcal{M}_{\mathcal{G},\epsilon}\) for all \(t\in\mathbb{N}_{\geq 0}\). For \(t_{0}\leq t_{1}\), define
\[Q_{t_{0},t_{1}}:=D^{(t_{1})}P^{(t_{1})}\dots D^{(t_{0})}P^{(t_{0})}\]
and
\[\delta_{t}=\|D^{(t)}-I_{N}\|_{\infty}\,,\]
where \(I_{N}\) denotes the \(N\times N\) identity matrix. The common connectivity structure among \(P^{(t)}\)'s established in Section 4.2 allows us to show that long products of matrices \(DP\) from \(\mathcal{M}_{\mathcal{G},\epsilon}\) will eventually become a contraction in \(\infty\)-norm. More precisely, we can show that there exists \(T\in\mathbb{N}\) and \(0<c<1\) such that for all \(t\in\mathbb{N}_{\geq 0}\),
\[\|Q_{t,t+T}\|_{\infty}\leq 1-c\delta_{t}.\]
Next, define \(\beta_{k}:=\prod_{t=0}^{k}(1-c\delta_{t})\) and let \(\beta:=\lim_{k\to\infty}\beta_{k}\). Note that \(\beta\) is well-defined because the partial product is non-increasing and bounded from below. We can use the above contraction property to show the following key lemma.
**Lemma 4**.: _Let \(\beta_{k}:=\prod_{t=0}^{k}(1-c\delta_{t})\) and \(\beta:=\lim_{k\to\infty}\beta_{k}\)._
1. _If_ \(\beta=0\)_, then_ \(\lim_{k\to\infty}Q_{0,k}=0\,;\)__
2. _If_ \(\beta>0\)_, then_ \(\lim_{k\to\infty}BQ_{0,k}=0\,.\)__
The ergodicity of sequences of matrices in \(\mathcal{M}_{\mathcal{G},\epsilon}\) immediately follows from Lemma 4.
**Lemma 5**.: _Any sequence \(\{D^{(t)}P^{(t)}\}_{t=0}^{\infty}\) in \(\mathcal{M}_{\mathcal{G},\epsilon}\) is ergodic._
RemarkThe proof techniques developed in [5, 26] are restricted to symmetric matrices and hence cannot be extended to more general families of GNNs, as they primarily rely on matrix norms for convergence analysis. Analyses solely using matrix norms are often too coarse to yield meaningful results for asymmetric matrices. For instance, while the matrix \(2\)-norm and matrix eigenvalues are directly related for symmetric matrices, the same does not generally hold for asymmetric matrices. Our analysis, on the other hand, exploits the common connectivity structure inherent to these matrices in deriving the ergodicity results in Lemmas 3-5.
### Joint spectral radius
The last step in proving our main results is to show that oversmoothing happens at an exponential rate in attention-based GNNs under assumptions **A1**-**A4**, using the ergodicity results established in the previous section. To this end, we introduce the notion of _joint spectral radius_, which is a generalization of the classical notion of spectral radius of a single matrix to a set of matrices. The notion was initially proposed by Gian-Carlo Rota and Gilbert Strang [29] in 1960 and later popularized by the work of Ingrid Daubechies and Jeffrey Lagarias [7]. Since then, it has found a wide range of applications, including the study of wavelets, combinatorial words theory, and multi-agent systems. We refer interested readers to the textbook [17] for a comprehensive study of the subject.
**Definition 4** (Joint Spectral Radius).: _For a collection of matrices \(\mathcal{M}\), the joint spectral radius \(\operatorname{JSR}(\mathcal{M})\) is defined to be_
\[\operatorname{JSR}(\mathcal{M})=\limsup_{k\to\infty}\sup_{M_{1},M_{2},\dots,M _{k}\in\mathcal{M}}\|M_{1}M_{2}...M_{k}\|^{\frac{1}{k}}\,,\]
_and it is independent of the norm used._
In plain words, the joint spectral radius measures the maximal asymptotic growth rate that can be obtained by forming long products of matrices taken from the set \(\mathcal{M}\). To analyze the convergence rate of products of matrices in \(\mathcal{M}_{\mathcal{G},\epsilon}\) to a rank-one matrix with identical rows, we investigate the dynamics induced by the matrices on the subspace orthogonal to \(\mathrm{span}\{\mathbf{1}\}\). More precisely, with the notion of ergodicity in Definition 3 and the goal of studying the convergence rate of a matrix product \(BM_{1}M_{2}\ldots M_{k}\) where each \(M_{i}\in\mathcal{M}_{\mathcal{G},\epsilon}\), we use the third property of the orthogonal projection \(B\) established in Section 4.3 to write
\[BM_{1}M_{2}\ldots M_{k}=\tilde{M}_{1}\tilde{M}_{2}...\tilde{M}_{k}B\,,\]
where each \(\tilde{M}_{i}\) is the unique matrix in \(\mathbb{R}^{(N-1)\times(N-1)}\) that satisfies \(BM_{i}=\tilde{M}_{i}B\). To analyze products of such matrices \(\tilde{M}_{i}\), let us define \(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon}:=\{\tilde{M}:BM=\tilde{M}B,M\in \mathcal{M}_{\mathcal{G},\epsilon}\}\). We can use the ergodicity result developed in the previous section, Lemma 5, to show that the joint spectral radius of \(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon}\) is strictly less than \(1\).
**Lemma 6**.: _Let \(0<\epsilon<1\). Under assumptions_ **A1**_-_**A4**_, \(\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})<1\)._
It follows from the definition of the joint spectral radius that if \(\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})<1\), then for any \(q\) with \(\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})<q<1\), there exists a constant \(C>0\) for which
\[\|\tilde{M}_{1}\tilde{M}_{2}...\tilde{M}_{k}y\|\leq Cq^{k}\|y\| \tag{7}\]
for all \(y\in\mathbb{R}^{N-1}\) and \(\tilde{M}_{1},\tilde{M}_{2},...,\tilde{M}_{k}\in\tilde{\mathcal{M}}_{\mathcal{ G},\epsilon}\).
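Computing the joint spectral radius exactly is hard in general (as noted in Section 4.6 below), but a crude randomized lower estimate can be obtained by sampling long products. The sampled family below is a toy stand-in for \(\mathcal{M}_{\mathcal{G},\epsilon}\), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_matrix(N=5):
    # A random element of the family of interest; here: D P with D diagonal
    # with entries in [0, 1] and P row-stochastic with positive entries.
    D = np.diag(rng.uniform(0.0, 1.0, N))
    P = rng.uniform(0.1, 1.0, (N, N))
    P /= P.sum(axis=1, keepdims=True)
    return D @ P

def jsr_lower_estimate(k=50, trials=200, N=5):
    # max over sampled products of ||M_1 ... M_k||^(1/k): a lower estimate
    # of the supremum over all length-k products in the JSR definition.
    best = 0.0
    for _ in range(trials):
        prod = np.eye(N)
        for _ in range(k):
            prod = prod @ sample_matrix(N)
        best = max(best, np.linalg.norm(prod, 2) ** (1.0 / k))
    return best

print(jsr_lower_estimate())  # < 1 for this contractive toy family
```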
### Main Theorem
Applying (7) to the recursive expansion of \(X_{\cdot i}^{(t+1)}\) in (6) using the \(2\)-norm, we can prove the exponential convergence of \(\mu(X^{(t)})\) to zero for the similarity measure \(\mu(\cdot)\) defined in (3), which in turn implies the convergence of node representations to a common representation at an exponential rate. This completes the proof of the main result of this paper, which states that oversmoothing defined in (2) is unavoidable for attention-based GNNs.
**Theorem 1** (Oversmoothing happens exponentially in attention-based GNNs).: _Under assumptions_ **A1**_-_**A4**_, \(\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})<1\) and for any \(q\) satisfying \(\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})<q<1\), there exists \(C_{1}(q)>0\) such that_
\[\mu(X^{(t)})\leq C_{1}q^{t}\,\forall t\geq 0\,,\]
_where \(\mu(X)=\|X-\frac{\mathbf{1}\mathbf{1}^{\top}X}{N}\|_{F}\). As a result, node representations \(X^{(t)}\) exponentially converge to the same value as the model depth \(t\rightarrow\infty\)._
Theorem 1 establishes that oversmoothing is asymptotically inevitable for attention-based GNNs with general nonlinearities. Despite similarity-based importance being assigned to different nodes via the aggregation operator \(P^{(t)}\), _such attention-based mechanisms are yet unable to fundamentally change the connectivity structure of \(P^{(t)}\)_, resulting in node representations converging to a common vector at an exponential rate. Our results hence indirectly support the emergence of alternative ideas for changing the graph connectivity structure such as edge-dropping [15; 28] or graph-rewiring [21], in an effort to mitigate oversmoothing.
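To see the statement of Theorem 1 in action, the toy simulation below tracks \(\mu(X^{(t)})\) across depth. The random, untrained layers stand in for a trained GAT, and the weight scaling well below unit norm is our own simplifying choice to keep the iteration bounded; the printed values decay roughly geometrically:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 20, 8, 60

A = (rng.random((N, N)) < 0.3).astype(float)
A = np.maximum(A, A.T)    # undirected
np.fill_diagonal(A, 1.0)  # self-loops keep every softmax row non-empty

def mu(X):
    # mu(X) = ||X - 1 1^T X / N||_F, the similarity measure from (3)
    return np.linalg.norm(X - X.mean(axis=0, keepdims=True))

X = rng.standard_normal((N, d))
for t in range(1, T + 1):
    # Toy attention layer: random scores, softmax over neighbors,
    # small random weights, ReLU.
    scores = np.where(A > 0, rng.uniform(-1, 1, (N, N)), -np.inf)
    P = np.exp(scores - scores.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W = rng.standard_normal((d, d)) * 0.4 / np.sqrt(d)
    X = np.maximum(P @ X @ W, 0.0)
    if t % 10 == 0:
        print(t, mu(X))  # decays roughly geometrically with depth
```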
### Comparison with the GCN
Computing or approximating the joint spectral radius for a given set of matrices is known to be hard in general [35], yet it is straightforward to lower bound \(\mathrm{JSR}(\mathcal{M}_{\mathcal{G},\epsilon})\) as stated in the next proposition.
**Proposition 2**.: _Let \(\lambda\) be the second largest eigenvalue of \(D_{\mathrm{deg}}^{-1/2}AD_{\mathrm{deg}}^{-1/2}\). Then under assumptions_ **A1**_-_**A4**_, it holds that \(\lambda\leq\mathrm{JSR}(\tilde{\mathcal{M}}_{\mathcal{G},\epsilon})\)._
A direct consequence of the above result is that the upper bound \(q\) on the convergence rate that we get for graph attention in Theorem 1 is at least as large as \(\lambda\). On the other hand, previous work has already established that in the graph convolution case, the convergence rate of \(\mu(X^{(t)})\) is \(O(\lambda^{t})\)[5; 26]. It is thus natural to expect attention-based GNNs to potentially have better expressive power at finite depth than GCNs, even though they both inevitably suffer from oversmoothing. This is also evident from the numerical experiments that we present in the next section.
## 5 Numerical Experiments
In this section, we validate our theoretical findings via numerical experiments using the three commonly used benchmark datasets: Cora, CiteSeer and PubMed [42]. More details about the experiments are provided in Appendix J.
For each dataset, we trained a \(128\)-layer single-head GAT and a \(128\)-layer GCN with the random walk graph convolution \(D_{\text{deg}}^{-1}A\), each having 32 hidden dimensions and trained using the standard features and splits. The GCN with the random walk graph convolution is a special type of attention-based GNN where the attention function is constant. For each GNN model, we considered various nonlinear activation functions: ReLU, LeakyReLU (with three different negative slope values: \(0.01\), \(0.4\) and \(0.8\)) and GELU. Here, we chose GELU as an illustration of the generality of our assumption on nonlinearities, which covers even non-monotone activation functions such as GELU. We ran each experiment \(10\) times. Figure 1 shows the evolution of \(\mu(X^{(n)})\) in log-log scale on the largest connected component of each graph (which accounts for \(91.8\%/63.7\%/100\%\) of total nodes in Cora/CiteSeer/PubMed, respectively) as we forward pass the input \(X\) into a trained model. The solid curve is the average over \(10\) runs and the band indicates one standard deviation around the average.
We observe that, as predicted by our theory, oversmoothing happens at an exponential rate for both GATs and GCNs, regardless of the choice of nonlinear activation functions in the GNN architectures. Notably, GCNs exhibit a significantly faster rate of oversmoothing compared to GATs. This aligns with the observation made in Section 4.6, which suggests potentially better expressive power for GATs than GCNs at finite depth. Furthermore, the exponential convergence rate of oversmoothing varies among GNNs with different nonlinear activation functions. From a theory perspective, since different activation functions constitute different subsets of \(\mathcal{M}_{\mathcal{G},\epsilon}\) and different sets of matrices have different joint spectral radii, it is not surprising that the choice of nonlinear activation function affects the convergence rate. In particular, among the nonlinearities we considered, ReLU in fact magnifies oversmoothing the second most. As a result, although ReLU is often the default choice in standard implementations of many GNN architectures [10; 19], one might wish to consider switching to other nonlinearities to better mitigate oversmoothing.
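A minimal sketch of the measurement itself, assuming PyTorch Geometric is available; unlike the reported experiments, it uses untrained weights and skips the restriction to the largest connected component:

```python
import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GATConv

dataset = Planetoid(root="data", name="Cora")
data = dataset[0]

# 128 single-head GAT layers with 32 hidden dimensions, as in the setup above.
layers = torch.nn.ModuleList(
    [GATConv(dataset.num_features, 32, heads=1)]
    + [GATConv(32, 32, heads=1) for _ in range(127)]
)

def mu(x):
    # mu(X) = ||X - 1 1^T X / N||_F
    return torch.norm(x - x.mean(dim=0, keepdim=True))

x = data.x
with torch.no_grad():
    for n, conv in enumerate(layers, start=1):
        x = torch.relu(conv(x, data.edge_index))
        if n % 16 == 0:
            print(n, mu(x).item())
```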
## 6 Conclusion
Oversmoothing is one of the central challenges in developing more powerful GNNs. In this work, we reveal new insights on oversmoothing in attention-based GNNs by rigorously providing a negative answer to the open question of whether graph attention can implicitly prevent oversmoothing. By analyzing the graph attention mechanism within the context of nonlinear time-varying dynamical systems, we establish that attention-based GNNs lose expressive power exponentially as model depth increases.

Figure 1: Evolution of \(\mu(X^{(n)})\) (in log-log scale) on the largest connected component of three benchmark datasets: Cora, CiteSeer, and PubMed. Oversmoothing happens exponentially in both GCNs and GATs, with the rates varying depending on the choice of activation function. Notably, GCNs demonstrate faster rates of oversmoothing compared to GATs.
We upper bound the convergence rate for oversmoothing under very general assumptions on the nonlinear activation functions. One may try to tighten the bounds by refining the analysis separately for each of the commonly used activation functions. Future research should also aim to improve the design of graph attention mechanisms based on our theoretical insights and utilize our analysis techniques to study other aspects of multi-layer graph attention.
## Acknowledgments
Xinyi Wu would like to thank Jennifer Tang and William Wang for helpful discussions. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing computing resources that have contributed to the research results reported within this paper.
This research has been supported in part by ARO MURI W911NF-19-0217, ONR N00014-20-1-2394, and a Vannevar Bush Fellowship from the Office of the Secretary of Defense.
|
2310.15865 | Using Causality-Aware Graph Neural Networks to Predict Temporal
Centralities in Dynamic Graphs | Node centralities play a pivotal role in network science, social network
analysis, and recommender systems. In temporal data, static path-based
centralities like closeness or betweenness can give misleading results about
the true importance of nodes in a temporal graph. To address this issue,
temporal generalizations of betweenness and closeness have been defined that
are based on the shortest time-respecting paths between pairs of nodes.
However, a major issue of those generalizations is that the calculation of such
paths is computationally expensive. Addressing this issue, we study the
application of De Bruijn Graph Neural Networks (DBGNN), a causality-aware graph
neural network architecture, to predict temporal path-based centralities in
time series data. We experimentally evaluate our approach in 13 temporal graphs
from biological and social systems and show that it considerably improves the
prediction of both betweenness and closeness centrality compared to a static
Graph Convolutional Neural Network. | Franziska Heeg, Ingo Scholtes | 2023-10-24T14:23:10Z | http://arxiv.org/abs/2310.15865v1 | # Using Causality-Aware Graph Neural Networks to Predict Temporal Centralities in Dynamic Graphs
###### Abstract
Node centralities play a pivotal role in network science, social network analysis, and recommender systems. In temporal data, static path-based centralities like closeness or betweenness can give misleading results about the true importance of nodes in a temporal graph. To address this issue, temporal generalizations of betweenness and closeness have been defined that are based on the shortest time-respecting paths between pairs of nodes. However, a major issue of those generalizations is that the calculation of such paths is computationally expensive. Addressing this issue, we study the application of De Bruijn Graph Neural Networks (DBGNN), a causality-aware graph neural network architecture, to predict temporal path-based centralities in time series data. We experimentally evaluate our approach in 13 temporal graphs from biological and social systems and show that it considerably improves the prediction of both betweenness and closeness centrality compared to a static Graph Convolutional Neural Network.
## 1 Motivation
Node centralities are important in the analysis of complex networks, with applications in network science, social network analysis, and recommender systems. An important class of centrality measures are _path-based centralities_ like, e.g. betweenness or closeness centrality [1; 2], which are based on the shortest paths between all nodes. While centralities in static networks are important, we increasingly have access to time series data on temporal graphs with time-stamped edges. Due to the timing and ordering of those edges, the paths in a static time-aggregated representation of such time series data can considerably differ from _time-respecting paths_ in the corresponding temporal graph. In a nutshell, two time-stamped edges \((u,v;t)\) and \((v,w;t^{\prime})\) only form a time-respecting path from node \(u\) via \(v\) to \(w\) iff for the time stamps \(t\) and \(t^{\prime}\) we have \(t<t^{\prime}\), i.e. time-respecting paths must minimally respect the arrow of time. Moreover, we often consider scenarios where we need to additionally account for a _maximum time difference_\(\delta\) between time-stamped edges, i.e. we require \(0<t^{\prime}-t\leq\delta\)[3]. Several works have shown that temporal correlations in the sequence of time-stamped edges can significantly change the _causal_ topology of a temporal graph, i.e. which nodes can possibly influence each other
via time-respecting paths, compared to what one would expect based on the static topology of edges [4; 5; 6].
An important consequence of this is that static path-based centralities like closeness or betweenness can give misleading results about the true importance of nodes in temporal graphs. To address this issue, temporal generalizations of betweenness and closeness centrality have been defined that are based on the shortest time-respecting paths between pairs of nodes [7; 8; 9; 10]. A major issue of those generalizations is that the calculation of time-respecting paths as well as of the resulting centralities is computationally expensive [11; 12; 13]. Addressing this issue, a number of recent works developed methods to approximate temporal betweenness and closeness centralities in temporal graphs [13]. Additionally, a few works have used deep (representation) learning techniques to predict computationally expensive path-based centralities in _static_ networks [14; 15].
Research Gap and ContributionsTo the best of our knowledge, no prior works have considered the application of time-aware graph neural networks to predict path-based centralities in temporal graphs. Closing this gap, our work makes the following contributions:
* We introduce the problem of predicting temporal betweenness and closeness centralities of nodes in temporal graphs. We consider a situation where we have access to a training graph as well as ground truth temporal centralities and seek to predict the centralities of nodes in a future observation of the same system, which does not necessarily consist of the exact same set of nodes.
* To address this problem, we introduce a deep learning method that utilizes De Bruijn Graph Neural Networks (DBGNN), a recently proposed causality-aware graph neural network architecture [16] that is based on higher-order graph models of time-respecting paths, which capture correlations in the sequence of time-stamped edges. An overview of our approach in a toy example of a temporal graph is shown in Figure 1.
* We compare our proposed method to a Graph Convolutional Network (GCN), which only considers a static, time-aggregated weighted graph that captures the frequency and topology of edges.
* We experimentally evaluate both models in 13 temporal graphs from biological and social systems. Our results show that the application of the time-aware DBGNN architecture considerably improves the prediction of both betweenness and closeness centrality compared to a static GCN model.
In summary, we show that the prediction of temporal centralities is an interesting temporal graph learning problem, which could be included in community benchmarks [17]. Moreover, our study highlights the potential of causality-aware deep learning architectures for node-level regression tasks in temporal graphs. Finally, our results are a promising step towards the approximation of temporal centralities in large data sets, with potential applications in social network analysis and recommender systems.
## 2 Background and Related Work
In the following, we provide the background of our work. We first introduce temporal graphs and define time-respecting (or causal) paths. We then cover generalizations of path-based centralities for nodes in temporal graphs. We finally discuss prior works that have studied the prediction, or approximation, of path-based centralities both in static and temporal graphs. This will motivate the research gap that is addressed by our work.
Dynamic Graphs and Causal PathsApart from static graphs \(G=(V,E)\) that capture the topology of edges \(E\subseteq V\times V\) between nodes \(V\), we increasingly have access to time-stamped interactions that can be modelled as _temporal graphs or networks_[18; 19; 3]. We define a temporal graph as \(G^{\mathcal{T}}=(V,E^{\mathcal{T}})\) where \(V\) is the set of nodes and \(E^{\mathcal{T}}\subseteq V\times V\times\mathbb{R}\) is a set of (possibly directed) time-stamped edges, i.e. an edge \((v,w;t)\in E^{\mathcal{T}}\) describes an interaction from node \(v\) to \(w\) occurring at time \(t\). In our work, we assume that interactions are _instantaneous_, i.e. \((v,w;t)\in E^{\mathcal{T}}\) does not imply that \((v,w;t^{\prime})\in E^{\mathcal{T}}\) for all \(t^{\prime}>t\). Hence, we do not specifically consider _growing networks_, where the time-stamp \(t\) is the creation of an edge. For a temporal network \(G^{\mathcal{T}}=(V,E^{\mathcal{T}})\) it is common to consider a static, time-aggregated and weighted graph representation \(G=(V,E)\)
where \((v,w)\in E\) iff \((v,w;t)\in E^{\mathcal{T}}\) for some time stamp \(t\) and for the edge weights we define \(w(v,w)=|\{t\in\mathbb{R}:(v,w;t)\in E^{\mathcal{T}}\}|\), i.e. the number of occurrences of time-stamped edges.
An important difference to paths in static graphs is that, in temporal networks, the temporal ordering of edges determines what we call _time-respecting or causal paths_[20, 19, 3]. For a temporal graph \(G^{\mathcal{T}}=(V,E^{\mathcal{T}})\) we define a _time-respecting or causal path_ of length \(l\) as sequence of nodes \(v_{0},\ldots,v_{l}\) such that the following two conditions hold:
* \(\exists\;t_{1},\ldots,t_{l}\;:\;(v_{i-1},v_{i};t_{i})\in E^{\mathcal{T}}\) for \(i=1,\ldots,l\) ;
* \(0<t_{i}-t_{i-1}\leq\delta\) for some \(\delta\in\mathbb{R}\).
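A minimal sketch of the \(\delta\)-constrained continuation rule, enumerating time-respecting paths of length two in a toy edge list:

```python
from collections import defaultdict

# Time-stamped directed edges (v, w, t); delta is the maximum waiting time.
edges = [("u", "v", 1), ("v", "w", 3), ("v", "w", 9)]

# Index edges by source node for quick continuation lookup.
by_source = defaultdict(list)
for v, w, t in edges:
    by_source[v].append((w, t))

def time_respecting_paths_len2(delta):
    # All time-respecting paths v0 -> v1 -> v2 with 0 < t2 - t1 <= delta.
    paths = []
    for v0, v1, t1 in edges:
        for v2, t2 in by_source[v1]:
            if 0 < t2 - t1 <= delta:
                paths.append((v0, v1, v2))
    return paths

print(time_respecting_paths_len2(5))  # [('u', 'v', 'w')] via t=1 -> t=3
print(time_respecting_paths_len2(1))  # [] -- the gap of 2 exceeds delta
```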
In contrast to definitions of time-respecting paths that only require interactions to occur in ascending temporal order, i.e. \(0<t_{i}-t_{j}\) for \(j<i\)[20, 21], we additionally impose a maximum "waiting time" \(\delta\)[5, 19]. This implies that we only consider time-respecting paths where subsequent interactions occur within a time interval that is often defined by the processes that we study on temporal networks [12, 22]. In line with the definition for static networks, we define a _shortest time-respecting path_ between two nodes \(v\) and \(w\) as a (not necessarily unique) time-respecting path of length \(l\) such that all other time-respecting paths from \(v\) to \(w\) have length \(l^{\prime}\geq l\). In static graphs a shortest path from \(v\) to \(w\) is necessarily a _simple_ path, i.e. a path where no node occurs more than once in the sequence \(v_{1},\ldots,v_{l}\). This is not necessarily true for shortest time-respecting paths, since, due to the maximum waiting time \(\delta\), we may be forced to revisit nodes to continue a time-respecting path. Due to the definition of time-respecting paths with limited waiting time \(\delta\), we obtain a _temporal-topological_ generalization of shortest paths to temporal graphs that accounts for the temporal ordering and timing of interactions. We note that there exist definitions of _fastest paths_ that only account for the temporal rather than the topological distance [5], which we however do not consider in our work.

Figure 1: Overview of proposed approach to predict temporal centralities of nodes in a temporal graph: We consider a time-based split in a training and test graph (left). Calculating time-respecting paths in the training split enables us to (1) compute temporal node centralities, and (2) fit a \(k\)-th order De Bruijn graph model for time-respecting paths. The weighted edges in such a \(k\)-th order De Bruijn graph capture the frequencies of time-respecting paths of length \(k\) (see time-respecting path of length one (red) and two (magenta)). (3) We then use these centralities and the k-th order models to train a De Bruijn graph neural network (DBGNN), which allows us to (4) predict temporal node centralities in the test graph.
The definition of time-respecting paths above has the important consequence that the connectivity of nodes via time-respecting paths in a temporal network can be considerably different from paths in the corresponding time-aggregated static network. As an example, for a temporal network with two time-stamped edges \((u,v;t)\) and \((v,w;t^{\prime})\) the time-aggregated network contains a path from \(u\) via \(v\) to \(w\), while a time-respecting path from \(u\) via \(v\) to \(w\) can only exist iff \(0<t^{\prime}-t\leq\delta\). In other words, while connectivity in static graphs is _transitive_, i.e. the existence of edges (or paths) connecting \(u\) to \(v\) and \(v\) to \(w\) implies that there exists a path that transitively connects \(u\) to \(w\), the same does not hold for time-respecting paths. A large number of works have shown that this difference between paths in temporal and static graphs influences connectivity and reachability [4], the evolution of dynamical processes like diffusion or epidemic spreading [23, 6, 24, 25], cluster patterns [23, 26, 27], as well as the controllability of dynamical processes [28].
Temporal CentralitiesAnother interesting question is how the time dimension of temporal graphs influences the importance or _centrality_ of nodes [8]. To this end, several works have generalized centrality measures originally defined for static graphs to temporal networks. For our purpose we limit ourselves to generalizations of betweenness and closeness centrality, which are defined based on the shortest paths between nodes. In a static network, a node \(v\) has high _betweenness centrality_ if there are many shortest paths that pass through \(v\)[2] and it has high _closeness centrality_ if the overall distance to all other nodes is small [1]. We omit those standard definitions here due to space constraints but include them in appendix C.
Analogously to betweenness centrality for static graphs, for a temporal graph \(G=(V,E^{T})\) we define the _temporal betweenness centrality_ of node \(v\in V\) as
\[c_{B}^{temp}(v)=\sum_{s\neq v\neq t\in V}\frac{\sigma_{s,t}(v)}{\sigma_{s,t}}\]
where \(\sigma_{s,t}\) is the number of shortest _time-respecting_ paths from node \(s\) to \(t\), and \(\sigma_{s,t}(v)\) is the number of those shortest time-respecting paths that pass through node \(v\). Following our definition above, we consider two time-respecting paths to be the same if the sequence of traversed nodes is identical, i.e. \(\sigma_{s,t}\) counts paths that traverse the same sequence of nodes at different times only once.
To calculate the _temporal closeness centrality_ we define the temporal distance \(d(u,v)\) between two nodes \(u,v\in V\) as the length of a shortest time-respecting paths from \(u\) to \(v\) and thus obtain
\[c_{C}^{temp}(v)=\frac{1}{\sum_{u\in V}d(u,v)}.\]
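Given the shortest time-respecting paths between all node pairs (however they were computed), both centralities follow directly; a sketch with made-up path data:

```python
from collections import defaultdict

# shortest_paths[(s, t)] = list of shortest time-respecting paths from s
# to t, each given as its node sequence (illustrative toy data).
shortest_paths = {
    ("a", "c"): [("a", "b", "c")],
    ("a", "d"): [("a", "b", "d"), ("a", "c", "d")],
    ("c", "d"): [("c", "d")],
}

def temporal_betweenness(nodes):
    c = defaultdict(float)
    for (s, t), paths in shortest_paths.items():
        for v in nodes:
            if v in (s, t):
                continue
            through = sum(v in p[1:-1] for p in paths)  # sigma_{s,t}(v)
            c[v] += through / len(paths)
    return dict(c)

def temporal_closeness(v, nodes):
    # d(u, v): length (hops) of a shortest time-respecting path from u to v
    dist = {u: len(shortest_paths[(u, v)][0]) - 1
            for u in nodes if (u, v) in shortest_paths}
    return 1.0 / sum(dist.values()) if dist else 0.0

nodes = ["a", "b", "c", "d"]
print(temporal_betweenness(nodes))    # b gets 1 + 1/2, c gets 1/2
print(temporal_closeness("d", nodes)) # 1 / (d(a,d) + d(c,d)) = 1/3
```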
Even though the definitions above closely follow those for static networks, it has been shown that the temporal centralities of nodes can differ considerably from their counterparts in static time-aggregated networks [8, 27]. These findings highlight the importance of a _time-aware_ network analysis, which consider both the timing and temporal ordering of links in temporal graphs.
Approximating Path-based CentralitiesWhile path-based centralities have become an important tool in network analysis, a major issue is the computational complexity of the underlying all-pairs shortest path calculation in large graphs. For static networks, this issue can be partially alleviated by smart algorithms that speed up the calculation of betweenness centralities [29]. Even with these algorithms, calculating path-based centralities in large graphs is a challenge. Hence, a number of works considered approaches to calculate fast approximations, e.g. based on a random sampling of paths [30, 31, 32]. Another line of studies either used standard, i.e. not graph-based, machine learning techniques to leverage correlations between different centrality scores [15, 14], or used neural graph embeddings in synthetic scale-free networks to approximate the ranking of nodes [33].
Existing works on the approximation of path-based node centralities in time series data have generally focussed on a fast updating of _static_ centralities in _evolving graphs_ where edges are added or deleted [34, 35], rather than considering _temporal node centralities_. For the calculation of temporal closeness or betweenness centralities, the need to calculate shortest _time-respecting paths_ between all pairs of nodes is a major computational challenge. In particular, the calculation of time-respecting paths with a maximum waiting time constraint, which is the definition considered in our work, has recently been shown to be an NP-hard problem [12]. Considering the approximate estimation of temporal
betweenness and closeness centrality in temporal graphs, [36] generalizes static centralities to higher-order De Bruijn graphs, which capture the time-respecting path structure of a temporal graph. [13] recently proposed a sampling-based estimation of temporal betweenness centralities. To the best of our knowledge, no prior works have considered the application of deep graph learning to predict temporal node centralities in temporal graphs, which is the gap addressed by our work.
## 3 A Causality-Aware GNN Architecture to Predict Temporal Centralities
In the following, we first present higher-order De Bruijn graph models for time-respecting paths in temporal networks. We then describe the proposed deep learning architecture to predict temporal betweenness and closeness centrality.
Higher-Order De Bruijn Graph Models of Time-respecting pathsEach time-respecting path gives rise to an ordered sequence \(v_{0},v_{1},\ldots,v_{l}\) of traversed nodes. Let us consider a \(k\)-th order Markov chain model, where \(P(v_{i}|v_{i-k},\ldots,v_{i-1})\) is the probability that a time-respecting path continues to node \(v_{i}\), conditional on the \(k\) previously traversed nodes. A first-order Markov chain model can be defined based on the frequencies of edges (i.e. paths of length \(k=1\)) captured in a weighted time-aggregated graph, where
\[P(v_{i}|v_{i-1}):=\frac{w(v_{i-1},v_{i})}{\sum_{j}w(v_{i-1},v_{j})}.\]
While such a first-order model is justified if the temporal graph exhibits no patterns in the temporal ordering of time-stamped edges, a number of works have shown that empirical data exhibit patterns that require higher-order Markov models for time-respecting paths [23; 25; 26]. To address this issue, for \(k>1\) we can define a \(k\)-th order Markov chain model based on the frequencies of time-respecting paths of length \(k\) as
\[P(v_{i}|v_{i-k},\ldots,v_{i-1})=\frac{w(v_{i-k},\ldots,v_{i})}{\sum_{j}w(v_{i- k},\ldots,v_{i-1},v_{j})},\]
where \(w(v_{0},\ldots,v_{k})\) counts the number of time-respecting path \(v_{0},\ldots,v_{k}\) in the underlying temporal graph. For a temporal graph \(G^{\mathcal{T}}=(V,E^{\mathcal{T}})\), this approach defines a static \(k\)_-th order De Bruijn graph model_\(G^{(k)}=(V^{(k)},E^{(k)})\) with
* \(V^{(k)}=\{(v_{0},\ldots,v_{k-1})\mid v_{0},\ldots,v_{k-1}\text{ is a causal walk of length }k-1\text{ in }G^{\mathcal{T}}\}\)
* \((u,v)\in E^{(k)}\) iff \[\begin{array}{l}(i)\;\;v=(v_{1},\ldots,v_{k})\text{ with }v_{i}=u_{i}\text{ for }i=1,\ldots,k-1\\ (ii)\;u\bigoplus v=(u_{0},\ldots,u_{k-1},v_{k})\text{ is a causal path of length }k\text{ in }G^{\mathcal{T}}.\end{array}\]
We call this \(k\)-th order model a _De Bruijn graph model_ of time-respecting paths, since it is a generalization of a \(k\)-dimensional De Bruijn graph [37], with the additional constraint that an edge only exists iff the underlying temporal network has a corresponding time-respecting path. For \(k=1\) the first-order De Bruijn graph corresponds to the commonly used static, time-aggregated graph \(G=(V,E)\) of a temporal graph \(G^{T}\), in which edges can be considered time-respecting paths of length one and which neglects information on the time dimension. For \(k>1\) we obtain _static but time-aware higher-order generalizations of time-aggregated graphs_, which are sensitive to the timing and ordering of time-stamped edges. Each node in such a \(k\)-th order De Bruijn graph represents a time-respecting path of length \(k-1\), while edges represent time-respecting paths of length \(k\). Edge weights correspond to the number of observations of time-respecting paths of length \(k\) (cf. fig. 1).
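A minimal sketch of this construction for \(k=2\), building the second-order De Bruijn graph and its transition probabilities from toy path counts:

```python
from collections import Counter

# Counts of observed time-respecting paths of length 2 (toy data); in
# practice these come from a delta-constrained path extraction as above.
path_counts = Counter({
    ("a", "b", "c"): 3,
    ("a", "b", "d"): 1,
    ("c", "b", "d"): 2,
})

# Second-order De Bruijn graph: nodes are observed length-1 paths (edges),
# and an edge (v0,v1) -> (v1,v2) exists iff the path v0,v1,v2 was observed,
# weighted by its frequency.
nodes2 = set()
edges2 = {}
for (v0, v1, v2), w in path_counts.items():
    nodes2.update([(v0, v1), (v1, v2)])
    edges2[((v0, v1), (v1, v2))] = w

# Second-order transition probabilities P(v2 | v0, v1)
totals = Counter()
for ((v0, v1), _), w in edges2.items():
    totals[(v0, v1)] += w
P2 = {e: w / totals[e[0]] for e, w in edges2.items()}
print(P2[(("a", "b"), ("b", "c"))])  # 3 / (3 + 1) = 0.75
```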
De Bruijn Graph Neural Networks for Temporal Centrality PredictionOur approach to predict temporal betweenness and closeness centrality uses the recently proposed De Bruijn Graph Neural Networks (DBGNN), a deep learning architecture that builds on \(k\)-th order De Bruijn graphs [16]. The intuition behind this approach is that, by using message passing in multiple (static) \(k\)-th order De Bruijn graph models of time-respecting paths, we obtain a _causality-aware learning algorithm_ that considers both the graph topology as well as the temporal ordering and timing of interactions.
Our proposed method is summarized in fig. 1. Considering time series data on a temporal graph, we first perform a time-based split of the data into a training and test graph. We then calculate temporal
closeness and betweenness centralities of nodes in the training graph and consider a supervised node-level regression problem, i.e. we use temporal centralities of nodes in the training graph to train a DBGNN model. To this end, we construct \(k\)-th order De Bruijn graph models for multiple orders \(k\), based on the statistics of time-respecting paths of lengths \(k\). The maximum order is determined by the temporal correlation length (i.e. the Markov order) present in a temporal graph and can be determined by statistical model selection techniques [38].
Using the update rule defined in Eq. (1) of [16], we simultaneously perform message passing in all \(k\)-th order De Bruijn graphs. For each \(k\)-th order De Bruijn graph this yields a (hidden) representation of \(k\)-th order nodes. To aggregate the resulting representation to actual (first-order) nodes in the temporal graph, we perform message passing in an additional bipartite graph, where each \(k\)-th order node \((v_{0},\ldots,v_{k-1})\) is connected to first-order node \(v_{k-1}\) (cf. Eq (2) in [16] and fig. 1). Taking a node regression perspective, we use a final dense linear layer with a single output. We use the trained model to predict the temporal centralities of nodes in the test graph. Since the subset of nodes and edges that are active in the training and test graph can differ, our model must be able to generalize to temporal graphs with different nodes as well as different graph topologies. To address this, we train our models in an inductive fashion by choosing a suitably large number of dimensions for the one-hot encodings during the training phase. Upon acceptance of the manuscript, we will make our code publicly available via Zenodo.
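To fix ideas, the following schematic model reflects our reading of the architecture described above; it is not the authors' released implementation, and the dense matrix inputs (\(A_1\), \(A_2\), bipartite map \(B\)) as well as the concatenation of first- and second-order representations are simplifying assumptions:

```python
import torch
import torch.nn as nn

class DBGNNSketch(nn.Module):
    """Schematic DBGNN-style model (our reading of [16], not official code).

    A1, A2: dense, row-normalized weighted adjacency matrices of the first-
    and second-order De Bruijn graphs. B: bipartite map with B[i, j] = 1 iff
    second-order node j ends in first-order node i.
    """

    def __init__(self, n1, n2, hidden=16, bipartite=8):
        super().__init__()
        self.w1 = nn.Linear(n1, hidden)   # one-hot input, first order
        self.w2 = nn.Linear(n2, hidden)   # one-hot input, second order
        self.wb = nn.Linear(2 * hidden, bipartite)
        self.head = nn.Linear(bipartite, 1)

    def forward(self, A1, A2, B):
        h1 = torch.sigmoid(A1 @ self.w1(torch.eye(A1.shape[0])))
        h2 = torch.sigmoid(A2 @ self.w2(torch.eye(A2.shape[0])))
        # Aggregate second-order representations onto first-order nodes
        # and combine with the first-order representations.
        h = torch.cat([h1, B @ h2], dim=1)
        h = nn.functional.elu(self.wb(h))
        return nn.functional.elu(self.head(h)).squeeze(-1)
```

Such a model would then be fit with an MSE loss against the temporal centralities computed on the training graph, as described above.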
## 4 Experimental Results
With our experimental evaluation we seek to answer the following research question:
* How does the predictive power of a causality-aware DBGNN model compare to that of a standard GCN that ignores the temporal dimension of dynamic graphs?
* How does the predictive power differ between temporal betweenness and closeness centrality and how does it vary across different temporal networks?
* How does the computational efficiency of the DBGNN-based prediction of temporal node centralities compare to that of a time-neglecting GCN architecture?
* Does the DBGNN architecture generate node embeddings that facilitate interpretability?
Experimental setupWe experimentally evaluate the performance of the DBGNN architecture by predicting temporal centralities in 13 empirical temporal graphs. Since a maximum order detection in those data sets yields a maximum of two (see table 7 in appendix A), we limit the DBGNN architecture to \(k=2\). To calculate edge weights of the DBGNN model, we count time-stamped edges for weights of the first-order De Bruijn graph as well as time-respecting paths of length two for weights of the second-order De Bruijn graph (cf. fig. 1). Adopting the approach in [16] we use one message passing layer with 16 hidden dimensions for each order \(k\) and one additional bipartite message passing layer with 8 hidden dimensions. We use a sigmoid activation function for the higher-order layers and an Exponential Linear Unit (ELU) activation function for the bipartite layer.
As a baseline model, we use a Graph Convolutional Neural Network (GCN) [39], which we apply to the weighted time-aggregated representation of the temporal graphs. For the GCN model, we use two message passing layers with 16 and 8 hidden dimensions and a sigmoid activation function, respectively. As input features, we use a one-hot encoding (OHE) of nodes for both architectures. In the case of the DBGNN architecture we apply OHE to nodes in all (higher-order) layers. Addressing a node regression task, we use a final dense linear layer with a single output and an ELU activation function, and use mean squared error (MSE) as loss function for both architectures. We train both models based on the (ground-truth) temporal node centralities in the training data, using 5000 epochs with an ADAM optimizer, different learning rates, and weight decay of \(5\cdot 10^{-4}\). We additionally tested the use of dropout layers for both architectures, but found the results to be worse. In table 10 and table 11 we summarize the architecture and the hyperparameters for both models.
Data setsWe use 13 data sets on temporal graphs from different contexts, including human contact patterns based on (undirected) proximity or face-to-face relations, time-stamped (directed) E-Mail communication networks, as well as antenna interactions between ants in a colony. An overview of the data sets along with a short description, key characteristics and the source is given in table 1. All data sets are publicly available from the online data repositories netzschleuder [40] and SNAP [41].
Evaluation procedureTo evaluate our models, we first fit the pre-trained models to the test graph, i.e. we apply the trained GCN model to the weighted time-aggregated test graph and the trained DBGNN model to the De Bruijn graphs for the test data. We then use the trained models to predict temporal closeness and betweenness centralities and compare those predictions to ground truth centralities, which we obtain by exhaustively calculating all time-respecting paths in the test data. Figure 1 provides an illustration of our evaluation approach. We evaluate predictions in terms of the mean absolute error (MAE) between predicted and ground truth centralities, as well as the Spearman and Kendall-Tau rank correlation between a node ranking based on predicted centralities and a ranking obtained from ground truth centralities. Since centrality scores are often used to identify a small set of most central nodes, we further calculate the number of hits in the set of nodes with the top ten predicted centralities. Since we repeated each experiment \(30\) times, we report the mean and the standard deviation of all scores. We further repeated all experiments for three different learning rates between \(0.1\) and \(0.001\) and only report the best mean scores for each model individually.
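These metrics can be computed with standard tooling; a sketch (the helper is ours, with hits@10 read as the size of the overlap between the predicted and ground-truth top-ten node sets):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def evaluate(pred, true, k=10):
    # pred, true: numpy arrays of predicted / ground-truth centralities
    mae = np.mean(np.abs(pred - true))
    rho, _ = spearmanr(pred, true)
    tau, _ = kendalltau(pred, true)
    # hits@k: overlap between predicted and ground-truth top-k node sets
    top_pred = set(np.argsort(pred)[-k:])
    top_true = set(np.argsort(true)[-k:])
    return mae, rho, tau, len(top_pred & top_true)
```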
Discussion of resultsThe results of our experiments for temporal betweenness and closeness centralities are shown in table 2 and table 3, respectively. Considering **RQ1**, we find that our causality-aware DBGNN-based architecture significantly outperforms a standard GCN model for all 13 data sets and for all evaluation metrics in the case of temporal closeness centrality (with the exception of the MAE score in sp-hypertext, where we observe no significant difference). We further observe a large relative increase of the Spearman rank correlation coefficient averaging to 117 % across all data, ranging between 32 % for sp-hypertext and 436 % for eu-email-4. For temporal betweenness centrality, we find that the proposed causality-aware architecture significantly outperforms a GCN-based prediction in terms of Spearman and Kendall-Tau rank correlation for seven of the 13 data sets, while we observe no significant difference for five and better performance of the GCN model for a single data set. On average across all 13 data sets, the DBGNN architecture yields an increase in Spearman correlation by 39 %. For the seven cases where DBGNN outperforms GCN, we find relative increases in Spearman rank correlation between 18 and 97 %. For sp-highschool, where a GCN model outperforms a DBGNN-based prediction, the relative increase is 17 %.
Regarding **RQ2** we observe that the performance of a time-neglecting GCN-based prediction of temporal closeness and betweenness centrality are comparable. On the contrary, the time-aware DBGNN model performs significantly better for temporal closeness compared to temporal betweenness. Moreover, we find a large variation of predictive performance across different data sets. To further investigate the differences between data sets, in Figure 2 and Figure 3 in appendix C we plot the temporal and static closeness and betweenness centralities of nodes. The results indicate that the timing and temporal ordering of interactions translates to larger differences between temporal and static betweenness centrality, compared to closeness centrality, which could explain our observation.
A potential criticism of our approach could be that the size of a higher-order De Bruijn graph model can be considerably larger than that of a standard first-order graph, thus possibly making it computationally expensive. To address this issue, and to answer **RQ3**, we investigate the scalability of our approach for all of the 13 data sets. For this, we compare both the training and the inference time of our DBGNN-based architecture with those of a simpler GCN model. The results in table 5 and table 6 in appendix A show that the computational requirements of the DBGNN and GCN model are comparable, both during training and inference. We attribute this to the fact that the
\begin{table}
\begin{tabular}{l l l|c c c c c} \hline \hline data set & Description & \multicolumn{1}{c|}{Ref} & Nodes & Edges & Temporal Edges & Directed & \(\delta\) \\ \hline ants-1-1 & Ant Antenna interactions, colony 1 - filming 1 & [42] & 89 & 947 & 1,911 & True & 30 sec \\ ants-1-2 & Ant Antenna interactions, colony 1 - filming 2 & [42] & 72 & 862 & 1,820 & True & 30 sec \\ ants-2-1 & Ant Antenna interactions, colony 2 - filming 1 & [42] & 71 & 636 & 975 & True & 30 sec \\ ants-2-2 & Ant Antenna interactions, colony 2 - filming 2 & [42] & 69 & 769 & 1,917 & True & 30 sec \\ company-emails & E-Mail exchanges in manufacturing company & [43] & 167 & 5,784 & 82,927 & True & 60 mins \\ eu-email-2 & E-Mail exchanges in EU institution (dept 2) & [44] & 162 & 1,772 & 46,772 & True & 60 mins \\ eu-email-3 & E-Mail exchanges in EU institution (dept 3) & [44] & 89 & 1,506 & 12,216 & True & 60 mins \\ eu-email-4 & E-Mail exchanges in EU institution (dept 4) & [44] & 142 & 1,375 & 48,141 & True & 60 mins \\ sp-hospital & Face-to-face interactions in a hospital & [24] & 75 & 1,139 & 32,424 & False & 60 mins \\ sp-hypertext & Face-to-face interactions at conference & [45] & 113 & 2,498 & 20,818 & False & 60 mins \\ sp-workplace & Face-to-face interactions in a workspace & [46] & 92 & 755 & 9,827 & False & 60 mins \\ sp-highschool & Face-to-face interactions in a highschool & [47] & 327 & 5,818 & 188,508 & False & 60 mins \\ haggle & Human proximity recorded by smart devices & [48] & 274 & 2,899 & 28,244 & False & 1 min \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the time series data sets used in the experimental evaluation
DBGNN architecture utilizes a compact, _static but time-aware_ De Bruijn graph representation of potentially large time series, rather than requiring a representation of all time-stamped edges.
A potential advantage of our method is that it can facilitate fast approximate predictions of temporal centralities. The exact calculation of those centralities is computationally expensive as it requires to exhaustively calculate shortest time-respecting paths (with a given maximum time difference) between all pairs of nodes [49]. Highlighting this, in table 4 in the appendix we report the time required to calculate (ground truth) temporal closeness and betweenness centralities based on shortest time-respecting paths between all pairs of nodes in the test networks. Importantly, while our approach requires to fit a \(k\)-th order De Bruijn graph model in the test set, this procedure only requires to calculate time-respecting paths of exactly length \(k\), which is a much simpler problem.
Considering **RQ4**, another aspect of our approach to use a time-aware but _static_ graph neural network is that the hidden layer activations yield _static_ embeddings that are based on the _causal topology_ of a dynamic graph. This causal topology is influenced by (i) the topology of time-stamped links, and (ii) their timing and temporal ordering. To explain the favorable performance of our model, we hypothesize that nodes for which our model learns similar embeddings also have more similar temporal centralities, compared to the embeddings generated by a GCN model. To test this hypothesis, we apply a dimensionality reduction to the node activations generated by the last 8-dimensional bipartite layer in the DBGNN architecture, comparing it to the representation obtained from the last message passing layer of a GCN model. In appendix D we show the resulting embeddings for one representative prediction of temporal closeness using the DBGNN (left) and the GCN model (right) in the eu-email-4 data, where an additional color gradient highlights ground truth closeness centralities of nodes in the test data. The resulting plot clearly shows that the time-aware DBGNN architecture is able to capture the ranking of nodes, while the time-neglecting GCN model is not.
## 5 Conclusion
In summary, we investigate the problem of predicting temporal betweenness and closeness centralities in temporal graphs. To this end, we use a recently proposed causality-aware graph neural network architecture, which relies on higher-order De Bruijn graph models of time-respecting paths. An empirical study in which we compare our approach with a time-neglecting graph neural network demonstrates the potential of our method. A comparative analysis in 13 empirical temporal networks
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Experiment}} & \multicolumn{4}{c|}{DBGNN} & \multicolumn{4}{c}{GCN} \\ \cline{2-9} & MAE & Spearman & Kendall & hits@10 & MAE & Spearman & Kendall & hits@10 \\ \hline ants-1-1 & **202.743 \(\pm\) 0.435** & **0.748 \(\pm\) 0.019** & **0.571 \(\pm\) 0.018** & **6.633 \(\pm\) 0.964** & 207.561 \(\pm\) 2.603 & 0.500 \(\pm\) 0.145 & 0.357 \(\pm\) 0.105 & 2.900 \(\pm\) 1.197 \\ ants-1-2 & **8.932 \(\pm\) 0.37** & **0.830 \(\pm\) 0.021** & **0.650 \(\pm\) 0.023** & **6.767 \(\pm\) 0.568** & 14.558 \(\pm\) 1.373 & 0.421 \(\pm\) 0.165 & 0.303 \(\pm\) 1.24 & 4.800 \(\pm\) 1.229 \\ ants-2-1 & **2.196 \(\pm\) 0.117** & **0.485 \(\pm\) 0.05** & **0.349 \(\pm\) 0.023** & 4.54 \(\pm\) 0.777 & 3.941 \(\pm\) 0.877 & 0.246 \(\pm\) 0.199 & 0.186 \(\pm\) 0.121 & 3.22 \(\pm\) 0.632 \\ ants-2-2 & 25.091 \(\pm\) 0.693 & **0.699 \(\pm\) 0.047** & **0.529 \(\pm\) 0.042** & 5.767 \(\pm\) 1.223 & 25.541 \(\pm\) 1.825 & 0.554 \(\pm\) 0.111 & 0.378 \(\pm\) 0.093 & 5.1 \(\pm\) 1.101 \\ company-emails & **41.393 \(\pm\) 0.942** & **0.833 \(\pm\) 0.013** & **0.707 \(\pm\) 0.016** & 3.233 \(\pm\) 0.971 & 5.305 \(\pm\) 1.485 & 0.601 \(\pm\) 0.009 & 0.491 \(\pm\) 0.099 & 1.6 \(\pm\) 1.075 \\ eu-email-4 & **3.915 \(\pm\) 0.165** & 0.388 \(\pm\) 0.059 & 0.290 \(\pm\) 0.206 & 3.003 \(\pm\) 1.377 & 5.671 \(\pm\) 2.52 & 0.332 \(\pm\) 0.14 & 0.254 \(\pm\) 0.121 & 4.1 \(\pm\) 1.955 \\ eu-email-2 & 3.072 \(\pm\) 0.14 & 0.511 \(\pm\) 0.061 & 0.391 \(\pm\) 0.018 & 2.807 \(\pm\) 1.332 & 3.424 \(\pm\) 0.174 & 0.421 \(\pm\) 0.097 & 0.325 \(\pm\) 0.077 & 4.6 \(\pm\) 1.506 \\ eu-email-3 & **2.212 \(\pm\) 0.094** & **0.566 \(\pm\) 0.023** & **0.446 \(\pm\) 0.021** & 3.53 \(\pm\) 1.377 & 3.308 \(\pm\) 0.47 & 0.295 \(\pm\) 0.182 & 0.230 \(\pm\) 0.134 & 4.4 \(\pm\) 2.055 \\ sp-hospital & 48.827 \(\pm\) 1.377 & 0.743 \(\pm\) 0.036 & 0.546 \(\pm\) 0.033 & 5.14 \(\pm\) 1.296 & **3.874 \(\pm\) 3.474** & 0.733 \(\pm\) 0.029 & 0.579 \(\pm\) 0.032 & 5.6 \(\pm\) 1.578 \\ sp-hypertext & **10.802 \(\pm\) 0.277 \(\pm\) 1.872** & 0.792 \(\pm\) 0.024 & 0.605 \(\pm\) 0.027 & 5.367 \(\pm\) 1.299 & **3.745 \(\pm\) 3.364** & 0.802 \(\pm\) 0.027 & 0.617 \(\pm\) 0.027 & 5.5 \(\pm\) 1.088 \\ sp-workplace & 52.506 \(\pm\) 1.479 & 0.629 \(\pm\) 0.041 & 0.454 \(\pm\) 0.023 & 4.097 \(\pm\) 0.491 & **4.145 \(\pm\) 0.58** & 0.723 \(\pm\) 0.065 & 0.517 \(\pm\) 0.058 & 5.1 \(\pm\) 1.101 \\ sp-highschool & **72.697 \(\pm\) 0.169** & 0.743 \(\pm\) 0.018 & 0.505 \(\pm\) 0.050 & **4.567 \(\pm\) 1.104** & 145.323 \(\pm\) 0.512 & **0.860 \(\pm\) 0.010** & **0.678 \(\pm\) 0.002** & 1.200 \(\pm\) 0.422 \\ haggle & **16.635 \(\pm\) 0.399** & **0.755 \(\pm\) 0.005** & **0.618 \(\pm\) 0.006** & 5.4 \(\pm\) 0.983 & 19.266 \(\pm\) 1.043 & 0.637 \(\pm\) 0.034 & 0.520 \(\pm\) 0.043 & 6.8 \(\pm\) 1.229 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for prediction of temporal betweenness centrality
\begin{table}
\begin{tabular}{l|c c c c|c c c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Experiment}} & \multicolumn{4}{c|}{DBGNN} & \multicolumn{4}{c}{GCN} \\ \cline{2-9} & MAE & Spearman & Kendall & hits@10 & MAE & Spearman & Kendall & hits@10 \\ \hline ants-1-1 & **4.306 \(\pm\) 0.269** & **0.931 \(\pm\) 0.005** & **0.790 \(\pm\) 0.008** & **8.533 \(\pm\) 0.629** & 10.514 \(\pm\) 1.082 & 0.422 \(\pm\) 0.092 & 0.288 \(\pm\) 0.066 & 3.100 \(\pm\) 1.101 \\ ants-1-2 & **1.466 \(\pm\) 0.083** & **0.966 \(\pm\) 0.005** & **0.841 \(\pm\) 0.012** & **6.667 \(\pm\) 0.479** & 5.829 \(\pm\) 0.662 & 0.311 \(\pm\) 0.138 & 0.215 \(\pm\) 0.095 & 2.800 \(\pm\) 0.0919 \\ ants-2-1 & **0.852 \(\pm\) 0.026** & **0.981 \(\pm\) 0.003** & **0.985 \(\pm\) 0.01** & **8.400 \(\pm\) 0.** & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for prediction of temporal closeness centrality
highlights differences between static and temporal centralities that are likely due to the underlying temporal patterns, and shows that our model is generally better at predicting temporal closeness compared to betweenness. An evaluation of scalability reveals that our model offers training and inference times that are comparable to those of a simpler GCN model, while yielding better performance. We finally investigate the (static) embeddings produced by the last message passing layer of our architecture and show that they better capture temporal centralities compared to a GCN model.
While we hope that our work is of interest for the Temporal Graph Learning community, this workshop paper necessarily leaves open questions that we will address in future work. A first limitation is that, due to time constraints, we did not perform a comprehensive hyperparameter exploration. This affects multiple aspects of our analysis, such as the investigation of multiple values for the maximum time difference \(\delta\), the maximum order \(k\) of the De Bruijn graphs used in the DBGNN architecture, as well as the number and width of graph convolutional layers. While we do report optimal values across three learning rates for all models, a more thorough investigation of these hyperparameters must be considered future work. A second open issue is the comparison of our approach to a more comprehensive choice of baselines. This comparison should include additional time-neglecting GNN architectures like Graph Attention Networks (GAT) [50], Graph Isomorphism Networks (GINs) [51] or embedding techniques like DeepWalk [52] or node2vec [53]. Moreover, we miss a comparison against other time-aware representation learning techniques like, e.g., HONEM [54] or EVO [55]. While we did not utilize additional node features like, e.g., degrees, static centralities, or embeddings, we expect that including such features could further improve our results.
In the light of these open issues, the investigation presented for the first time in this workshop paper must necessarily appear preliminary. We nevertheless hope that our work shows that the prediction of temporal centralities is an interesting problem for temporal graph learning, which could potentially be included in community-based benchmarks like TGB [17]. Moreover our study highlights the potential of using _compact, static but time-aware graph neural network architectures_ for node-level regression tasks in time series data on temporal graphs. We thus hope that our work is of interest for the Temporal Graph Learning community and we would appreciate feedback and suggestions.
## Acknowledgments

Ingo Scholtes acknowledges support by the Swiss National Science Foundation (SNF), grant number 176938. The authors would also like to thank Lisi Qarkaxhija for the valuable feedback on the code of the DBGNN model.
|
2308.01469 | VertexSerum: Poisoning Graph Neural Networks for Link Inference | Graph neural networks (GNNs) have brought superb performance to various
applications utilizing graph structural data, such as social analysis and fraud
detection. The graph links, e.g., social relationships and transaction history,
are sensitive and valuable information, which raises privacy concerns when
using GNNs. To exploit these vulnerabilities, we propose VertexSerum, a novel
graph poisoning attack that increases the effectiveness of graph link stealing
by amplifying the link connectivity leakage. To infer node adjacency more
accurately, we propose an attention mechanism that can be embedded into the
link detection network. Our experiments demonstrate that VertexSerum
significantly outperforms the SOTA link inference attack, improving the AUC
scores by an average of $9.8\%$ across four real-world datasets and three
different GNN structures. Furthermore, our experiments reveal the effectiveness
of VertexSerum in both black-box and online learning settings, further
validating its applicability in real-world scenarios. | Ruyi Ding, Shijin Duan, Xiaolin Xu, Yunsi Fei | 2023-08-02T23:13:49Z | http://arxiv.org/abs/2308.01469v1 | # VertexSerum: Poisoning Graph Neural Networks for Link Inference
###### Abstract
Graph neural networks (GNNs) have brought superb performance to various applications utilizing graph structural data, such as social analysis and fraud detection. The graph links, e.g., social relationships and transaction history, are sensitive and valuable information, which raises privacy concerns when using GNNs. To exploit these vulnerabilities, we propose VertexSerum, a novel graph poisoning attack that increases the effectiveness of graph link stealing by amplifying the link connectivity leakage. To infer node adjacency more accurately, we propose an attention mechanism that can be embedded into the link detection network. Our experiments demonstrate that VertexSerum significantly outperforms the SOTA link inference attack, improving the AUC scores by an average of \(9.8\%\) across four real-world datasets and three different GNN structures. Furthermore, our experiments reveal the effectiveness of VertexSerum in both black-box and online learning settings, further validating its applicability in real-world scenarios.
## 1 Introduction
Graph Neural Networks (GNNs) have been widely adopted in various domains, such as financial fraud detection [25], social network analysis [19], and heart-failure prediction [6], thanks to their capabilities to model high-dimensional features and complex structural relationships between entities [30]. However, with the increasing use of graph data, concerns about data privacy are also growing [1, 7, 27]. This is particularly relevant in industries such as finance and healthcare, where sensitive relationships are often embedded in graph-structured data.
Recently, there has been a rise in privacy attacks on GNNs [11, 28] that infer the existence of links between nodes in graphs by only querying the graph model, thus posing a threat to the confidentiality of GNNs. For a graph node pair, the similarity of their posterior distributions (abbreviated as "posteriors" [11]) is measured to deduce the link existence. For instance, in the federated learning scenario [10], where different parties keep private data locally but contribute to the GNN training in the cloud based on their data, a malicious contributor can infer links belonging to other contributors by querying trained GNN models. In this context, the risks of link information leakage lie in the joint training of GNNs and the available GNN inference APIs on graph data.
In this work, we identified a limitation of the existing link-inferring attacks: they do not perform well if the interested node pairs are from the same category (intra-class). This is due to the high similarity of the posterior distributions between node pairs in the same category. To overcome this limitation, we propose a novel approach to significantly improve link inference attacks, particularly on intra-class node pairs, by allowing a malicious contributor to poison the graph during GNN training in an unnoticeable way.
This paper proposes a novel privacy-breaching data poisoning attack on GNNs, **VertexSerum1**, with a new analysis strategy. The attack aims to amplify the leakage of private link information by modifying nodes/vertices. This work makes the following contributions:
Footnote 1: The name is inspired by Veritaserum in the Harry Potter series.
1. We propose a new evaluation metric, intra-class AUC score, for link inference attacks, by considering only node pairs from the same class. This new metric overcomes the bias of the prior works that do not differentiate between inter-class and intra-class, and brings valuable insights for our approach.
2. We introduce the first privacy-breaching data poisoning attack on GNNs, which injects adversarial noise into a small portion (\(<\) 10%) of the training graph to amplify the graph's link information leakage. We constructively employ a self-attention-based network to train the link detector and propose a pre-training strategy to overcome the overfitting issue of limited training data.
3. We demonstrate the effectiveness of the proposed link inference attack on popular GNN structures and graph datasets. The attack improves the link stealing AUC score by \(9.8\%\) compared to the SOTA method in [11].
4. We consider the practicality of applying VertexSerum by evaluating its homophily noticeability of the poisoned graph and the victim model accuracy. The experimental results show that VertexSerum increases model privacy leakage without affecting the GNN performance.
## 2 Background and Related Work
### Graph Neural Networks
Graph Neural Networks (GNNs) are widely used in semi-supervised graph node classification tasks [30]. A graph, denoted as \(G\)=\((V,E)\), has a topology with a set of nodes \(V\) and edges/links \(E\). This work focuses on undirected homogeneous graphs, commonly studied in graph theory and network analysis [5, 6, 16, 19, 25, 29]. A link between node \(u\) and \(v\) is represented by \((u,v)\in E\), while its absence is \((u,v)\notin E\). Each node has features \(x\) and a corresponding categorical label \(y\) for a classification task. Together with the graph, node features and labels compose the dataset used for GNN training and validation, denoted as \(D\)=\(\{G,\mathbf{X},\mathbf{Y}\}\). After training, a neural network model for the graph is generated, denoted as \(f\), where the model output \(f(u)\) represents the posterior probabilities of node \(u\) over the classes. The main GNN architectures for node classification include Graph Convolutional Network (GCN) [13], Graph SAmple and aggreGatE (GraphSAGE) [9], and Graph Attention Network (GAT) [24]. These models, with different neural network architectures, all learn to aggregate feature information from a node's local neighborhood, whose receptive field is bounded by the model depth. Different from previous works that do not differentiate between nodes in the graph for evaluation, we specifically analyze intra-class node pairs, which refer to nodes in the same class.
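As a concrete reference for the neighborhood aggregation described above, the following is a minimal single-layer GCN-style update written directly from the normalized-adjacency formulation; it is a didactic sketch with dense tensors, not the DGL implementation used later in this paper.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One GCN-style layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, H):                    # A: (N, N) dense adjacency
        A_hat = A + torch.eye(A.size(0))        # add self-loops
        deg = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(deg.pow(-0.5))  # symmetric normalization
        return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(H))
```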
### Link Inference Attack
GNNs, like other machine learning models, are susceptible to various privacy attacks that compromise the confidentiality of sensitive information within the data. These include membership inference attacks [15], adversarial graph injection attacks [20], graph modification attacks [32], and link privacy attacks [11, 28]. Stealing Link Attack [11] was the first link privacy attack, where the graph structure information is inferred from the prediction results of the GNN model, i.e., posterior distributions of nodes. Another attack, LinkTeller [28], takes into account the influence propagation during GNN training for link inference. However, LinkTeller requires the attacker to have access to the graph's node features \(\mathbf{X}\), a much stronger threat model than ours, in which the attacker only accesses the posterior distributions of the nodes of interest, a more realistic scenario.
### Enhance Privacy Leakage via Data Poisoning
Data poisoning is an effective method to manipulate the behavior of the victim model during training by intentionally introducing malicious training samples into the benign dataset [31]. The recent work [3] poisons the training dataset with a small number of crafted samples, with incorrect labels, which results in a trained model that overfits the training data, significantly increasing the success rate of membership inference attacks. Inspired by this previous _membership_ leakage amplification via data poisoning on conventional deep learning models, this work shows that properly crafted data poisoning is also able to amplify _link_ leakage of the graph in GNNs, posing a significant privacy threat to GNNs. Data poisoning on GNNs can be achieved by modifications made to node features, node labels, or the graph structure. We choose to poison node features with small perturbations to make the attack stealthy. Our attack is more effective than the state-of-the-art link inference attacks [11, 28] with a specific focus on intra-class inference.
## 3 Observations and Insights
### Link Inference Attack Does Not Always Work
Previous research of link inference attacks on GNNs has demonstrated good performance in predicting the existence of links among overall node pairs [11]. The GNN model is queried, and the similarity of the posterior distributions of the node pair is calculated for a link detector, which returns the prediction of whether a link exists between these two nodes. Although the performance on overall node pairs tends to be good, when considering only intra-class node pairs, i.e., inferring the link existence of node pairs from the same class, the effectiveness is much lower. This is due to several reasons: (1) though it is common to select equal numbers of linked and unlinked node pairs for evaluation, the distribution of inter-class and intra-class node pairs in both sets is highly unbalanced: while the majority of linked node pairs are intra-class, most of the unlinked node pairs are inter-class; (2) the posterior distributions of intra-class nodes are much more similar than those of inter-class nodes. We demonstrate the characteristics of the node-pair distributions in Table 1. If we only consider node pairs from the same classes, their posterior distributions will be similar regardless of whether they are linked or not. The different success rates of the link inference attack on node pairs from the entire graph and from only one class are reflected by the AUC scores, presented in the last two columns of Table 1, and also visualized in Figure 1. As the visualization shows, in the top row across three different datasets, the linked node pairs and unlinked node pairs are easily distinguishable among the _overall node pairs_, while the bottom row shows that for intra-class node pairs, the two distributions are not easily separable, indicating the difficulty of link inference. To address this issue, we propose a new metric, the _intra-class AUC score_, to evaluate the link inference attack's performance within a single class, as presented in Column 5 of Table 1.

\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Benchmark & R\({}_{linked}\) & R\({}_{unlinked}\) & AUC\({}_{all}\) & AUC\({}_{1}\) \\ \hline Cora & 0.81 : 0.19 & 0.18 : 0.82 & 0.907 & 0.874 \\ Citeseer & 0.74 : 0.26 & 0.18 : 0.82 & 0.987 & 0.912 \\ AMZPhoto & 0.83 : 0.27 & 0.16 : 0.84 & 0.919 & 0.813 \\ AMZComputer & 0.78 : 0.22 & 0.21 : 0.79 & 0.913 & 0.826 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Node pairs’ distribution analysis. R is the ratio of intra-class vs. inter-class node pairs, among all linked node pairs and all unlinked node pairs. AUC reflects the success rate of link inference attacks, where AUC\({}_{all}\) considers overall node pairs and AUC\({}_{1}\) considers only node pairs from one class, e.g., class 1.
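To make the proposed metric concrete, the following sketch computes the intra-class AUC from the posteriors returned by querying the victim model. It is a minimal illustration, not the authors' evaluation code: the exhaustive pair enumeration, the `detector` callable, and all names are our assumptions.

```python
import itertools
from sklearn.metrics import roc_auc_score

def intra_class_auc(posteriors, edge_set, class_nodes, detector):
    """AUC of a link detector restricted to node pairs of a single class.
    posteriors: (N, C) array of softmax outputs from querying the GNN;
    edge_set: set of (u, v) node-id tuples; class_nodes: ids of one class;
    detector: callable mapping two posteriors to a link score."""
    labels, scores = [], []
    for u, v in itertools.combinations(class_nodes, 2):
        labels.append(int((u, v) in edge_set or (v, u) in edge_set))
        scores.append(detector(posteriors[u], posteriors[v]))
    return roc_auc_score(labels, scores)  # needs both labels present
```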
### Graph Poisoning Threat to GNNs
Data poisoning on graph neural networks can be achieved on various entries. For example, in social networks, an adversarial user can create fake accounts or modify their profile deliberately. As GNNs applied to these graphs must be frequently retrained or fine-tuned, an attack surface is created for malicious parties to compromise the GNN performance or privacy by crafting malicious data. Specifically in federated learning, where a common structural graph is used by distributed contributors who provide data for training, malicious parties may upload carefully poisoned data into the graph in a stealthy and unobtrusive way. Graph poisoning attacks are easy to conduct, difficult to detect, and highly effective in compromising GNNs. Our proposed attack shows that through data poisoning, the link leakage of intra-class nodes can be significantly amplified and link inference can be effectively accomplished.
## 4 VertexSerum - The Proposed Attack
In this section, we illustrate our proposed privacy-breaching data poisoning attack - VertexSerum. The overview of the attack procedure is presented in Figure 2.
### Threat Model
**Adversary's Goal.** The attack targets GNN-based classifiers, which utilize node features and the graph topology to predict the labels for querying nodes. The attacker aims to deduce the connectivity between any pair of nodes \((u,v)\) belonging to class \(k\) by querying a pre-trained GNN model.
**Adversary's Knowledge.** We assume the attacker has limited access to the vendor's GNN as they can only acquire the interested nodes' posteriors through queries. We also assume that the attacker has access to a portion of the graph, as in federated learning, where the attacker acts as a distributed contributor to provide data for the training dataset, which can be intentionally poisoned. Note these assumptions align with "Attack-3" in the state-of-the-art link inference attack [11] and are practical. We limit the portion of the graph that the attacker can manipulate, such as \(10\%\) of the entire graph, which is more practical and realistic in federated learning settings.
### Inspiration from ML Poisoning
In the conventional machine learning (ML) regime, poisoning the training dataset with tainted data can expose user data privacy [3, 22], e.g., by injecting label-tampered samples into the training data, forcing the victim model to overfit on specific features of each sample, thereby exacerbating its membership leakage. However, the potential of such data poisoning schemes has not been explored in attacking the link privacy of GNNs. This work bridges this knowledge gap by crafting samples in the training dataset to strengthen the GNN model's attention on node connections, making the model produce more similar outputs for linked nodes and increasing the dissimilarity between unlinked nodes. Rather than generating abnormal labels, which may be detected by outlier detection tools, we induce poisoned features with small perturbations via Projected Gradient Descent (PGD), allowing us to achieve attack stealthiness.

Figure 1: Visualization on link inference, overall vs. intra-class. We randomly sampled 200 node pairs (100 linked + 100 unlinked) from all nodes (all) and only the second class (1). The dots are the PCA projections of the similarities of node pair posteriors, where dots in red represent linked pairs and dots in gray represent unlinked pairs. The farther apart the two distributions are, the easier link inference can be.

Figure 2: Overview of VertexSerum with Self-Attention Detector.
### Attack Flow of VertexSerum
VertexSerum aims to steal the true link information of interested node pairs. The attack is carried out between a model vendor \(\mathcal{V}\) and a malicious contributor \(\mathcal{A}\). The vendor has access to the entire graph dataset \(D\)={\(G,\mathbf{X},\mathbf{Y}\)} and trains a downstream task with a public training algorithm \(\mathcal{T}\)2. The adversary contributes a small portion of the dataset, \(D_{p}\)={\(G_{p},\mathbf{X}_{p},\mathbf{Y}_{p}\)}, containing a partial graph \(G_{p}\), which is used for both generating the poisoning sub-graph and training the link detector. The attack steps are:
Footnote 2: We assume the GNN type is open to the adversary for the ease of evaluation. We also demonstrate the effectiveness of VertexSerum in Section 5.7, when the adversary has no clue of the GNN model.
1. The adversary chooses a target class \(k\) from the label space \(\mathbf{Y}\). The attack goal is to predict the link existence between nodes \(u,v\), i.e., if \((u,v)\in E\), when \(y_{u}\)=\(y_{v}\)=\(k\).
2. Following the steps in Lines 1-6 of Algorithm 1, the adversary _generates a partial dataset \(D^{\prime}_{p}\) with a poisoned graph_\(G^{\prime}_{p}\) by analyzing a shadow model trained on \(G_{p}\), as depicted in the shadow part in Figure 2, and _sends it to the vendor_.
3. The vendor trains a GNN model for downstream tasks \(f_{\theta}\leftarrow\mathcal{T}(D\cup D^{\prime}_{p})\) on the poisoned graph \(G\cup G^{\prime}_{p}\).
4. The adversary queries the GNN model, \(f_{\theta}\), with the possessed poisoned partial graph \(G^{\prime}_{p}\) and generates similarities of posteriors. _Binary link detectors_ are constructed to infer link existence, as shown in the right bottom part of Figure 2 and detailed in Lines 8-10 of Algorithm 1.
5. The adversary makes a guess \(\hat{z}\) on the link existence of \((u,v)\) with the link detectors (Lines 11-13).
Our attack utilizes data poisoning to breach the confidentiality of GNNs: the poisoned graph \(G^{\prime}_{p}\) is used in the victim GNN model training, with an objective to amplify the model privacy leakage.
### Requirements of the Poisoning Nodes
For Step 2 of the attack, to generate a graph that enhances the model's aggregation on linked nodes, we design a specific poisoned graph \(G^{\prime}_{p}\) that makes the GNN model \(f_{\theta}\) focus more on adjacency. Next, we outline requirements for successful node poisoning:
1. **Intact Community.** The adversary should ensure that the node classification accuracy for the victim task is not evidently affected, so that the poisoned graph is less likely to be rejected by the vendor for GNN training. Besides, misclassified nodes can negatively impact passing information to adjacent linked nodes, leading to an overall lower aggregation capability for the GNN model.
2. **Node Attraction and Repulsion.** The poisoned samples should simultaneously promote the similarity of the GNN outputs on linked nodes (attraction) and the dissimilarity on unlinked nodes (repulsion). This requires a balance between the attraction and repulsion of node features when poisoning the dataset.
3. **Adversarial Robustness.** Adversarial training techniques [17, 21] can improve a model's robustness against adversarial samples, where the model tolerates small input perturbations and outputs similar predictions. In VertexSerum, we utilize adversarial training to increase the model's adversarial robustness, guiding linked nodes with similar features to produce similar posteriors.
### Crafting Poisoning Features via PGD
To meet these requirements, we propose a graph poisoning method optimized with projected gradient descent (PGD). We adopt the shadow training methods [11, 18], where the attacker first trains a shadow GNN (\(f^{sh}_{\theta}\)) on the possessed partial graph \(G_{p}\). The optimal perturbation to add to node features is found based on the gradient of the loss function shown in Eq. 1.
\[L=\alpha L_{attraction}+\beta L_{repulsion}+\lambda L_{CE} \tag{1}\]
The loss function includes three terms, with \(\alpha,\beta,\lambda\) as positive coefficients to balance attraction and repulsion:
1. The attraction loss penalizes the Euclidean distance of posteriors on two linked nodes. The PGD will find node features that reduce the distance between linked nodes. \[L_{attraction}=-\sum_{(u,v)\in E}(f_{\theta}^{sh}(u)-f_{\theta}^{sh}(v))^{2} \tag{2}\]
2. The repulsion term computes the cosine similarity between unlinked nodes. The rationale is that the cosine similarity is bounded, which avoids an overly large dissimilarity term. The PGD will find the node features that reduce the similarity between unlinked nodes. \[L_{repulsion}=\sum_{\begin{subarray}{c}u,v\in V,u\neq v,\\ (u,v)\notin E\end{subarray}}(1-cos(f_{\theta}^{sh}(u),f_{\theta}^{sh}(v)))^{2} \tag{3}\]
3. The cross-entropy term \(L_{CE}\) serves as a regularization in the loss function. Its goal is to improve the victim model's adversarial robustness to amplify link leakage.
The previous poisoning attack includes regularization of perturbations, such as the L1 norm, during optimization. However, we observed that this term is not necessary for the PGD process if we have a small updating step size \(\epsilon\). By only optimizing Eq. 1, the generated perturbation is already effective and unnoticeable.
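Below is a minimal PyTorch sketch of one PGD step over the poisoning objective in Eq. 1. It is an illustration under our own assumptions, not the released attack code: we assume the shadow model takes `(g, x)` like a DGL module, that the perturbation ascends the combined loss (consistent with the signs of Eqs. 2-3 and with adversarial training for the cross-entropy term), and that index tensors for linked/unlinked pairs and attacker-owned nodes are precomputed.

```python
import torch
import torch.nn.functional as F

def pgd_poison_step(shadow_model, g, x, y, linked, unlinked, mask,
                    alpha=1.0, beta=0.01, lam=1.0, eps=1e-3):
    """One ascent step of the PGD poisoning objective (Eq. 1).
    linked/unlinked: (2, M) index tensors of node pairs; mask: poisoned nodes."""
    x = x.clone().detach().requires_grad_(True)
    out = shadow_model(g, x)                       # shadow posteriors (N, C)
    u, v = linked
    l_att = -((out[u] - out[v]) ** 2).sum()        # attraction, Eq. 2
    p, q = unlinked
    l_rep = ((1 - F.cosine_similarity(out[p], out[q], dim=1)) ** 2).sum()  # Eq. 3
    l_ce = F.cross_entropy(out[mask], y[mask])     # regularization term
    loss = alpha * l_att + beta * l_rep + lam * l_ce
    loss.backward()
    with torch.no_grad():
        x[mask] += eps * x.grad[mask].sign()       # small signed ascent step
    return x.detach()
```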
### Self-attention Link Detector
In Step 4 of the attack, the adversary trains a link detector using the posteriors of the partial graph by querying the pre-trained vendor model. Previous work [11] used a Multi-Layer Perceptron (MLP) to analyze the similarity features of the node pair posteriors. However, the dense structure of an MLP is often inadequate to capture the complex dependencies among similarity features. Furthermore, since the attacker only has a small part (\(<10\%\)) of the graph, training an MLP is prone to instability due to overfitting. Moreover, since VertexSerum introduces more complex characteristics such as attraction and repulsion during poisoning, the underlying patterns in the similarity features are expected to be more informative. To address these issues, we propose an improvement to the MLP model with a multihead self-attention [23] link detector, which can efficiently use information by selectively attending to different parts of the input similarity features. We follow the same construction of similarity features as the previous method [11], consisting of eight distances and four entropy features between two nodes. To ensure stability of the self-attention detector on a small dataset, we initialize its first embedding layer with the first fully-connected layer from the MLP. The experimental results in Table 2 in the next section show that the introduction of self-attention improves the attack AUC score by an average of \(7.2\%\) with the standard deviation dropping by \(35\%\).
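A minimal sketch of such a detector is given below. The exact layout is our assumption (in particular how the 12-dimensional similarity feature is arranged before attention); the 64-dimensional embedding and 16 heads follow the configuration reported in Section 5.1.

```python
import torch
import torch.nn as nn

class AttnLinkDetector(nn.Module):
    """Self-attention link detector over 12-dim pair similarity features."""
    def __init__(self, in_dim=12, embed_dim=64, heads=16):
        super().__init__()
        self.embed = nn.Linear(in_dim, embed_dim)  # init from the MLP's layer 1
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 2)        # linked vs. unlinked logits

    def forward(self, feats):                      # feats: (B, 12)
        h = self.embed(feats).unsqueeze(1)         # (B, 1, 64) token sequence
        h, _ = self.attn(h, h, h)                  # multihead self-attention
        return self.head(h.squeeze(1))             # (B, 2)
```

To mirror the pre-training strategy described above, `self.embed` can be copied from the first fully-connected layer of an MLP detector trained for a few epochs before fine-tuning the attention layers.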
## 5 Experiments
### Experimental Setup
**Datasets:** We evaluate the effectiveness of VertexSerum on four publicly available datasets: Citeseer [13], Cora [13], Amazon Photo Dataset [14], and Amazon Computer Dataset [14]. These datasets cover different daily-life scenarios and are widely used as benchmarks for evaluating graph neural networks. The first two datasets are citation networks where nodes represent publications, and links indicate citations among them. The last two datasets are co-purchase graphs from Amazon, where nodes represent products, and edges represent the co-purchased relations of products. Our benchmarks scale from (3k nodes + 11k edges) for Cora to (14k nodes + 492k edges) for AMZComputer. We assume the vendor's model is trained on \(80\%\) of the nodes and evaluated on the remaining in the graph.
Since we assume the attacker only contributes a small portion of the graph for training, i.e., \(G^{\prime}_{p}\), we sample \(10\%\) of the nodes from the training dataset. To train the link detector, we collect all linked node pairs and randomly sample the same number of unlinked node pairs in \(G^{\prime}_{p}\). Similarity features are computed based on these node pairs, following [11], together with the corresponding link information. We split this dataset into \(80\%\) for training and \(20\%\) for validation.
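For concreteness, the sketch below builds the 12-dimensional pair feature from two posteriors. The eight pairwise distances are a common choice following [11]; the four entropy-based features are our assumption of simple pairwise combinations of the two posterior entropies.

```python
import numpy as np
from scipy.spatial import distance
from scipy.stats import entropy

DISTANCES = [distance.cosine, distance.euclidean, distance.correlation,
             distance.chebyshev, distance.braycurtis, distance.canberra,
             distance.cityblock, distance.sqeuclidean]  # 8 distance features

def pair_features(pu, pv):
    """12-dim similarity feature for a node pair from its posteriors."""
    d = [fn(pu, pv) for fn in DISTANCES]
    eu, ev = entropy(pu), entropy(pv)
    e = [(eu + ev) / 2, eu * ev, abs(eu - ev), (eu - ev) ** 2]  # 4 entropy feats
    return np.asarray(d + e, dtype=np.float32)
```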
**Metric:** ROC-AUC is a commonly used evaluation metric for binary classification tasks and has also been applied in previous works on link inference [11, 28]. It measures the ability of the link detector to distinguish between linked and unlinked node pairs. A higher AUC indicates superior performance of the link detector in identifying linked node pairs from unlinked ones.
In addition to the overall AUC, we also evaluate the intra-class AUC. The overall AUC measures the ability of the link detector to identify linked node pairs among all classes, while the intra-class AUC measures this ability within one class. As mentioned in Section 3.1, a successful link inference attack should have a high overall AUC as well as a high intra-class AUC. Without loss of generality, we set Class 1 as the target class to evaluate the performance of the link inference attack.
**Models:** We evaluate VertexSerum on three commonly used GNN structures: GCN [13], GraphSAGE [9], and GAT [24]. The Deep Graph Library (DGL) is used for model implementation [26]. We construct a 3-layer MLP as the baseline link detector, with the first layer containing 64 hidden neurons, which also serves as the initialization for the self-attention link detector. The self-attention detector has a 16-head attention structure with an input dimension of 64. For initialization, we train the MLP for 50 epochs with a learning rate of 0.001. We then fine-tune the self-attention detector with a learning rate of 0.0001, using the cross-entropy loss and the Adam optimizer [12]. We run all experiments 10 times and report the average and standard deviation of the AUC scores.
### Graph Visualization
Figure 3 displays part of the poisoned graph of VertexSerum on a 3-layer GraphSAGE model trained on the Cora dataset with different distortions \(N\epsilon\). By injecting poisoned samples into the partial graph while maintaining the topology, the PGD objective loss induces corresponding attraction and repulsion forces between nodes, resulting in increased attention to linked nodes. As the distortion increases from \(0\) to \(1\), the node colors shift to demonstrate attraction to linked nodes and repulsion to unlinked nodes.
### Attack Performance
We evaluate the effectiveness of VertexSerum (VS), including both the poisoning method and the self-attention-based (ATTN) link detector. The prior stealing link attack (SLA) [11] serves as the SOTA baseline for comparison, as it shares a similar threat model with our attack. SLA uses similarity features and an MLP-based link detector to attack a graph neural network, without poisoning. We compare the performance of different attack strategies and link detector structures, and report intra-class AUC scores in Table 2.
VertexSerum with the attention detector significantly improves the performance of link inference attacks for all datasets and GNN models. Compared to the method using SLA with MLP, our attack has an average improvement of \(9.8\%\) on AUC scores. Note that the self-attention-based link detector significantly improves the attack performance even without poisoning the datasets (see the two "SLA + ATTN" rows in Table 2). This is because the multi-head attention structure models the dependencies between elements in the similarity features, better exposing the link existence during inference. On the other hand, using VertexSerum with MLP alone does not improve the detection performance on some datasets, such as Citeseer and AMZPhoto. We posit that VertexSerum enforces the GNN to learn more about the connections between nodes, adding more hidden information to the similarity features, which MLPs lack the capacity to capture. However, by combining VertexSerum with our proposed self-attention link detector, the poisoning works effectively towards increasing the link leakage.
We also demonstrate the intra-class AUC scores by varying the target class, taking the Cora dataset with the GraphSAGE model in Figure 4 as an example. We can draw the same conclusion as above on the link inference attack: not only does the self-attention detector greatly outperform the MLP detector, but the poisoning also boosts link detection. Further, we demonstrate that VertexSerum still preserves the highest effectiveness of link inference over all classes. We show the overall AUC scores in Table 3, assuming the GNN model is based on GraphSAGE. Besides the elevated attack success, we can explicitly observe that the overall AUC scores are higher than the intra-class AUC scores. This also affirms our observation discussed in Section 3.1 that evaluation on overall node pairs yields higher performance than that on intra-class node pairs.

\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline Model & \multicolumn{2}{c|}{GCN} & \multicolumn{2}{c|}{GAT} & \multicolumn{2}{c}{GraphSAGE} \\ \hline Dataset & Citeseer & Cora & Citeseer & Cora & Citeseer & Cora \\ \hline SLA + MLP [11] & 0.914\(\pm\)0.008 & 0.874\(\pm\)0.018 & 0.969\(\pm\)0.002 & 0.845\(\pm\)0.011 & 0.972\(\pm\)0.002 & 0.854\(\pm\)0.009 \\ \hline SLA + ATTN & 0.951\(\pm\)0.064 & 0.903\(\pm\)0.067 & 0.980\(\pm\)0.003 & 0.868\(\pm\)0.029 & 0.976\(\pm\)0.007 & 0.931\(\pm\)0.029 \\ \hline VS + MLP & 0.892\(\pm\)0.006 & 0.912\(\pm\)0.065 & 0.913\(\pm\)0.005 & 0.856\(\pm\)0.017 & 0.949\(\pm\)0.007 & 0.859\(\pm\)0.027 \\ \hline VS + ATTN (*) & **0.978\(\pm\)0.033** & **0.927\(\pm\)0.023** & **0.997\(\pm\)0.002** & **0.924\(\pm\)0.022** & **0.994\(\pm\)0.006** & **0.957\(\pm\)0.007** \\ \hline Dataset & AMZPhoto & AMZComputer & AMZPhoto & AMZComputer & AMZPhoto & AMZComputer \\ \hline SLA + MLP [11] & 0.813\(\pm\)0.015 & 0.826\(\pm\)0.018 & 0.881\(\pm\)0.007 & 0.820\(\pm\)0.046 & 0.873\(\pm\)0.015 & 0.883\(\pm\)0.004 \\ \hline SLA + ATTN & 0.917\(\pm\)0.037 & 0.956\(\pm\)0.007 & 0.963\(\pm\)0.011 & 0.889\(\pm\)0.066 & 0.972\(\pm\)0.009 & 0.978\(\pm\)0.005 \\ \hline VS + MLP & 0.780\(\pm\)0.007 & 0.849\(\pm\)0.009 & 0.917\(\pm\)0.006 & 0.852\(\pm\)0.033 & 0.873\(\pm\)0.032 & 0.898\(\pm\)0.004 \\ \hline VS + ATTN (*) & **0.939\(\pm\)0.018** & **0.962\(\pm\)0.011** & **0.990\(\pm\)0.008** & **0.919\(\pm\)0.031** & **0.987\(\pm\)0.006** & **0.985\(\pm\)0.006** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of the average AUC with standard deviation for different attacks on the four datasets. The best results are highlighted in bold. (*) denotes our proposed method.

Figure 3: A visualization of nodes and edges belonging to the target class from the original (\(N\epsilon=0\)) and poisoned (\(N\epsilon>0\)) partial graphs. Node color represents the low-dimensional embedding of the GNN model’s output, i.e., the node posteriors. **Color’s similarity** indicates **posteriors’ similarity**.

Figure 4: The AUC score along each target class. We take a case study on the Cora dataset (7 classes in total) with GraphSAGE as the GNN model.
### Attack Stealthiness
We evaluate the stealthiness of VertexSerum from two perspectives: homophily unnoticeability and model accuracy. Homophily unnoticeability is an important metric for graph adversarial attacks and is defined as the node-centric homophily distribution shift between the clean and poisoned graphs being upper-bounded by a threshold, which ensures that the malicious nodes are not easily detectable by the database administrators [4]. We visualize the homophily distribution of the benign and poisoned graphs in Figure 5. It is clear that VertexSerum can effectively preserve the homophily while still conducting effective poisoning. The lower tables in Figure 5 present the model accuracy before and after poisoning, demonstrating that VertexSerum only introduces trivial accuracy changes. From the vendor's perspective, the new accuracy is observed after re-training; the trivial difference thus ensures stealthiness, i.e., the vendor will not stop using the poisoned graph due to poor performance.
### Ablation Study
#### 5.5.1 Influence of the Depth of GNN
We conduct an evaluation of our attack on the GraphSAGE model with varying numbers of layers (depth) in the GNN \(f_{\theta}\). The results are shown in Figure 6, where the blue line illustrates the attack AUC scores, while the pink dashed lines indicate the training and testing accuracy. As the number of layers increases, the GNN progressively aggregates information from neighborhoods across multiple hops, leading to overly similar output representations on linked nodes, known as over-smoothing [2].
When GNNs have only one layer, the attack is harder because of the lack of aggregated information between linked nodes. VertexSerum shows good performance when the number of layers is greater than 1, as more hops of neighbors are taken into consideration. Meanwhile, the model training and testing accuracy decreases as the number of layers increases because of over-smoothing, where the representations of nodes become similar after multi-layer message passing. Consequently, the attack performance slightly drops due to the lower model accuracy. This is a concerning observation since the attack success rate is tied to the model accuracy: a well-performing model is also highly vulnerable to link inference attacks.
#### 5.5.2 Impact of Different Loss Terms
In designing our PGD objective loss in Eq. 1, we consider a trade-off between the attraction loss, repulsion loss, and cross-entropy loss by controlling the corresponding regularization strength terms \(\alpha,\beta,\) and \(\lambda\). We compare the attack performance using different tuples of regularization weights in Table 4. We find that the optimal choice is \((\alpha,\beta,\lambda)\)=\((1,0.01,1)\), where the repulsion weight is much smaller than the others. This is due to the imbalance between the number of linked and unlinked node pairs, which leads to a high repulsion loss, and this choice balances the effect of the repulsion loss and attraction loss.

\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{\(\beta=0.01\)} & \multicolumn{2}{c|}{\(\beta=0.1\)} & \multicolumn{2}{c}{\(\beta=1\)} \\ \cline{2-7} & \(\lambda=0.1\) & \(\lambda=1\) & \(\lambda=0.1\) & \(\lambda=1\) & \(\lambda=0.1\) & \(\lambda=1\) \\ \hline \(\alpha=0.1\) & 0.914 & 0.942 & 0.931 & 0.943 & 0.954 & 0.953 \\ \hline \(\alpha=1\) & 0.952 & **0.963** & 0.954 & 0.953 & 0.946 & 0.945 \\ \hline \(\alpha=10\) & 0.949 & 0.947 & 0.950 & 0.949 & 0.925 & 0.926 \\ \hline \hline \end{tabular}
\end{table}
Table 4: AUC scores of the VertexSerum attack on GraphSAGE for the Cora dataset with different regularization strengths.

\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & Cora & Citeseer & AMZPhoto & AMZComputer \\ \hline SLA+MLP [11] & 0.907\(\pm\)0.001 & 0.987\(\pm\)0.001 & 0.919\(\pm\)0.020 & 0.913\(\pm\)0.043 \\ SLA+ATTN & 0.994\(\pm\)0.008 & **0.995\(\pm\)0.001** & 0.947\(\pm\)0.005 & 0.962\(\pm\)0.005 \\ VS+MLP & 0.945\(\pm\)0.003 & 0.978\(\pm\)0.013 & 0.946\(\pm\)0.010 & 0.900\(\pm\)0.055 \\ VS+ATTN & **0.997\(\pm\)0.012** & 0.994\(\pm\)0.001 & **0.956\(\pm\)0.001** & **0.968\(\pm\)0.004** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the overall AUC scores for different tasks on the GraphSAGE model, by inferring the link between node pairs from all classes.

Figure 5: Homophily analysis on graph poisoning. Cora and AMZPhoto are selected as the case study. The top histogram plots show the node homophily before and after the poisoning attack, where high coincidence of the distributions means the two graphs have high homophily. The lower tables demonstrate the model accuracies on the graphs before and after poisoning, showing that the accuracy is barely affected by the poisoning.

Figure 6: Performance of our attack on the GraphSAGE model with varying numbers of layers. The blue line represents the attack AUC scores, while the pink dashed lines indicate the training and testing accuracies.
### Online Poisoning on GNNs
Graph neural networks in practice are not always trained offline, but multiple contributors may provide data at different times for online training. This is particularly relevant in scenarios such as recommendation systems, where models are frequently updated with incoming user behavior data. In this section, we investigate a training scenario where the vendor's model is trained batch-by-batch as the data arrives. We divide the dataset into eight batches, each representing a different contributor. We select one of the contributors as the adversary and use VertexSerum to poison the corresponding partial graph. The model is updated in order as the contributors arrive, and we evaluate the attack performance when the adversarial contributor arrives at different times.
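The sketch below illustrates this online protocol. All names are ours and the training routine is left abstract; it only makes explicit that the poisoned batch enters the incremental updates at a chosen round.

```python
def online_train(model, batches, poisoned_batch, poison_round, train_fn):
    """Incrementally update `model` as contributor batches arrive in order.
    The adversary's poisoned batch replaces the clean one at `poison_round`."""
    for t, clean_batch in enumerate(batches):
        data = poisoned_batch if t == poison_round else clean_batch
        train_fn(model, data)   # one incremental update on the arriving data
    return model
```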
Figure 7 presents the attack AUC when the adversarial batch arrives at different times during online training. We observe that poisoning the early batches is more effective than poisoning the last batch. This is likely because the early batches have a long-term effect on fitting the online model, while the poisoning data in the last round is only fitted during the last update. Further, the poisoning attack on offline training yields better results. Since the poisoning exists throughout the offline training, the model fitting on the benign batches is also consistent throughout the training, akin to poisoning at an early time. Overall, VertexSerum is effective for both online and offline training on GNNs.
### Transferability in the Black-Box Setting
In previous evaluations, we assumed that the attacker has prior knowledge of the vendor model's architecture and training process, which is a gray-box setting. In this section, we extend our evaluation to the black-box setting, where the attacker has no knowledge of the victim model's architecture and configuration. We investigate the transferability of VertexSerum, where the attacker trains the shadow model on the subgraph using a different architecture from the vendor model. For instance, the attacker may train the shadow model using GAT when the vendor model is trained using GraphSAGE. Figure 8 shows the results under the black-box setting. We find that even without knowledge of the vendor model structure, the attacker can still achieve high performance using VertexSerum. Interestingly, the attacker achieves the highest AUC scores when using GAT as the shadow model to generate the poison examples. We hypothesize that GAT has higher generalizability in estimating the real decision boundary of the vendor model, making the poison samples from GAT more effective.
## 6 Defense
There are two potential directions to defend against the VertexSerum attack. The first is to blur the perturbation. Our poisoning samples are similar to adversarial samples, i.e., clean features with small added noise. Thus, it is possible to slightly change the training samples through preprocessing methods such as denoising or augmentation, without harming the model accuracy. The second is to increase the GNN's robustness against link stealing attacks. One way to achieve this is to build GNNs with certified robustness using differential privacy [8]. Alternatively, the vendor can train the GNN with an appropriate depth to avoid over-smoothing or over-fitting.
## 7 Conclusions
In this paper, we investigate the vulnerability of graph neural networks to privacy leakage amplified by data poisoning. We propose VertexSerum, a link inference attack combining data poisoning with a self-attention link detector, which achieves significantly better attack performance on intra-class nodes. We conduct extensive evaluations under different attack settings, including gray-box, offline training, online training, and black-box. As graph neural networks become increasingly popular, our findings pose a new challenge to the confidentiality of the structural datasets used by GNNs. This work serves as a cautionary note to model vendors, informing them of possible privacy exposure of their training datasets and calling for more follow-on work to build robust GNNs against such privacy-breaching attacks.
Figure 8: The attack performance when the vendor model is unknown and trained on the Cora dataset, where the attacker uses arbitrary GNN structures to train the shadow model.
Figure 7: Performance of our attack on the GraphSAGE model under the online training setting. The blue line in the plot represents the attack AUC scores, and the x-axis represents different poisoning times during online training. |
2303.14470 | Compacting Binary Neural Networks by Sparse Kernel Selection | Binary Neural Network (BNN) represents convolution weights with 1-bit values,
which enhances the efficiency of storage and computation. This paper is
motivated by a previously revealed phenomenon that the binary kernels in
successful BNNs are nearly power-law distributed: their values are mostly
clustered into a small number of codewords. This phenomenon encourages us to
compact typical BNNs and obtain further close performance through learning
non-repetitive kernels within a binary kernel subspace. Specifically, we regard
the binarization process as kernel grouping in terms of a binary codebook, and
our task lies in learning to select a smaller subset of codewords from the full
codebook. We then leverage the Gumbel-Sinkhorn technique to approximate the
codeword selection process, and develop the Permutation Straight-Through
Estimator (PSTE) that is able to not only optimize the selection process
end-to-end but also maintain the non-repetitive occupancy of selected
codewords. Experiments verify that our method reduces both the model size and
bit-wise computational costs, and achieves accuracy improvements compared with
state-of-the-art BNNs under comparable budgets. | Yikai Wang, Wenbing Huang, Yinpeng Dong, Fuchun Sun, Anbang Yao | 2023-03-25T13:53:02Z | http://arxiv.org/abs/2303.14470v1 | # Compacting Binary Neural Networks by Sparse Kernel Selection
###### Abstract
Binary Neural Network (BNN) represents convolution weights with 1-bit values, which enhances the efficiency of storage and computation. This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed: their values are mostly clustered into a small number of codewords. This phenomenon encourages us to compact typical BNNs and obtain further close performance through learning non-repetitive kernels within a binary kernel subspace. Specifically, we regard the binarization process as kernel grouping in terms of a binary codebook, and our task lies in learning to select a smaller subset of codewords from the full codebook. We then leverage the Gumbel-Sinkhorn technique to approximate the codeword selection process, and develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords. Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
## 1 Introduction
It is crucial to design compact Deep Neural Networks (DNNs) that allow model deployment on resource-constrained embedded devices, since most powerful DNNs, including ResNets [12] and DenseNets [15], are storage-costly with deep and rich building blocks piled up. Plenty of approaches have been proposed to compress DNNs, among which network quantization [17, 51, 53] is able to reduce memory footprints as well as accelerate the inference speed by converting full-precision weights to discrete values. Binary Neural Networks (BNNs) [3, 16] belong to the family of network quantization but further constrain the parameter representations to binary values (\(\pm 1\)). In this way, the model is largely compressed. More importantly, floating-point additions and multiplications in conventional DNNs are largely replaced by bit-wise operations that are well supported by fast inference accelerators [37], particularly when activations are binarized as well. To some extent, this makes BNNs more computationally efficient than other compression techniques, e.g., network pruning [11, 13, 30] and switchable models [44, 49, 50].
Whilst a variety of methods have been proposed to improve the performance of BNNs, seldom is there a focus on how the learnt binary kernels are distributed in BNNs. A recent work, SNN [45], demonstrates that, by choosing typical convolutional BNN models [32, 36, 37] well trained on ImageNet and displaying the distribution of the \(3\times 3\) kernels over all possible \(2^{3\times 3}\) binary values (_a.k.a._ codewords), these kernels nearly obey a power-law distribution: only a small portion of codewords are activated most of the time. Such a phenomenon is re-illustrated in Figure 1(b). This observation motivates SNN to restrict the size of the codebook by removing those hardly-selected codewords. As a result, SNN is able to compact BNNs further, since indexing the kernels with a smaller codebook results in a compression ratio of \(\log_{2}(n)/\log_{2}(N)\), where \(n\) and \(N\) are respectively the sizes of the compact and full codebooks.
However, given that the size of the codebook is limited (only \(512\)), the sub-codebook degenerates during training since codewords are likely to become repetitive. Therefore, we believe the clustering property of kernels can be further exploited during the training of BNNs. To do so, we reformulate the binary quantization process as a grouping task that selects, for each kernel, the nearest codeword from a binary sub-codebook which is obtained by selecting optimal codewords from the full one. To pursue an optimal solution and retain the non-repetitive occupancy of the selected codewords, we first convert the sub-codebook selection problem to a permutation learning task. However, learning the permutation matrix is non-differentiable since the permutation matrix is valued with only 0/1 entries. Inspired by the idea in [34], we introduce the Gumbel-Sinkhorn operation to generate a continuous and differentiable approximation of the permutation matrix. During training, we further develop the Permutation Straight-Through Estimator (PSTE), a novel method that tunes the approximated permutation matrix end-to-end while maintaining the binary property of the selected codewords. The details are provided in SS 3.2 and SS 3.3. We further provide the complexity analysis in SS 3.4.
Extensive results on image classification and object detection demonstrate that our architecture noticeably reduces the model size as well as the computational burden. For example, by representing ResNet-18 with 0.56-bit per weight on ImageNet, our method brings in 214\(\times\) saving of bit-wise operations, and 58\(\times\) reduction of the model size. Though state-of-the-art BNNs have achieved remarkable compression efficiency, we believe that further compacting BNNs is still beneficial, by which we can adopt deeper, wider, and thus more expressive architectures without exceeding the complexity budget than BNNs. For example, our 0.56-bit ResNet-34 obtains 1.7% higher top-1 accuracy than the state-of-the-art BNN on ResNet-18, while its computational costs are lower and the storage costs are almost the same.
Existing methods [24, 25] (apart from SNN [45]) that also attempt to obtain more compact models than BNNs are quite different from ours, as will be described in SS 2. One of the crucial points is that their codewords are sub-vectors of (flattened) convolution weights across multiple channels, whereas each of our codewords corresponds to a complete kernel that maintains the spatial dimensions (width and height) of a single channel. The reason why we formulate the codebook in this way stems from the observation in Figure 1(b), where the kernels are sparsely clustered. Differently, as shown in Figure 1(a), the codewords are nearly uniformly activated if the codebook is constructed from flattened sub-vectors, which could be because the patterns of the input are spatially selective but channel-wise uniformly distributed. Our method hence has the potential to recover better expressivity of BNNs by following this natural characteristic. In addition, we optimize the codewords via non-repetitive selection from a fixed codebook, which rigorously ensures the dissimilarity between every two codewords and thus enables more capacity than the product quantization method used in [24], as compared in Figure 1(c)(d). On ImageNet with the same backbone, our method exceeds [24] and [25] by \(6.6\%\) and \(4.5\%\) top-1 accuracies, respectively.
## 2 Related Work
**BNNs.** Network quantization methods [7, 17, 51, 53] convert network weights to low-bit values and are appealing for resource-limited devices given their superiority in efficiency. As an extreme solution of quantization, BNNs [3, 16, 37] represent weights and activations with 1-bit (\(\pm 1\)) values, bringing a 32\(\times\) storage compression ratio and a 58\(\times\) practical computational reduction on CPU as reported by [37]. BNNs usually adopt a non-differentiable sign function during the forward pass and the Straight-Through Estimator (STE) [3] for gradient back-propagation. Many attempts have been made to narrow the performance gap between BNNs and their real-valued counterparts. XNOR-Net [37] adopts floating-point parameters as scaling factors to reduce the quantization error. Bi-Real [31] proposes to add ResNet-like shortcuts to reduce the information loss during binarization. ABC-Net [28] linearly combines multiple binary weight bases to further approximate full-precision weights. ReActNet [32] generalizes activation functions to capture distribution reshaping and shifting. New architectures for BNNs can be searched [39] or designed [4] to further improve the trade-off between performance and efficiency.
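For reference, the sign-forward/STE-backward mechanism mentioned above can be written in a few lines of PyTorch; this is a generic sketch of the standard trick, not any particular paper's code.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(w). Backward: pass the gradient through, clipped to
    |w| <= 1 as is common practice for BNNs."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # clipped straight-through
```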
**Compacting BNNs.** Our work focuses on an orthogonal venue and investigates how to compact BNNs further. Previously, SNN [45] revealed that binary kernels learnt at convolutional layers of a BNN model are likely to be distributed over kernel subsets. Based on this, SNN randomly samples layer-specific binary kernel subsets and refines them during training. However, the optimization of SNN easily attains repetitive binary kernels, _i.e._, degenerated subsets, leading to a noticeable performance drop compared with conventional BNNs. Another method sharing a similar motivation with us is the fractional quantization (FleXOR) [25], which encrypts the sub-vectors of flattened weights into low-dimensional binary codes. Yet, FleXOR cannot track which weights share the same encrypted code, and thus the compressed model needs to be decrypted back to the full BNN for inference. In our method, the reconstruction of the corresponding BNN is unnecessary, since the computation can be realized in the space of codewords, leading to a further reduction of bit-wise computations as detailed in SS 3.4. Another work close to ours is SLBF [24], which applies the idea of stacking low-dimensional filters [48] and product quantization [19, 42, 9] to BNNs. Similar to [25], this method splits the (flattened) weights into sub-vectors as codewords along the channel direction. As already demonstrated in SS 1, our method leverages kernel-wise codebook creation via selection-based codeword optimization, yielding much lower quantization errors than [24, 25].

Figure 1: Codebook distributions under different decomposition approaches. Statistics in both sub-figures are collected from the same BNN model (XNOR-Net [37] upon ResNet-18 [12]) well-trained on ImageNet [5]. In (a), each codeword is a flattened sub-vector (of size \(1\times 9\)). In (b), each codeword is a \(3\times 3\) convolution kernel. The codebook in either sub-figure consists of \(2^{9}=512\) different codewords. Upper tables provide the total percentages of the top-\(n\) most frequent codewords. In (c), we observe that the sub-codebook highly degenerates during training, since codewords tend to be repetitive when being updated independently. While in (d), the diversity of codewords is preserved, which implies the superiority of our selection-based learning.
## 3 Sparse Kernel Selection
In this section, we introduce how to compact and accelerate BNN further by Sparse Kernel Selection, abbreviated as **Sparks**. Towards this goal, we first formulate the quantization process as grouping convolution kernels into a certain binary codebook. We then show that a more compact sub-codebook can be learnt end-to-end via Gumbel-Sinkhorn ranking. To enable the optimization of the ranking while keeping the binary property, we further propose the Permutation Straight-Through Estimator (PSTE) technique with the convergence analysis. Finally, we contrast the complexity of the model with BNN, and demonstrate that our method is able to not only compress BNN further but also accelerate the speed of BNN during inference.
### Binarization below 1-Bit
Prior to going further, we first provide the notations and necessary preliminaries used in our paper. Let \(\mathbf{W}\in\mathbb{R}^{C_{\text{out}}\times C_{\text{in}}\times K\times K}\) be the convolution weights, where \(C_{\text{out}}\) and \(C_{\text{in}}\) are the numbers of output and input channels, respectively, and \(K\) is the kernel size. As discussed in SS 1, we are interested in the quantization of each specific kernel, denoted with a lowercase letter as \(\mathbf{w}\in\mathbb{R}^{K\times K}\). The quantization process aims to map full-precision weights to a smaller set of discrete finite values. Specifically, we apply the \(\mathrm{sign}\) function to each kernel, resulting in \(\hat{\mathbf{w}}=\mathrm{sign}(\mathbf{w})\in\mathbb{B}\), where \(\mathbb{B}=\{-1,+1\}^{K\times K}\) is the set of binary kernels with 1-bit per weight. In what follows, \(\mathbb{B}\) is called the (full) codebook, and each element in \(\mathbb{B}\) is a codeword.
Generally, the quantization can be rewritten as an optimization problem \(\hat{\mathbf{w}}=\mathrm{arg}\min_{\mathbf{u}\in\mathbb{B}}\|\mathbf{u}-\mathbf{w}\|_{2}\) that groups each kernel \(\mathbf{w}\) to its nearest codeword in \(\mathbb{B}\), where \(\|\cdot\|_{2}\) denotes the \(\ell_{2}\) norm; intuitively, the squared distance decomposes element-wise, and each entry is minimized by matching the sign of the corresponding weight. We state this equivalence in the form below, and the proof is provided in the Appendix.
**Property 1**: _We denote \(\mathbb{B}=\{-1,+1\}^{K\times K}\) as the codebook of binary kernels. For each \(\mathbf{w}\in\mathbb{R}^{K\times K}\), the binary kernel \(\hat{\mathbf{w}}\) can be derived by a grouping process:_
\[\hat{\mathbf{w}}=\mathrm{sign}(\mathbf{w})=\mathrm{arg}\min_{\mathbf{u}\in\mathbb{B}}\| \mathbf{u}-\mathbf{w}\|_{2}. \tag{1}\]
Since the codebook size \(|\mathbb{B}|=2^{K\times K}\), the memory complexity of BNN is equal to \(K\times K\). Given Equation 1, one may want to know if we can further reduce the complexity of BNN by, for example, sampling a smaller subset of the codebook \(\mathbb{B}\) to replace \(\mathbb{B}\) in Equation 1. This is also motivated by Figure 1(b) where the learnt kernels of BNNs are sparsely clustered into a small number of codewords. In this way, each kernel is represented below \(K^{2}\)-bits and thus averagely, each weight is represented less than 1-bit. We thus recast the grouping as
\[\hat{\mathbf{w}}=\operatorname*{arg}\min_{\mathbf{u}\in\mathbb{U}}\|\mathbf{u}-\mathbf{w}\|_{2 },\text{ s.t. }\mathbb{U}\subseteq\mathbb{B}. \tag{2}\]
We denote \(|\mathbb{U}|=n\) and \(|\mathbb{B}|=N\). By Equation 2, each binary kernel occupies \(\log_{2}(n)\) bits as it can be represented by an index in \(\{1,2,\cdots,n\}\). Thus we obtain a compression ratio of \(\log_{2}(n)/\log_{2}(N)\).
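As a sanity check of Property 1 and of the sub-1-bit indexing of Equation 2, the following PyTorch sketch enumerates the full codebook for \(K=3\), performs the nearest-codeword grouping, and verifies that it coincides with \(\mathrm{sign}(\cdot)\) when \(\mathbb{U}=\mathbb{B}\); it is an illustration, not the training code.

```python
import itertools
import torch

K = 3
# Full codebook B: all 2**(K*K) = 512 binary kernels, flattened to 9-dim rows.
B = torch.tensor(list(itertools.product([-1.0, 1.0], repeat=K * K)))

def group_to_codebook(w, U):
    """Nearest-codeword grouping (Eq. 2): w is (M, K, K), U is (n, K*K)."""
    d = torch.cdist(w.reshape(-1, K * K), U)   # pairwise L2 distances
    idx = d.argmin(dim=1)                      # each kernel stores log2(n) bits
    return U[idx].reshape(w.shape), idx

w = torch.randn(4, K, K)
w_hat, _ = group_to_codebook(w, B)             # grouping with the full codebook
assert torch.equal(w_hat, torch.sign(w))       # Property 1: equals sign(w)
```

With a sub-codebook of, e.g., \(n=32\) codewords, each kernel is stored as a 5-bit index, i.e., \(\log_{2}(32)/9=5/9\approx 0.56\) bits per weight, matching the configuration mentioned in SS 1.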
Different choices of the sub-codebook \(\mathbb{U}\) from \(\mathbb{B}\) potentially deliver different performance. How can we determine a proper selection? One possible solution is to optimize codewords and kernels simultaneously by making use of the product quantization method [42, 24]. Nevertheless, this method updates each codeword independently and is prone to deriving repetitive codewords when the optimization space is constrained (as values are limited to \(\pm 1\)). As a consequence, it hinders the diversity of the codebook and limits the expressivity of the model, which will be verified in SS 4.2. Another straightforward method would be sampling the most frequent \(n\) codewords from a learnt target BNN (depicted in Figure 1(b)). Yet, such a solution is suboptimal since it solely depends on the weight distribution of the eventual model without involving the specific training dynamics. In the following subsections, we will propose to tackle the sub-codebook selection via an end-to-end approach while retaining the non-repeatability of the codewords.
### Sub-Codebook Selection via Permutation
We learn a permutation of codewords in \(\mathbb{B}\) according to their effects on the target loss function, so that the selection of the first \(n\) codewords is able to optimize the final performance of the target task. The designed permutation learning keeps the binary property of the selected codewords.
For convenience, we index codewords in \(\mathbb{B}\) as a matrix column by column, formulated as \(\mathbf{B}=[\mathbf{u}_{1};\mathbf{u}_{2};\cdots;\mathbf{u}_{N}]\in\{\pm 1\}^{K^{2}\times N}\), where each codeword in \(\mathbb{B}\) is flattened to a \(K^{2}\)-dimensional vector. Similarly, we convert \(\mathbb{U}\) as \(\mathbf{U}=[\mathbf{u}_{s_{1}};\mathbf{u}_{s_{2}};\cdots,\mathbf{u}_{s_{n}}]\in\{\pm 1\}^{K^{2} \times n}\), where \(s_{i}\in\{1,2,\cdots,N\}\) is the index of the \(i\)-th selected codeword. We denote the selection matrix as \(\mathbf{V}\in\{0,1\}^{N\times n}\), then
\[\mathbf{U}=\mathbf{BV}, \tag{3}\]
where the entries of \(\mathbf{V}\) satisfy \(\mathbf{V}_{s_{i},i}=1\) for \(i=1,\cdots,n\) and are zeros otherwise. The selection by Equation 3 is permutation-dependent; in other words, if we permute the element of \(\mathbb{B}\), we may obtain different \(\mathbb{U}\). Hence, how to select \(\mathbb{U}\) becomes how to first permute \(\mathbb{B}\) and then output \(\mathbb{U}\) by Equation 3. We denote \(\mathbb{P}_{N}\) the set of \(N\)-dimensional permutation matrices: \(\mathbb{P}_{N}=\{\mathbf{P}\in\{0,1\}^{N\times N}\,|\,\mathbf{P}\mathbf{1}_{N}=\mathbf{1}_{N},\mathbf{P}^{\top}\mathbf{1}_{N}=\mathbf{1}_{N}\}\), where \(\mathbf{1}_{N}\) is an \(N\)-dimensional column vector of ones. The optimization problem in Equation 2 is transformed into
\[\hat{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{U}}\|\mathbf{u}-\mathbf{w}\|_ {2},\text{ s.t. }\mathbf{U}=\mathbf{BPV},\mathbf{P}\in\mathbb{P}_{N}, \tag{4}\]
where \(\mathbf{V}\) is fixed as a certain initial selection.
Now, the goal is how to determine a proper permutation matrix \(\mathbf{P}\). Basically, we can design a neural network to output \(\mathbf{P}\), and then embed it into the off-the-shelf CNN for the downstream task. Unfortunately, this pipeline fails as the permutation matrix \(\mathbf{P}\) is discrete, with entries restricted to 0 or 1, making the permutation network non-differentiable. Following recent advances in permutation learning, we leverage the method proposed by [1] that approximates the permutation matrix by its continuous and differentiable relaxation--the Sinkhorn operator [41].
Given a matrix \(\mathbf{X}\in\mathbb{R}^{N\times N}\) (\(N=|\mathbb{B}|\)), the Sinkhorn operator \(\mathcal{S}(\mathbf{X})\) proceeds as follows,
\[\mathcal{S}^{0}(\mathbf{X}) =\exp(\mathbf{X}), \tag{5}\] \[\mathcal{S}^{k}(\mathbf{X}) =\mathcal{T}_{c}\left(\mathcal{T}_{r}(\mathcal{S}^{k-1}(\mathbf{X}))\right),\] (6) \[\mathcal{S}(\mathbf{X}) =\lim_{k\rightarrow\infty}\mathcal{S}^{k}(\mathbf{X}), \tag{7}\]
where \(\mathcal{T}_{r}(\mathbf{X})=\mathbf{X}\oslash(\mathbf{X}\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\) and \(\mathcal{T}_{c}(\mathbf{X})=\mathbf{X}\oslash(\mathbf{1}_{N}\mathbf{1}_{N}^{\top}\mathbf{X})\) are the row-wise and column-wise normalization operators, and \(\oslash\) denotes element-wise division. For numerical stability, both normalization operators are computed in the log domain in practice. The work of [41] proved that \(\mathcal{S}(\mathbf{X})\) belongs to the Birkhoff polytope--the set of doubly stochastic matrices.
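For illustration, a log-domain implementation of the truncated operator \(\mathcal{S}^{k}\) might look as follows; this is a sketch under our own naming, not the paper's code:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn(X, k):
    """Truncated Sinkhorn operator S^k(X) of Equations 5-7.

    Row and column normalizations are done in log space and
    exponentiated only at the end, matching the stability trick above.
    """
    log_p = np.asarray(X, dtype=np.float64)  # log S^0(X) = X
    for _ in range(k):
        log_p = log_p - logsumexp(log_p, axis=1, keepdims=True)  # T_r (rows)
        log_p = log_p - logsumexp(log_p, axis=0, keepdims=True)  # T_c (cols)
    return np.exp(log_p)  # approximately doubly stochastic for large k
```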
By adding a temperature \(\tau\), it can be proved that \(\lim_{\tau\to 0^{+}}\mathcal{S}(\mathbf{X}/\tau)=\operatorname*{arg\,max}_{\mathbf{P}\in\mathbb{P}_{N}}\langle\mathbf{P},\mathbf{X}\rangle_{F}\) holds almost surely [34], where \(\langle\cdot,\cdot\rangle_{F}\) denotes the Frobenius inner product. This means that, for sufficiently large \(k\) and small \(\tau\), \(\mathcal{S}^{k}(\mathbf{X}/\tau)\) approximates the permutation matrix closest to \(\mathbf{X}\). Inspired by [18], we also add Gumbel noise so that the result follows the Gumbel-Matching distribution \(\mathcal{G}.\mathcal{M}.(\mathbf{X})\), namely \(\mathcal{S}^{k}((\mathbf{X}+\epsilon)/\tau)\), where \(\epsilon\) is sampled from the standard i.i.d. Gumbel distribution.
By substituting the Gumbel-Sinkhorn matrix into Equation 3, we characterize the sub-codebook selection as
\[\mathbf{U}=\mathbf{B}\mathcal{S}^{k}((\mathbf{X}+\epsilon)/\tau)\mathbf{V}, \tag{8}\]
where \(\mathbf{V}\) is fixed as a certain initial selection as mentioned, \(\mathbf{X}\) is a learnable parameter, and \(k\) and \(\tau\) are hyper-parameters. For \(\mathbf{V}\), we simply let the entries be zero except \(\mathbf{V}_{i,i}=1\) for \(i=1,\cdots,n\), which amounts to selecting the first \(n\) columns of \(\mathbf{B}\mathcal{S}^{k}((\mathbf{X}+\epsilon)/\tau)\).
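Equation 8 then amounts to perturbing \(\mathbf{X}\) with Gumbel noise before the Sinkhorn iterations. A sketch reusing the `sinkhorn` helper above is given below; the defaults \(k=10\) and \(\tau=10^{-2}\) follow the paper's hyper-parameters, while the function name and structure are our assumptions:

```python
import numpy as np

def gumbel_sinkhorn_select(X, B, V, tau=1e-2, k=10, rng=None):
    """Equation 8: U = B S^k((X + eps)/tau) V with Gumbel noise eps."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=X.shape)
    eps = -np.log(-np.log(u))             # standard Gumbel(0, 1) samples
    P_gs = sinkhorn((X + eps) / tau, k)   # relaxed permutation P_GS
    return B @ P_gs @ V, P_gs             # relaxed sub-codebook and P_GS
```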
### Learning by PSTE
Since both \(k\) and \(\tau\) are finite, the Gumbel-Sinkhorn matrix \(\mathbf{P}_{\mathrm{GS}}=\mathcal{S}^{k}((\mathbf{X}+\epsilon)/\tau)\) is not strictly a permutation matrix with 0/1 entries. This violates the binary property of \(\mathbf{U}\) in Equation 8, making the binarization of Equation 2 meaningless. To address this issue, we derive the exact permutation matrix \(\mathbf{P}_{\mathrm{real}}\) underlying \(\mathbf{P}_{\mathrm{GS}}\) with the Hungarian algorithm [35] during the forward pass. Treating \(\mathbf{P}_{\mathrm{GS}}\) as a reward matrix, deriving \(\mathbf{P}_{\mathrm{real}}\) becomes an assignment problem that the Hungarian method solves in polynomial time. We summarize the forward update of the convolution kernel \(\mathbf{w}_{c}\in\mathbb{R}^{K^{2}}\) for each input and output channel as follows,
\[\mathbf{P}_{\mathrm{real}} =\mathrm{Hungarian}(\mathbf{P}_{\mathrm{GS}}), \tag{9}\] \[\mathbf{U} =\mathbf{B}\mathbf{P}_{\mathrm{real}}\mathbf{V},\] (10) \[\hat{\mathbf{w}}_{c} =\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{U}}\|\mathbf{u}-\mathbf{w}_{c} \|_{2}, \tag{11}\]
where \(\mathrm{Hungarian}(\cdot)\) denotes the Hungarian algorithm.
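A sketch of this forward pass for a single flattened kernel is given below; `linear_sum_assignment` is SciPy's Hungarian-style solver, and the inner-product shortcut in the last step uses the fact that all \(\pm 1\) codewords have equal norm (names are ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def forward_pass(w, B, V, P_gs):
    """Equations 9-11 for one flattened kernel w of shape (K*K,)."""
    # Equation 9: treat P_GS as a reward matrix; the solver minimizes
    # cost, hence the minus sign.
    rows, cols = linear_sum_assignment(-P_gs)
    P_real = np.zeros_like(P_gs)
    P_real[rows, cols] = 1.0
    U = B @ P_real @ V                 # Equation 10: hard sub-codebook
    # Equation 11: nearest codeword == largest inner product, since all
    # columns of U have identical norm sqrt(K*K).
    j = int(np.argmax(U.T @ w))
    return U[:, j], U, P_real
```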
In the backward pass, we transfer the gradient of the exact permutation matrix directly to the Gumbel-Sinkhorn matrix. This is inspired by the Straight-Through Estimator (STE) technique [3] in previous literature; we call our method PSTE as it specializes STE to permutation learning. The backward pass is depicted below,
\[\mathbf{g}(\mathbf{w}_{c,i}) \approx\ \begin{cases}\mathbf{g}(\hat{\mathbf{w}}_{c,i}),&\text{if }\mathbf{w}_{c,i}\in(-1,1)\,,\\ 0,&\text{otherwise},\end{cases} \tag{12}\] \[\mathbf{g}(\mathbf{u}_{j}) =\sum_{c=1}^{C_{\mathrm{in}}\times C_{\mathrm{out}}}\mathbf{g}( \hat{\mathbf{w}}_{c})\cdot\mathbb{I}_{\mathbf{u}_{j}=\operatorname*{arg\,min}_{\mathbf{u }\in\mathbb{U}}\|\mathbf{u}-\mathbf{w}_{c}\|_{2}},\] (13) \[\mathbf{g}(\mathbf{P}_{\mathrm{real}}) =\ \mathbf{B}^{\top}\mathbf{g}(\mathbf{U})\mathbf{V}^{\top},\] (14) \[\mathbf{g}(\mathbf{P}_{\mathrm{GS}}) \approx\ \mathbf{g}(\mathbf{P}_{\mathrm{real}}), \tag{15}\]
where \(\mathbf{g}(\cdot)\) computes the gradient. \(\mathbf{w}_{c,i}\) and \(\hat{\mathbf{w}}_{c,i}\) denote the \(i\)-th entries of \(\mathbf{w}_{c}\) and \(\hat{\mathbf{w}}_{c}\), respectively, with \(i=1,2,\cdots,K^{2}\). \(\mathbb{I}_{\{\cdot\}}\) defines the indicator function. Particularly, Equation 12 follows the idea of STE and Equation 13
assigns the gradient of the binary weight to its nearest codeword. In practice, all forward and backward passes can be implemented by matrix/tensor operations, and thus our method is computationally friendly on GPUs.
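The rules of Equations 12-13 can be packaged into a custom autograd function. The PyTorch sketch below is our own reading of the method (the class and tensor names are ours), with the chain back to \(\mathbf{P}_{\mathrm{GS}}\) (Equations 14-15) left as plain matrix products outside the function:

```python
import torch

class KernelGrouping(torch.autograd.Function):
    """Nearest-codeword quantization with the PSTE backward rules."""

    @staticmethod
    def forward(ctx, w, U):
        # w: [M, K^2] flattened kernels; U: [K^2, n] selected codewords.
        idx = torch.cdist(w, U.t()).argmin(dim=1)  # nearest codeword ids
        ctx.save_for_backward(w, idx)
        ctx.n = U.shape[1]
        return U.t()[idx]                          # w_hat per Equation 11

    @staticmethod
    def backward(ctx, g_w_hat):
        w, idx = ctx.saved_tensors
        g_w = g_w_hat * (w.abs() < 1).to(g_w_hat.dtype)  # Eq. 12: STE clip
        g_u = torch.zeros(ctx.n, w.shape[1], dtype=g_w_hat.dtype,
                          device=g_w_hat.device)
        g_u.index_add_(0, idx, g_w_hat)            # Eq. 13: scatter-add
        return g_w, g_u.t()

# Equations 14-15 outside the function:
#   g_P_real = B.T @ g_U @ V.T;  g_P_GS is then taken to be g_P_real.
```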
An overall framework including the forward and backward processes is illustrated in Figure 2.
**Convergence analysis.** Besides using STE to update \(\mathbf{w}\) in Equation 12, we approximate the gradient of the Gumbel-Sinkhorn matrix \(\mathbf{P}_{\mathrm{GS}}\) with that of \(\mathbf{P}_{\mathrm{real}}\) in Equation 15, which inevitably perturbs the training dynamics. Fortunately, the following results guarantee convergence for sufficiently large \(k\) and small \(\tau\).
**Lemma 1**: _For sufficiently large k and small \(\tau\), we define the entropy of a doubly-stochastic matrix \(\mathbf{P}\) as \(h(\mathbf{P})=-\sum_{i,j}P_{i,j}\log P_{i,j}\), and denote the rate of convergence for the Sinkhorn operator as \(r\left(0<r<1\right)\)1. There exists a convergence series \(s_{\tau}\) (\(s_{\tau}\to 0\) when \(\tau\to 0^{+}\)) that satisfies_
Footnote 1: The Sinkhorn operator has a rate of convergence \(r\) bounded by a value lower than \(1\) as proved by [22].
\[\|\mathbf{P}_{\mathrm{real}}-\mathbf{P}_{\mathrm{GS}}\|_{2}^{2}=\mathcal{O}\big{(}s_{ \tau}^{2}+r^{2k}\big{)}. \tag{16}\]
**Theorem 1**: _Assume that the training objective \(f\) w.r.t. \(\mathbf{P}_{\mathrm{GS}}\) is \(L\)-smooth, and the stochastic gradient of \(\mathbf{P}_{\mathrm{real}}\) is bounded by \(\mathbb{E}\|\mathbf{g}(\mathbf{P}_{\mathrm{real}})\|_{2}^{2}\leq\sigma^{2}\). Denote the rate of convergence for the Sinkhorn operator as \(r\left(0<r<1\right)\) and the stationary point as \(\mathbf{P}_{\mathrm{GS}}^{*}\). Let the learning rate of PSTE be \(\eta=\frac{c}{\sqrt{T}}\) with \(c=\sqrt{\frac{f(\mathbf{P}_{\mathrm{GS}}^{0})-f(\mathbf{P}_{\mathrm{GS}}^{*})}{L\sigma^{2}}}\). For a uniformly chosen \(\mathbf{u}\) from the iterates \(\{\mathbf{P}_{\mathrm{real}}^{0},\cdots,\mathbf{P}_{\mathrm{real}}^{T}\}\), concretely \(\mathbf{u}=\mathbf{P}_{\mathrm{real}}^{t}\) with probability \(p_{t}=\frac{1}{T+1}\), it holds in expectation over the stochasticity and the selection of \(\mathbf{u}\):_
\[\mathbb{E}\|\nabla f(\mathbf{u})\|_{2}^{2}=\mathcal{O}\left(\sigma\sqrt{\frac{f( \mathbf{P}_{\mathrm{GS}}^{0})-f(\mathbf{P}_{\mathrm{GS}}^{*})}{T/L}}+L^{2}\big{(}s_{ \tau}^{2}+r^{2k}\big{)}\right)\,. \tag{17}\]
Note that in Theorem 1, the objective function \(f\) could be a non-convex function, which accords with the case when using a neural network. Proofs for Lemma 1 and Theorem 1 are provided in Appendix.
### Complexity Analysis During Inference
**Storage**. We consider convolutional layers with \(3\times 3\) kernels. In a conventional binary convolutional layer, the weights require \(C_{\text{out}}\times C_{\text{in}}\times K\times K\) bits. For our method, we only store the sub-codebook \(\mathbb{U}\) and the index of each kernel given by Equation 2. Storing \(\mathbb{U}\) needs \(n\times K\times K\) bits, where \(n=|\mathbb{U}|\). Since \(n\leq N=2^{K\times K}\ll C_{\text{out}}\times C_{\text{in}}\) for many popular CNNs (_e.g._, ResNets [12]), particularly when all layers share the same \(\mathbb{U}\) as in our implementation, the storage is dominated by the indices. Indexing one kernel needs \(\log_{2}(n)\) bits, hence indexing all kernels takes \(C_{\text{out}}\times C_{\text{in}}\times\log_{2}(n)\) bits. As a result, the storage ratio of our method relative to a conventional BNN is \(\log_{2}(n)/(K\times K)=\log_{2}(n)/\log_{2}(N)\leq 1\).
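This ratio is where the sub-bit model names come from; a quick arithmetic check, assuming \(K=3\):

```python
import math

K = 3
for n in (16, 32, 64, 128):
    bits_per_weight = math.log2(n) / (K * K)
    print(n, round(bits_per_weight, 2))
# 16 -> 0.44, 32 -> 0.56, 64 -> 0.67, 128 -> 0.78 bits per weight
```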
**Computation**. We adopt BOPs to measure the computational costs, following the calculation method in [31, 32, 33, 37], where each BOP represents two bit-wise operations. In a conventional BNN, the convolution between the input feature maps (\(C_{\text{in}}\times H\times W\)) and the weights (\(C_{\text{out}}\times C_{\text{in}}\times K\times K\)) takes \(\mathrm{BOPs}=H\times W\times C_{\text{in}}\times K^{2}\times C_{\text{out}}\), where \(H\) and \(W\) are the height and width of the feature map, respectively. For our method, the kernel grouping implies that some weights share the same value, which enables us to further reduce \(\mathrm{BOPs}\). To do so, we pre-calculate the convolutions between the input feature maps (\(C_{\text{in}}\times H\times W\)) and each codeword (\(K\times K\)). For convenience, we reshape the codeword to \(1\times C_{\text{in}}\times K\times K\) by repeating it over the second dimension \(C_{\text{in}}\) times. The pre-convolution for all codewords then gives rise to a tensor \(\mathcal{T}\) (\(n\times C_{\text{in}}\times H\times W\)) and costs \(\mathrm{BOPs}_{1}=H\times W\times C_{\text{in}}\times K^{2}\times n\). We then reconstruct the convolution result between the input feature maps and the convolution weights from the pre-calculated tensor \(\mathcal{T}\). Specifically, for each output channel, we query the codeword indices of the input channels _w.r.t._ \(\mathbb{U}\), collect the corresponding feature maps from \(\mathcal{T}\), and sum them as the final result. This process consumes \(\mathrm{BOPs}_{2}=C_{\text{out}}\times(C_{\text{in}}\times H\times W-1)/2\). Therefore, Sparks needs \(\mathrm{BOPs}_{1}+\mathrm{BOPs}_{2}\), which is far less than \(\mathrm{BOPs}\) when \(K=3\), as \(n<C_{\text{out}}\) in general.
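The two-stage computation can be sketched in float arithmetic as below; a deployed kernel would replace the convolutions with xnor/popcount bit operations, and `padding=1` assumes \(3\times 3\) kernels with same-size output (names and shapes are our assumptions):

```python
import torch
import torch.nn.functional as F

def sparks_conv2d(x, codewords, idx):
    """x: [1, C_in, H, W]; codewords: [n, K, K] float tensor of +/-1;
    idx: [C_out, C_in] long tensor of codeword indices per kernel."""
    c_in = x.shape[1]
    # Stage 1 (BOPs_1): convolve each input channel with each codeword
    # once, giving T of shape [C_in, n, H, W].
    T = F.conv2d(x.transpose(0, 1), codewords.unsqueeze(1), padding=1)
    # Stage 2 (BOPs_2): per output channel, gather the pre-computed maps
    # by codeword index and sum over input channels.
    ar = torch.arange(c_in)
    out = torch.stack([T[ar, idx[c]].sum(dim=0)
                       for c in range(idx.shape[0])])
    return out.unsqueeze(0)  # [1, C_out, H, W]
```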
Figure 2: **Left:** A schematic overview of the optimization process. Full-precision weights are binarized by grouping to the nearest codeword of the sub-codebook \(\mathbb{U}\), which is obtained by Gumbel-Sinkhorn and optimized by PSTE. Forward and backward passes illustrate how the network calculates and updates. **Right**: Relationship of notations for a better understanding. The horizontal axis and vertical axis stand for the values of \(k\) and \(\tau\), respectively. Only parameters in the green region are actually calculated during training, while the others are shown only for understanding purposes. None of these parameters are needed during inference.
## 4 Experiments
Our method is evaluated on two tasks: image classification and object detection (in Appendix). For image classification, we contrast the performance of our Sparks with state-of-the-art (SOTA) methods on CIFAR10 [23] and ImageNet (ILSVRC2012) [5] following the standard data splits.
**Implementation.** We follow the standard binarization in ReActNet [32] and perform a two-stage training. First, the network is trained from scratch with binarized activations and real-valued weights. Second, the network takes the weights from the first step and both weights and activations are binarized. As suggested by [31, 37], we keep the weights and activations in the first convolutional and the last fully-connected layers to be real-valued. More implementation details (_e.g._, learning rate, epoch) are in the Appendix.
We speed up training by a factor of two by exploiting the symmetry of the binary kernels: for each codeword in the codebook \(\mathbb{B}\), its "opposite" codeword (with all signs flipped) is also contained in \(\mathbb{B}\). Speed-up details are given in the Appendix. Hyper-parameters are \(k=10\) and \(\tau=10^{-2}\) by default.
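The pairing behind this speed-up is easy to see under a lexicographic codeword ordering (our assumption from the earlier sketch; the paper's own enumeration may differ): negating all entries of the codeword at index \(i\) yields the codeword at index \(N-1-i\), so only one half of the codebook needs to be searched.

```python
import itertools
import numpy as np

K = 3
B = np.array(list(itertools.product([-1, 1], repeat=K * K)), dtype=np.int8).T
N = B.shape[1]

i = 37  # arbitrary example index
# Sign-flipping mirrors the lexicographic index: column i vs column N-1-i.
assert np.array_equal(B[:, i], -B[:, N - 1 - i])
```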
On ImageNet, Sparks needs 30.2 hours to train ResNet-18 on 8 V100s, while the BNN baseline [32] needs 24.5 hours; the computation overhead is acceptable. Training Sparks can be further accelerated as presented in the Appendix.
Calculations of storage and BOPs savings are based on the measurements used in [31, 37]. Specifically, compared to full-precision convolutional layers with 32-bit weights, using 1-bit weights and activations gains up to a \(\sim\)32\(\times\) storage saving; in addition, the convolution operation could be implemented by the bit-wise xnor operation followed by a popcount operation, which leads to a \(\sim\)64\(\times\) computational saving. Throughout our results, we provide the amount of bit-wise parameters in all binarized layers as the storage.
### Comparisons with SOTA Methods
**Evaluation on CIFAR10.** Table 1 provides performance comparisons with SOTA BNNs on CIFAR10. Moreover, SLBF [24] and FleXOR [25], which derive weights below 1-bit, are re-implemented upon the same backbone (ReActNet) and the same binarization settings (both weights and activations binarized) as our method for a fair comparison. By setting \(n\) to 32, 64, and 128, we obtain networks of 0.56-bit, 0.67-bit, and 0.78-bit, respectively. Clearly, our approach achieves accuracies close to standard BNNs at a much lower cost of storage and BOPs on both ResNet-18 and VGG-small. FleXOR also compacts BNNs but delivers no BOPs reduction. SLBF reduces both storage and BOPs but suffers larger accuracy drops, as it conducts convolution in the space of multi-channel codewords, leading to much larger quantization errors, as previously described in Figure 1. Our Sparks remarkably outperforms SLBF and FleXOR at almost the same storage compression level (_e.g._, for ResNet-18, our 0.56-bit model reaches 91.5% accuracy, while 0.60-bit FleXOR and 0.55-bit SLBF yield 89.8% and 89.3%, respectively), which verifies the effectiveness of the proposed method.
**Evaluation on ImageNet.** In Table 2, we compare Sparks with SOTA methods upon ResNet-18 on ImageNet. Consistent with the results on CIFAR10, our method achieves competitive classification accuracy compared to SOTA BNNs, with a dramatic drop in model size and computation. Under comparable bit-widths, our 0.56-bit method exceeds 0.55-bit SLBF and 0.6-bit FleXOR by \(6.6\%\) and \(4.5\%\), respectively, with even fewer BOPs. Thanks to this remarkable benefit in model compression, we can apply Sparks to wider and deeper models while staying within the same complexity budget as BNNs. For example, in Table 3, we follow ABC-Net [28] and apply our
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Method & Bit-width & Accuracy & Storage & BOPs & Method & Bit-width & Accuracy & Storage & BOPs \\
**(ResNet-18)** & (W/A) & Top-1 (\%) & (Mbit) & (\(\times 10^{8}\)) & **(VGG-small)** & (W/A) & Top-1 (\%) & (Mbit) & (\(\times 10^{8}\)) \\ \hline Full-precision & 32/32 & 94.8 & 351.5(\(\times\)1) & 350.3(\(\times\)1) & Full-precision & 32/32 & 94.1 & 146.2(\(\times\)1) & 386.6(\(\times\)1) \\ \hline XNOR-Net [37] & \(1/1\) & 90.2 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & XNOR-Net [37] & \(1/1\) & 89.8 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ Bi-RealNet [31] & \(1/1\) & 90.2 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & LAB [14] & \(1/1\) & 87.7 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ RAD [6] & \(1/1\) & 90.5 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & RAD [6] & \(1/1\) & 90.0 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ IR-Net [36] & \(1/1\) & 91.5 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & IR-Net [36] & \(1/1\) & 90.4 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ RBNN [26] & \(1/1\) & 92.2 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & RBNN [26] & \(1/1\) & 91.3 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ ReActNet [32] & \(1/1\) & 92.3 & 11.0(32\(\times\)) & 5.47(64\(\times\)) & SLB [47] & \(1/1\) & 92.0 & 4.57(32\(\times\)) & 6.03(64\(\times\)) \\ \hline SLBF [24] & 0.55/1 & 89.3\(\pm\)0.5 & 60.5(58\(\times\)) & 2.94(119\(\times\)) & SLBF [24] & 0.53/1 & 89.4\(\pm\)0.4 & 2.42(60\(\times\)) & 3.17(122\(\times\)) \\ FleXOR [25] & 0.80/1 & 90.9\(\pm\)0.2 & 8.80(40\(\times\)) & 5.47(64\(\times\)) & FleXOR [25] & 0.80/1 & 90.6\(\pm\)0.1 & 3.66(40\(\times\)) & 6.03(64\(\times\)) \\ FleXOR [25] & 0.60/1 & 89.8\(\pm\)0.3 & 6.60(53\(\times\)) & 5.47(64\(\times\)) & FleXOR [25] & 0.60/1 & 89.2\(\pm\)0.2 & 2.74(53\(\times\)) & 6.03(64\(\times\)) \\ \hline Sparks (ours) & 0.78/1 & 92.2\(\pm\)0.1 & 8.57(41\(\times\)) & 3.96 (88\(\times\)) & Sparks (ours) & 0.78/1 & 91.7\(\pm\)0.2 & 3.55(41\(\times\)) & 3.46(112\(\times\)) \\ Sparks (ours) & 0.67/1 & 92.0\(\pm\)0.2 & 7.32(48\(\times\)) & 2.97(118\(\times\)) & Sparks (ours) & 0.67/1 & 91.6\(\pm\)0.1 & 3.05(48\(\times\)) & 1.94(199\(\times\)) \\ Sparks (ours) & 0.56/1 & 91.5\(\pm\)0.3 & 6.10(88\(\times\)) & 1.63(215\(\times\)) & Sparks (ours) & 0.56/1 & 91.3\(\pm\)0.3 & 2.54(88\(\times\)) & 1.13(342\(\times\)) \\ Sparks (ours) & 0.44/1 & 90.8\(\pm\)0.2 & 4.88(72\(\times\)) & 0.97(041\(\times\)) & Sparks (ours) & 0.44/1 & 90.8\(\pm\)0.3 & 2.03(72\(\times\)) & 0.74(522\(\times\)) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons of top-1 accuracies with state-of-the-art methods on the CIFAR10 dataset.
0.56-bit model to ResNet-18 by using three branches of convolutions and a single branch of activation, denoted as Sparks-wide; we also adopt 0.56-bit and 0.44-bit Sparks on ResNet-34 (almost twice the depth of ResNet-18), both denoted as Sparks-deep. We observe that all our variants surpass ReActNet-18 (currently the best ResNet-18-based BNN) at almost the same or even lower cost in model complexity. Specifically, our 0.44-bit Sparks-deep defeats ReActNet-18 in accuracy at the lowest complexity. To better visualize the trade-off between performance and efficiency, we contrast our models against existing methods with varying storage and BOPs in Figure 3. We leave the comparison with SNN [45] to Figure 4, since we re-implemented SNN under a fair two-stage pipeline, which improves the absolute accuracy of SNN by \(6.9\%\sim 8.1\%\).
### Ablation Studies
**Validity of Gumbel-Sinkhorn.** We test the advantage of applying the Gumbel-Sinkhorn technique to codeword selection. Three baselines for constructing the sub-codebook are
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & Bit-width & \multicolumn{2}{c}{Accuracy (\%)} & Storage & BOPs \\ & (W/A) & Top-1 & Top-5 & (Mbit) & (\(\times 10^{9}\)) \\ \hline Full-precision & \(32/32\) & 69.6 & 89.2 & 351.5 & 107.2(1\(\times\)) \\ \hline BNN [16] & \(1/1\) & 42.2 & 69.2 & 11.0(32\(\times\)) & 1.70(63\(\times\)) \\ XNOR-Net [37] & \(1/1\) & 51.2 & 73.2 & 11.0(32\(\times\)) & 1.70(63\(\times\)) \\ Bi-RealNet [31] & \(1/1\) & 56.4 & 79.5 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ IR-Net [36] & \(1/1\) & 58.1 & 80.0 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ LNS [10] & \(1/1\) & 59.4 & 81.7 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ RBNN [26] & \(1/1\) & 59.9 & 81.9 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ Ensemble-BNN [52] & \((1/1)\)\(\times\)6 & 61.0 & - & 65.9(53\(\times\)) & 10.6(10\(\times\)) \\ ABC-Net [28] & \((1/1)\)\(\times\)5\({}^{2}\) & 65.0 & 85.9 & 274.5(1\(\times\)) & 42.5(2\(\times\)) \\ Real-to-Bin [33] & \(1/1\) & 65.4 & 86.2 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ ReActNet [32] & \(1/1\) & 65.9 & 86.4 & 11.0(32\(\times\)) & 1.68(64\(\times\)) \\ \hline SLBF [24] & \(0.55/1\) & 57.7 & 80.2 & 6.05(58\(\times\)) & 0.92(117\(\times\)) \\ SLBF [24] & \(0.31/1\) & 52.5 & 76.1 & 3.41(103\(\times\)) & 0.98(110\(\times\)) \\ FlexOR [25] & \(0.80/1\) & 62.4 & 83.0 & 8.80(40\(\times\)) & 1.68(64\(\times\)) \\ FleXOR [25] & \(0.60/1\) & 59.8 & 81.9 & 6.60(53\(\times\)) & 1.68(64\(\times\)) \\ \hline Sparks (ours) & \(0.78/1\) & 65.5 & 86.2 & 8.57(41\(\times\)) & 1.22(88\(\times\)) \\ Sparks (ours) & \(0.67/1\) & 65.0 & 86.0 & 7.32(48\(\times\)) & 0.88(122\(\times\)) \\ Sparks (ours) & \(0.56/1\) & 64.3 & 85.6 & 6.10(88\(\times\)) & 0.50(214\(\times\)) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons of top-1 and top-5 accuracies with state-of-the-art methods on ImageNet based on ResNet-18. Calculation details for the storage and BOPs are provided in Appendix.
Figure 3: Trade-off between performance and complexity on ImageNet. For all methods, -18 indicates using ResNet-18 as the backbone, and -34 indicates ResNet-34. The symbol \(n\) is the sub-codebook size as defined in § 3.1; \(N_{in},N_{out},f_{1},f_{2}\) are hyper-parameters that control the complexity as defined in FleXOR [25] and SLBF [24].
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & Bit-width & \multicolumn{2}{c}{Accuracy (\%)} & Storage & BOPs \\ & & (W/A) & Top-1 & Top-5 & (Mbit) & (\(\times 10^{9}\)) \\ \hline ReActNet [32] & ResNet-18 & \(1/1\) & 65.9 & 86.4 & 11.0 & 1.68 \\ \multirow{2}{*}{Sparks-wide} & ResNet-18 & \multirow{2}{*}{(\(0.56/1\))\(\times\)3} & **66.7** & **86.9** & 18.3 & **1.50** \\ & & & & & & \\ \multirow{2}{*}{Sparks-deep} & ResNet-34 & \(0.56/1\) & **67.6** & **87.5** & 11.7 & **0.96** \\ & ResNet-34 & \(0.44/1\) & **66.4** & **86.7** & **9.4** & **0.58** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results when extending our Sparks to wider or deeper models.
considered: (1) Directly selecting the top-\(n\) most frequent codewords from a learnt BNN (ReActNet-18). (2) Randomly selecting codewords. (3) Selecting codewords at equal index intervals, by setting \(s_{i}=\lfloor\frac{(i-1)(N-1)}{n-1}\rfloor+1\) in Equation 3. Figure 4 (Left) shows that these baselines are much inferior to ours, indicating the superiority of our permutation learning. We observe a severe performance drop at 0.56-bit for the third baseline, reflecting the unpredictable performance variation of naive selection methods.
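For reference, the three baseline selectors can be sketched as follows (a helper of our own; `freq` would hold the per-codeword counts from a learnt BNN):

```python
import numpy as np

def baseline_indices(mode, n, N=512, freq=None, rng=None):
    """0-based index sets for the three baseline selection schemes."""
    if mode == "top_frequency":       # (1) most frequent in a learnt BNN
        return np.argsort(-freq)[:n]
    if mode == "random":              # (2) uniform random selection
        if rng is None:
            rng = np.random.default_rng()
        return rng.choice(N, size=n, replace=False)
    if mode == "equal_interval":      # (3) equally spaced indices
        return np.round(np.linspace(0, N - 1, n)).astype(int)
    raise ValueError(mode)
```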
**Comparison with product quantization.** Our proposed sub-codebook construction is related to but distinct from product quantization [24, 42] in two aspects: (1) As already specified in Figure 1, we construct the codebook with kernel-wise codewords rather than channel-wise ones. (2) We learn to jointly select codewords from a fixed codebook instead of directly optimizing the codewords independently, which avoids degenerating the diversity of the sub-codebook. Figure 4 (Middle) illustrates the comparisons regarding these two aspects. We observe that using kernel-wise codewords largely outperforms using channel-wise codewords, and that selection-based optimization consistently outperforms product quantization (such as SNN [45]) by a significant margin. In Figure 4 (Right), the diversity of codewords severely degenerates when applying product quantization, whereas our method preserves the diversity.
**Convergence**. Figure 5 displays the codeword selection process during training for 0.44-bit Sparks (ResNet-18) on ImageNet. The selection initially fluctuates but converges in the end. This illustrates the convergence of our algorithm, in accordance with Theorem 1.
**Practical inference on FPGA**. We utilize the hardware framework SystemC-TLM2 to model an FPGA. Our 0.56-bit Sparks achieves a latency of 1.167 ms (ResNet-18) and 2.391 ms (ResNet-34), while the corresponding BNNs take 3.713 ms and 7.806 ms. We thus obtain over a three-fold acceleration.
Footnote 2: [https://www.accellera.org/community/systemc/about-systemc-tlm](https://www.accellera.org/community/systemc/about-systemc-tlm)
## 5 Conclusion
We propose a novel method named Sparse Kernel Selection (Sparks), which derives below 1-bit models by grouping kernels into a selected sub-codebook. The selection process is learnt end-to-end with the Permutation Straight-Through Estimator (PSTE). Experiments show that our method is applicable to general tasks including image classification and object detection. One potential limitation of our research is that model compression requires access to the model parameters, leading to a risk of unprotected privacy.
## Acknowledgement
This work is funded by the Sino-German Collaborative Research Project Crossmodal Learning (NSFC 62061136001/DFG TRR169). W. Huang, F. Sun, and A. Yao are corresponding authors. Part of this work was done when the first author was an intern at Intel Labs China. Y. Wang and Y. Dong are supported by the Shuimu Tsinghua Scholar Program.
Figure 4: Ablation studies on ImageNet with ResNet-18. All experiments adopt the same baseline for a fair comparison. **Left**: comparisons of different methods to select codewords. **Middle:** kernel-wise vs channel-wise codewords, and selection-based vs product-quantization(quant)-based learning. Note that “Product-quant, kernel-wise” refers to our implementation of SNN [45] under a fair two-stage training framework for improved performance. **Right:** sub-codebook degeneration when learning codewords with product-quantization.
Figure 5: Codewords selection during training where \(n=16\). We have visualized the final selected 16 codewords at the top of the figure, where light gray indicates \(-1\) and dark gray indicates \(+1\).
## Appendix A Proofs of Our Statements
**Property 1**: _We denote \(\mathbb{B}=\{-1,+1\}^{K\times K}\) as the dictionary of binary kernels. For each \(\mathbf{w}\in\mathbb{R}^{K\times K}\), the binary kernel \(\hat{\mathbf{w}}\) can be derived by a grouping process:_
\[\hat{\mathbf{w}}=\operatorname{sign}(\mathbf{w})=\operatorname*{arg\,min}_{\mathbf{u}\in \mathbb{B}}\|\mathbf{u}-\mathbf{w}\|_{2}.\]
Proof. We denote by \(w(k_{1},k_{2})\) the entry of \(\mathbf{w}\) in the \(k_{1}\)-th row and \(k_{2}\)-th column, and similarly for \(\mathbf{u}\). We have,
\[\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{B}}\|\mathbf{u}-\mathbf{w}\|_ {2}^{2} =\operatorname*{arg\,min}_{\mathbf{u}\in\mathbb{B}}\sum_{k_{1},k_{2} }|u(k_{1},k_{2})-w(k_{1},k_{2})|^{2}\] \[=\{\operatorname*{arg\,min}_{u(k_{1},k_{2})\in\{-1,+1\}}|u(k_{1},k_{2})-w(k_{1},k_{2})|^{2}\}_{k_{1},k_{2}}\] \[=\{\operatorname{sign}(w(k_{1},k_{2}))\}_{k_{1},k_{2}}\] \[=\operatorname{sign}(\mathbf{w}),\]
which concludes the proof. \(\square\)
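Property 1 is also easy to confirm numerically; the following brute-force check (ours) compares the nearest codeword over all of \(\mathbb{B}\) against the sign function:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(9)  # a flattened 3x3 kernel (nonzero a.s.)
B = np.array(list(itertools.product([-1, 1], repeat=9)), dtype=float).T

j = np.argmin(np.linalg.norm(B - w[:, None], axis=0))  # brute-force search
assert np.array_equal(B[:, j], np.sign(w))             # equals sign(w)
```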
Before the proofs for the lemma and theorem, in Figure 6, we provide the relationship of notations that are adopted in the main paper to facilitate the understanding of our permutation learning process.
**Lemma 1**: _For sufficiently large \(k\) and small \(\tau\), we define the entropy of a doubly-stochastic matrix \(\mathbf{P}\) as \(h(\mathbf{P})=-\sum_{i,j}P_{i,j}\log P_{i,j}\), and denote the rate of convergence for the Sinkhorn operator as \(r\left(0<r<1\right)\)3. There exists a convergence series \(s_{\tau}\) (\(s_{\tau}\to 0\) when \(\tau\to 0^{+}\)) that satisfies_
Footnote 3: The Sinkhorn operator has a rate of convergence \(r\) bounded by a value lower than 1 as proved by [22].
\[\|\mathbf{P}_{\mathrm{real}}-\mathbf{P}_{\mathrm{GS}}\|_{2}^{2}=\mathcal{O}\big{(}s_{ \tau}^{2}+r^{2k}\big{)}. \tag{18}\]
Proof. Let \(\mathbf{X}_{\tau}=\mathbf{X}/\tau\) and \(\mathbf{X}_{0}=\lim_{\tau\to 0^{+}}\mathbf{X}_{\tau}\). In addition, following the definitions in Equation 5, we denote by \(\mathcal{S}^{k}(\cdot)\) the \(k\)-th iterate of the Sinkhorn operator and \(\mathcal{S}(\cdot)=\lim_{k\to\infty}\mathcal{S}^{k}(\cdot)\).
As proved by [22], the Sinkhorn operator has a rate of convergence \(r\left(0<r<1\right)\) with respect to \(k\), where \(r\) always exists and is bounded by a value lower than 1. There is
\[\|\mathcal{S}^{k}(\mathbf{X}_{\tau})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}\leq r\|\mathcal{S}^{k-1}(\mathbf{X}_{\tau})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}\leq\cdots\leq r^{k-1}\|\mathcal{S}^{1}(\mathbf{X}_{\tau})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}.\]
By the definition of \(\mathcal{S}^{k}(\mathbf{X}_{\tau})\) in Equation 5, the values of \(\mathcal{S}^{k}(\mathbf{X}_{\tau})\) lie in \([0,1]\) for all \(k\geq 1\). In addition, the entries of the doubly-stochastic matrix \(\mathcal{S}(\mathbf{X}_{\tau})\) also lie in \([0,1]\). Therefore, \(\|\mathcal{S}^{1}(\mathbf{X}_{\tau})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}^{2}\) is well bounded and thus we obtain
\[\|\mathcal{S}^{k}(\mathbf{X}_{\tau})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}\leq C_{1}r^ {k},\]
Figure 6: Relationship of notations for a better understanding. The horizontal axis and vertical axis stand for the values of \(k\) and \(\tau\), respectively. The Gumbel noise \(\epsilon\) is omitted here. Only parameters in the green region are actually calculated during training, while the others are only for the understanding purpose. All these parameters are NOT needed during inference. Starting from a learnable matrix \(\mathbf{X}\), we adopt a temperature \(\tau\) and calculate \(\mathcal{S}^{0}(\mathbf{X}/\tau)\). We then perform the Sinkhorn operation for \(k\) iterations to further obtain \(\mathbf{P}_{\mathrm{GS}}=\mathcal{S}^{k}(\mathbf{X}/\tau)\). We apply the Hungarian algorithm to further obtain \(\mathbf{P}_{\mathrm{real}}\). For sufficiently large \(k\) and small \(\tau\), \(\mathbf{P}_{\mathrm{real}}\) equals the ideal permutation matrix.
where \(C_{1}>0\) is a constant.
As mentioned in Equation 4, \(\mathcal{S}(\mathbf{X}_{\tau})\) must be a doubly-stochastic matrix [41]. According to Lemma 3 in [34], denoting \(f_{0}(\cdot)=\left\langle\cdot,\mathbf{X}\right\rangle_{F}\), there is \(|f_{0}(\mathcal{S}(\mathbf{X}_{0}))-f_{0}(\mathcal{S}(\mathbf{X}_{\tau}))|\leq\tau(h(\mathcal{S}(\mathbf{X}_{\tau}))-h(\mathcal{S}(\mathbf{X}_{0})))=\tau h(\mathcal{S}(\mathbf{X}_{\tau}))\leq\tau\max_{\mathbf{P}\in\mathcal{B}_{N}}(h(\mathbf{P}))\), where \(\mathcal{B}_{N}\) denotes the set of doubly stochastic matrices of dimension \(N\).
As proved by Lemma 3 in [34], \(|f_{0}(\mathcal{S}(\mathbf{X}_{0}))-f_{0}(\mathcal{S}(\mathbf{X}_{\tau}))|\leq\tau\max _{\mathbf{P}\in\mathcal{B}_{N}}(h(\mathbf{P}))\) implies the convergence of \(\mathcal{S}(\mathbf{X}_{\tau})\) to \(\mathcal{S}(\mathbf{X}_{0})\) and there exists a convergence series \(s_{\tau}\) (\(s_{\tau}\to 0\) when \(\tau\to 0^{+}\)), satisfying \(\|\mathcal{S}(\mathbf{X}_{0})-\mathcal{S}(\mathbf{X}_{\tau})\|_{2}\leq C_{2}s_{\tau}\), where \(C_{2}>0\) is a constant.
Based on the triangle inequality, there is
\[\|\mathcal{S}(\mathbf{X}_{0})-\mathcal{S}^{k}(\mathbf{X}_{\tau})\|_{2}^{2}\leq(C_{2}s _{\tau}+C_{1}r^{k})^{2}\leq 2C_{2}^{2}s_{\tau}^{2}+2C_{1}^{2}r^{2k}.\]
As mentioned in § 3.3, \(\mathbf{P}_{\rm GS}=\mathcal{S}^{k}(\mathbf{X}_{\tau})\) if we omit the noise term. Given the convergence property, for sufficiently large \(k\) and small \(\tau\), the Hungarian algorithm output \(\mathbf{P}_{\rm real}\) equals the true permutation, _i.e._, \(\mathbf{P}_{\rm real}=\mathcal{S}(\mathbf{X}_{0})\). In summary, we have
\[\|\mathbf{P}_{\rm real}-\mathbf{P}_{\rm GS}\|_{2}^{2}=\mathcal{O}\big{(}s_{\tau}^{2}+r ^{2k}\big{)},\]
which concludes the proof.
With the help of Lemma 1 and inspired by the error-feedback framework [20], we now provide the detailed proof of Theorem 1.
**Theorem 1**: _Assume that the training objective \(f\) w.r.t. \(\mathbf{P}_{\rm GS}\) is \(L\)-smooth, and the stochastic gradient of \(\mathbf{P}_{\rm real}\) is bounded by \(\mathbb{E}\|\mathbf{g}(\mathbf{P}_{\rm real})\|_{2}^{2}\leq\sigma^{2}\). Denote the rate of convergence for the Sinkhorn operator as \(r\left(0<r<1\right)\) and the stationary point as \(\mathbf{P}_{\rm GS}^{*}\). Let the learning rate of PSTE be \(\eta=\frac{c}{\sqrt{T}}\) with \(c=\sqrt{\frac{f\left(\mathbf{P}_{\rm GS}^{0}\right)-f\left(\mathbf{P}_{\rm GS}^{*}\right)}{L\sigma^{2}}}\). For a uniformly chosen \(\mathbf{u}\) from the iterates \(\{\mathbf{P}_{\rm real}^{0},\cdots,\mathbf{P}_{\rm real}^{T}\}\), concretely \(\mathbf{u}=\mathbf{P}_{\rm real}^{t}\) with probability \(p_{t}=\frac{1}{T+1}\), it holds in expectation over the stochasticity and the selection of \(\mathbf{u}\):_
\[\mathbb{E}\|\nabla f(\mathbf{u})\|_{2}^{2}=\mathcal{O}\left(\sigma\sqrt{\frac{f(\mathbf{P}_{\rm GS}^{0})-f(\mathbf{P}_{\rm GS}^{*})}{T/L}}+L^{2}\big{(}s_{\tau}^{2}+r^{2k}\big{)}\right)\,. \tag{19}\]
**Proof.** Since the objective function \(f\) is \(L\)-smooth, \(\mathbf{g}(\mathbf{P}_{\rm GS})=\mathbf{g}(\mathbf{P}_{\rm real})\) in our PSTE, and \(\mathbf{P}_{\rm GS}^{t+1}=\mathbf{P}_{\rm GS}^{t}-\eta\mathbf{g}(\mathbf{P}_{\rm real}^{t})\), we can obtain the following derivations,
\[f(\mathbf{P}_{\rm GS}^{t+1}) \leq f(\mathbf{P}_{\rm GS}^{t})+\big{\langle}\mathbf{P}_{\rm GS}^{t+1}- \mathbf{P}_{\rm GS}^{t},\nabla f(\mathbf{P}_{\rm GS}^{t})\big{\rangle}+\frac{L}{2}\| \mathbf{P}_{\rm GS}^{t+1}-\mathbf{P}_{\rm GS}^{t}\|_{2}^{2}\] \[=f(\mathbf{P}_{\rm GS}^{t})-\eta\langle\mathbf{g}(\mathbf{P}_{\rm real}^{t }),\nabla f(\mathbf{P}_{\rm GS}^{t})\rangle+\frac{L\eta^{2}}{2}\|\mathbf{g}(\mathbf{P} _{\rm real}^{t})\|_{2}^{2}\,.\]
We use \(\mathbb{E}\) to represent the expectation with respect to the stochasticity. Based on the bound of the stochastic gradient, _i.e._\(\mathbb{E}\|\mathbf{g}(\mathbf{P}_{\rm real}^{t})\|_{2}^{2}\leq\sigma^{2}\), and a natural property \(\langle\mathbf{x},\mathbf{y}\rangle\leq\frac{1}{2}\|\mathbf{x}\|_{2}^{2}+ \frac{1}{2}\|\mathbf{y}\|_{2}^{2}\), it holds that,
\[\mathbb{E}\left[f(\mathbf{P}_{\rm GS}^{t+1}|\mathbf{P}_{\rm GS}^{t})\right] \leq f(\mathbf{P}_{\rm GS}^{t})-\eta\big{\langle}\mathbb{E}\big{[} \mathbf{g}(\mathbf{P}_{\rm real}^{t})\big{]},\nabla f(\mathbf{P}_{\rm GS}^{t})\big{\rangle} +\frac{L\eta^{2}}{2}\mathbb{E}\|\mathbf{g}(\mathbf{P}_{\rm real}^{t})\|_{2}^{2}\] \[\leq f(\mathbf{P}_{\rm GS}^{t})-\eta\big{\langle}\nabla f(\mathbf{P}_{\rm real }^{t}),\nabla f(\mathbf{P}_{\rm GS}^{t})\big{\rangle}+\frac{L\eta^{2}\sigma^{2}}{2}\] \[=f(\mathbf{P}_{\rm GS}^{t})-\eta\big{\langle}\nabla f(\mathbf{P}_{\rm real }^{t}),\nabla f(\mathbf{P}_{\rm real}^{t})\big{\rangle}+\frac{L\eta^{2}\sigma^{2}}{2}+ \eta\big{\langle}\nabla f(\mathbf{P}_{\rm real}^{t}),\nabla f(\mathbf{P}_{\rm real}^{t})- \nabla f(\mathbf{P}_{\rm GS}^{t})\big{\rangle}\] \[\leq f(\mathbf{P}_{\rm GS}^{t})-\eta\|\nabla f(\mathbf{P}_{\rm real}^{t})\|_ {2}^{2}+\frac{L\eta^{2}\sigma^{2}}{2}+\frac{\eta^{2}}{2}\|\nabla f(\mathbf{P}_{\rm real }^{t})\|_{2}^{2}+\frac{\eta}{2}\|\nabla f(\mathbf{P}_{\rm real}^{t})-\nabla f(\mathbf{P} _{\rm GS}^{t})\|_{2}^{2}\] \[\leq f(\mathbf{P}_{\rm GS}^{t})-\frac{\eta}{2}\|\nabla f(\mathbf{P}_{\rm real }^{t})\|_{2}^{2}+\frac{L\eta^{2}\sigma^{2}}{2}+\frac{\eta L^{2}}{2}\|\mathbf{P}_{ \rm real}^{t}-\mathbf{P}_{\rm GS}^{t}\|_{2}^{2}\,.\]
With Lemma 1,
\[\mathbb{E}\left[f(\mathbf{P}_{\rm GS}^{t+1}|\mathbf{P}_{\rm GS}^{t})\right]\leq f(\mathbf{P} _{\rm GS}^{t})-\frac{\eta}{2}\|\nabla f(\mathbf{P}_{\rm real}^{t})\|_{2}^{2}+ \frac{L\eta^{2}\sigma^{2}}{2}+\frac{\eta L^{2}}{2}\big{(}C_{1}s_{\tau}^{2}+C_{2 }r^{2k}\big{)}\,,\]
where \(C_{1}>0\) and \(C_{2}>0\) are constants.
By rearranging terms and further taking the expectation over \(\mathbf{P}_{\rm GS}^{t}\),
\[\mathbb{E}\|\nabla f(\mathbf{P}_{\rm real}^{t})\|_{2}^{2}\leq\frac{2}{\eta}\big{(} \mathbb{E}\left[f(\mathbf{P}_{\rm GS}^{t})\right]-\mathbb{E}\left[f(\mathbf{P}_{\rm GS }^{t+1})\right]\big{)}+L\eta\sigma^{2}+L^{2}\big{(}C_{1}s_{\tau}^{2}+C_{2}r^{2k} \big{)}\,.\]
Summing over \(t=0,1,\cdots,T\),
\[\sum_{t=0}^{T}\mathbb{E}\|\nabla f(\mathbf{P}_{\rm real}^{t})\|_{2}^{2} \leq\frac{2}{\eta}\sum_{t=0}^{T}\big{(}\,\mathbb{E}\left[f(\mathbf{P}_ {\rm GS}^{t})\right]-\mathbb{E}\left[f(\mathbf{P}_{\rm GS}^{t+1})\right]\big{)}+L \eta\sigma^{2}+L^{2}\sum_{t=0}^{T}\left(C_{1}s_{\tau}^{2}+C_{2}r^{2k}\right)\] \[\leq\frac{2}{\eta}\big{(}f(\mathbf{P}_{\rm GS}^{0})-f(\mathbf{P}_{\rm GS}^ {*})\big{)}+(T+1)L\eta\sigma^{2}+L^{2}\sum_{t=0}^{T}\left(C_{1}s_{\tau}^{2}+C_{ 2}r^{2k}\right).\]
Consider a uniformly chosen \(\mathbf{u}\) from the iterates \(\{\mathbf{P}_{\rm real}^{0},\cdots,\mathbf{P}_{\rm real}^{T}\}\), concretely \(\mathbf{u}=\mathbf{P}_{\rm real}^{t}\) with probability \(p_{t}=\frac{1}{T+1}\). Dividing the inequality by \(T+1\), and extending \(\mathbb{E}\) to denote the expectation over the stochasticity and the selection of \(\mathbf{u}\), we obtain
\[\mathbb{E}\|\nabla f(\mathbf{u})\|_{2}^{2}\leq\frac{2}{\eta(T+1)} \big{(}f(\mathbf{P}_{\rm GS}^{0})-f(\mathbf{P}_{\rm GS}^{*})\big{)}+L\eta\sigma^{2}+L^ {2}\big{(}C_{1}s_{\tau}^{2}+C_{2}r^{2k}\big{)}\,.\]
Substituting the learning rate \(\eta\), we finally obtain
\[\mathbb{E}\|\nabla f(\mathbf{u})\|_{2}^{2} \leq\frac{2\sigma\sqrt{LT}}{T+1}\sqrt{f(\mathbf{P}_{\rm GS}^{0})-f( \mathbf{P}_{\rm GS}^{*})}+\frac{\sigma\sqrt{L}}{\sqrt{T}}\sqrt{f(\mathbf{P}_{\rm GS}^ {0})-f(\mathbf{P}_{\rm GS}^{*})}+L^{2}\big{(}C_{1}s_{\tau}^{2}+C_{2}r^{2k}\big{)}\,.\] \[\leq 3\sigma\sqrt{\frac{f(\mathbf{P}_{\rm GS}^{0})-f(\mathbf{P}_{\rm GS}^ {*})}{T/L}}+L^{2}\big{(}C_{1}s_{\tau}^{2}+C_{2}r^{2k}\big{)}\,,\]
Therefore,
\[\mathbb{E}\|\nabla f(\mathbf{u})\|_{2}^{2}=\mathcal{O}\left(\sigma \sqrt{\frac{f(\mathbf{P}_{\rm GS}^{0})-f(\mathbf{P}_{\rm GS}^{*})}{T/L}}+L^{2}\big{(} s_{\tau}^{2}+r^{2k}\big{)}\right)\,,\]
which concludes the proof. \(\Box\)
## Appendix B Experiment Details and Analysis
### Implementation Details
**Implementation of ImageNet training.** We follow the two-step scheme (as detailed in § 4) and the training settings and data augmentation strategies in [32]. Specifically, for each step, the model is trained for 640k iterations with batch size 512. We adopt the Adam optimizer [21] and set the initial learning rate to \(10^{-3}\). Weight decay rates in the first and second steps are \(10^{-5}\) and \(0\), respectively. Models are trained on 8 V100 GPUs.
**Implementation of CIFAR10 training.** For experiments on CIFAR10, each experiment is performed on a single V100 GPU. We train the network for 256 epochs in each step. We set the batch size to 256 and use the Adam optimizer [21]. The learning rate is initialized to \(5\times 10^{-4}\) and is updated by a linear learning rate decay scheduler. Results of our method on CIFAR10 are averaged over three runs.
**Implementation of selection and ablation studies.** We further detail our implementation of the codeword selection process based on Figure 4 (Middle), where we provide four experiment settings including using kernel-wise/channel-wise codewords, and selection-based/product-quantization-based learning. Implementation details for these four experiments (labeled as (a)-(d), respectively) are described as follows.
Figure 7: A supplement to Figure 1 by ranking codewords _w.r.t._ the frequency. It shows that the ranked codewords in (b) nearly follow the power-law distribution.
* _Selection, kernel-wise (our proposed method):_ each codeword is a \(3\times 3\) convolutional kernel with values being \(\pm 1\). We constantly keep the \(1^{\text{st}}\) (all \(-1\)s) and \(512^{\text{th}}\) (all \(+1\)s) codewords in the sub-codebook, as these two codewords take a large proportion. We divide the remaining 510 codewords into two halves (\(1^{\text{st}}\) half: from index 2 to 256; \(2^{\text{nd}}\) half: from index 257 to 511). Obviously, each codeword in one half has a corresponding codeword with opposite signs in another half. This technique speeds up the selection process without noticeably affecting the performance.
* _Selection, channel-wise:_ each codeword is a \(1\times 9\) sub-vector with values being \(\pm 1\), and codewords are obtained from flattened convolutional weights. We follow the speed up strategy in (a) by dividing codewords into two parts. Unlike (a), we do not constantly send the \(1^{\text{st}}\) (all \(-1\)s) and \(512^{\text{th}}\) (all \(+1\)s) codewords to the sub-codebook, since this process does not bring improvements for the channel-wise setting.
* _Product quantization, kernel-wise:_ we follow the common method of product quantization [42, 24] to learn real-valued codewords. Before training, we randomly initialize \(n\) different \(3\times 3\) real-valued codewords, with their values initialized to \(\pm 1.0\). During training, in the forward pass, we obtain the binary codewords by applying the \(\operatorname{sign}\) function to the real-valued codewords. In the backward pass, the Straight-Through Estimator technique is adopted, which copies the gradients of the binary codewords to the real-valued codewords.
* _Product quantization, channel-wise:_ the learning process follows (c), yet each codeword is a \(1\times 9\) sub-vector across multiple channels, obtained from flattened convolutional weights.
**Storage and BOPs calculation.** In Table 4, we provide details of how the storage and BOPs in Table 2 are calculated. The calculation follows the analysis described in § 3.4.
### Additional Experiment Analysis
**Power-law property.** Figure 7 illustrates the distributions when ranking the codewords of Figure 1 by codeword frequency. It shows that in Figure 7(b), the codewords nearly follow a power-law distribution.
**Codewords selection and overlaps.** In Figure 8, we further compare the codeword learning processes for the four settings in Figure 4 (Middle). As a supplement to Figure 4 (Right), Figure 9 provides the change of sub-codebooks during training for the four experiment settings (a)-(d). As codewords in (c) and (d) tend to overlap during training, the diversity is severely affected. In addition, we also conduct experiments in which, at each step or every several steps, we replenish the sub-codebook with random distinct codewords so that the sub-codebook size recovers to \(n\); the performance is very close to directly selecting codewords at random (as already shown in Figure 4 (Left), random selection achieves low performance).
**Acceleration of training.** We consider two approaches that can accelerate the training process on large datasets (_e.g._, ImageNet) without much detriment to performance. (1) We conduct permutation learning only in the first \(30\times 10^{3}\) training steps, and fix the selected codewords for the rest of training. (2) We obtain the sub-codebook by pretraining on a small dataset like CIFAR10, and directly adopt it for ImageNet without further permutation learning.
Table 4: Calculation details for the storage and BOPs as reported in Table 2. Networks are evaluated on ImageNet with ResNet-18.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{layer-name} & \multirow{2}{*}{input-w} & \multirow{2}{*}{input-h} & \multirow{2}{*}{input-c} & \multirow{2}{*}{output-w} & \multirow{2}{*}{output-h} & \multirow{2}{*}{output-c} & \multirow{2}{*}{kernel-h} & \multicolumn{4}{c}{Storage (bit)} & \multicolumn{4}{c}{BOPs} \\ & & & & & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & & & & \\ \hline A & B & C & D & E & F & G & H & I & J & K & L & M & N & O & P & Q \\ \hline conv1 & 224 & 224 & 3 & 112 & 112 & 64 & 7 & 7 & - & - & - & - & - & - & - & - & - \\ conv2-1a & 56 & 56 & 64 & 56 & 56 & 64 & 3 & 3 & 36864 & 28672 & 24576 & 20480 & 11505054 & 15605504 & 115065504 & 64225248 \\ conv2-1b & 56 & 56 & 56 & 64 & 56 & 56 & 64 & 3 & 3 & 36864 & 28672 & 24576 & 24080 & 1151505054 & 115605054 & 115605054 & 64225248 \\ conv2-2b & 56 & 56 & 56 & 64 & 56 & 56 & 64 & 3 & 3 & 36864 & 28672 & 24576 & 24576 & 24080 & 1151505054 & 115605054 & 115605054 & 64225248 \\ conv2-2b & 56 & 56 & 56 & 64 & 56 & 56 & 64 & 3 & 3 & 3 & 36864 & 28672 & 24576 & 24576 & 24080 & 11515054 & 15605054 & 115605054 & 64225248 \\ conv3-1a & 56 & 56 & 56 & 64 & 28 & 28 & 128 & 3 & 3 & 3 & 373728 & 57344 & 49152 & 40960 & 57802752 & 57802752 & 3112576 & 17661888 \\ conv3-1b & 28 & 28 & 128 & 28 & 28 & 28 & 128 & 3 & 3 & 147456 & 114688 & 98304 & 98120 & 115605054 & 115605054 & 64225216 & 3532840 \\ conv3-2a & 28 & 28 & 28 & 128 & 28 & 28 & 28 & 3 & 3 & 147456 & 114688 & 98304 & 9120 & 115605054 & 115605054 & 64222516 & 3532840 \\ conv3-2b & 28 & 28 & 28 & 128 & 28 & 28 & 28 & 3 & 3 & 147456 & 114688 & 98304 & 9120 & 115605054 & 115605054 & 64225216 & 3532840 \\ conv4-1a & 28 & 28 & 128 & 128 & 28 & 28 & 28 & 3 & 3 & 147456 & 114688 & 98304 & 81920 & 115605054 & 115605054 & 64225216 & 3532840 \\ conv4-1a & 28 & 28 & 128 & 128 & 14 & 14 & 256 & 3 & 3 & 294912 & 2229376 & 196608 & 1638040 & 5702732 & 3112512162 & 17661340 & 10346340 \\ conv4-1b & 14 & 14 & 256 & 14 & 14 & 256 & 3 & 3 & 589824 & 458752 & 3932126 & 2768015054 & 64225152 & 35327767 & 2087088 \\ conv4-2a & 14 & 14 & 256 & 14 & 14 & 256 & 3 & 3 & 589824 & 458752 & 3932126 & 23768015 & 151605054 & 64225152 & 3532776 & 20873088 \\ conv5-1b & 14 & 14 & 256 & 7 & 7 & 512 & 3 & 3 & 1179648 & 971504 & 7864235 & 655607 & 75082752 & 176696 & 10463452 & 6552860 \\ conv5-1b & 7 & 7 & 512 & 7 & 7 & 512 & 3 & 3 & 2352926 & 1835008 & 1572864 & 1310720 & 115050504 & 5332648 & 20872960 & 13647616 \\ conv5-2b & 7 & 7 & 512 & 7 & 7 & 512 & 3 & 3 & 2359266 & 1835008 & 1572864 & 1310720 & 115050504 & 5332648 & 20872960 & 13647616 \\ fc100 & 1 & 512 & 1 & 1 & 1000 & - & - & - & - & - & - & & & & & & & & & & & \\ \hline \multicolumn{12}{c}{} & \multirow{2}{*}{**Total**} & & \multicolumn{4}{c}{10988472} & \multirow{2}{*}{8544256} & \multirow{2}{*}{73326348} & \multirow{2}{*}{6
Figure 8: Comparison of codewords learning processes when using kernel-wise/channel-wise codewords and selection-based/product-quantization-based learning. The corresponding experimental results are already provided in Figure 4 (Middle). All experiments are based on 0.44-bit settings with \(n=16\), and are performed on ImageNet upon ResNet-18. We observe that, when both using selection-based learning, kernel-wise codewords in (a) converge much faster (within \(25\times 10^{3}\) steps) than channel-wise codewords in (b); codewords with product-quantization-based learning in (c) and (d) also converge slower than (a), and are likely to overlap during training which degenerates the codebook diversity.
Figure 9: Numbers of different codewords when using selection-based/product-quantization-based learning, as a supplement to Figure 4 (Right). (a)-(d) correspond to the experiments in Figure 4 (Right) and Figure 8. Experiments are performed on ImageNet upon ResNet-18. We provide four settings, including 0.78-bit, 0.67-bit, 0.56-bit and 0.44-bit, with \(n=\) 128, 64, 32 and 16, respectively. We observe that the sub-codebook highly degenerates during training in (c) and (d), since codewords tend to be repetitive when being updated independently. While in (a) and (b), the diversity of codewords preserves, which implies the superiority of our selection-based learning.
Compared with the results of our Sparks reported in Table 2, the performance does not decrease when using acceleration approach (1), and decreases slightly (\(-0.4\)%, \(-0.6\)%, and \(-1.1\)% for 0.78-bit, 0.67-bit, and 0.56-bit, respectively) when using approach (2).
**Sensitivity analysis of hyper-parameters.** In Figure 10, we compare the accuracy under different settings of the hyper-parameters \(k\) and \(\tau\) in Equation 8. In the PSTE optimization, \(k\) is the iteration number and \(\tau\) is the temperature. Experiments are performed with ResNet-18 and VGG-small on CIFAR10. We observe that performance is insensitive to both hyper-parameters around their default values \(k=10\) and \(\tau=10^{-2}\). The parameter \(k\) is easy to choose, since results are stable for \(k=5\sim 20\). Regarding the two extreme cases for \(\tau\), setting it too small (_e.g._, \(10^{-4}\)) hinders the smoothness of gradient back-propagation, while setting it too large (_e.g._, \(1\)) enlarges the permutation approximation error; both may harm the final performance. Fortunately, according to Figure 10, performance is stable when changing \(\tau\) by a factor of \(10\) or \(1/10\) around the default value, implying the high stability of our method.
**About top-\(n\) most frequent codewords:** In Figure 4, we compare sampling the top-\(n\) most frequent codewords against our method. [The visualization of the \(0.44\)-bit top-\(n\) codewords is omitted here.] The top-\(n\) selection tends to pick adjacent codewords more frequently, which could hinder the diversity of the codebook. By contrast, as shown in Figure 5, our learned \(0.44\)-bit method outputs diverse codewords, yielding better performance, particularly at \(0.56\)-bit (from 61.7% to 64.3%).
**A two-step recipe with product quantization.** Given the purpose of attaining a compact BNN, we also test an intuitive two-step baseline: first, load the parameters of a standard BNN (ReActNet-18, from the open-sourced model) as the pre-trained model; then, perform product quantization on the binary weights to compress the BNN. This method achieves 59.1% accuracy on ImageNet at 0.56-bit, and the sub-codebook degeneration still exists. The result is much inferior to our 0.56-bit Sparks (64.3%).
**Symbol table.** We list the definitions and usage of the important symbols in Table 5.
**Whether the indexing process hinders valid acceleration:** No, the indexing process of binary kernels does not hinder valid acceleration. (1) Indexing \(n\) codewords is very cheap, _e.g._, only \(32\) codewords for our \(0.56\)-bit model (\(<0.5\) nanosecond on our FPGA). (2) Indexing the \(3\times 3\) pre-calculated results (each the convolution of a \(3\times 3\) codeword with a \(3\times 3\) feature region, see § 3.4) is also negligible based on our implementation with a
Figure 10: Sensitivity analysis for hyper-parameters including the iteration number \(k\) and the temperature \(\tau\), as adopted in Equation 8 and also clearly illustrated in Figure 6. Experiments are conducted on the CIFAR10 dataset with 0.56-bit ResNet-18 and VGG-small, respectively. Results are averaged over three runs with different random seeds. Both hyper-parameters are insensitive around the default values \(k=10\) and \(\tau=10^{-2}\).
\begin{table}
\begin{tabular}{l l} \hline \hline Symbol & Definition and Usage \\ \hline \(\boldsymbol{w}\in\mathbb{R}^{K\times K}\) & Convolutional kernel with the kernel size \(K\). \\ \(\hat{\boldsymbol{w}}\in\{-1,+1\}^{K\times K}\) & Selected binary kernel for \(\boldsymbol{w}\). There is \(\hat{\boldsymbol{w}}\in\) sub-codebook \(\mathbb{U}\subseteq\) codebook \(\mathbb{B}=\{-1,+1\}^{K\times K}\). \\ \(N\in\mathbb{Z}^{+}\) & \(N=|\mathbb{B}|\), the codebook size, \(N=512\) when \(K=3\). \\ \(n\in\mathbb{Z}^{+}\) & \(n=|\mathbb{U}|\), the sub-codebook size, _e.g._, for \(0.56\)-bit Sparks, \(n=32\). \\ \(\boldsymbol{B}\in\{\pm 1\}^{K^{2}\times N}\) & Column-by-column indexing of \(\mathbb{B}\). \\ \(\boldsymbol{U}\in\{\pm 1\}^{K^{2}\times n}\) & Column-by-column indexing of \(\mathbb{U}\). \\ \(\boldsymbol{V}\in\{0,1\}^{N\times n}\) & A pre-defined selection matrix. \\ \(k\in\mathbb{Z}^{+}\) & The number of iterations to approximate the permutation matrix in Equation 8. \\ \(\tau\in\mathbb{R}^{+}\) & A small temperature to approximate the permutation matrix in Equation 8. \\ \(\boldsymbol{X}\in\mathbb{R}^{N\times N}\) & A randomly initialized, learnable matrix. \\ \(\boldsymbol{P}_{\mathrm{GS}}\in\mathbb{R}^{N\times N}\) & \(\boldsymbol{P}_{\mathrm{GS}}=\mathcal{S}^{k}((\boldsymbol{X}+\epsilon)/\tau)\), the approximated permutation matrix for propagation, not a \(0/1\) matrix. \\ \(\boldsymbol{P}_{\mathrm{real}}\in\{0,1\}^{N\times N}\) & \(\boldsymbol{P}_{\mathrm{real}}=\mathrm{Hungarian}(\boldsymbol{P}_{\mathrm{GS}})\), the output permutation matrix, a doubly stochastic \(0/1\) matrix. \\ \hline \hline \end{tabular}
\end{table}
Table 5: The definition and usage of important symbols.
Lookup Table (LUT) that stores the \(3\times 3\) pre-calculated results. The practical LUT size is kept far below \(n\times C_{\text{in}}\times H\times W\) by dividing the input feature maps into \(1\times 3\times 3\) slices and sending only one slice to a Processing Engine (PE) at each clock cycle. This leads to very low latency for the lookup process, _e.g._, the LUT size is \(32\) for our \(0.56\)-bit model (\(<0.5\) nanosecond for indexing), which is easily implemented within the current clock cycle.
## Appendix C Object Detection
### Implementation
We evaluate our method for object detection on two benchmark datasets: PASCAL VOC [8] and COCO [27]. We follow the standard data split settings [46]. Regarding the PASCAL VOC dataset, we train our model on both the VOC 2007 and VOC 2012 trainval sets, which together contain about 16k natural images of 20 different categories in total. We evaluate the performance on VOC 2007 test set that is composed of about 5k images. COCO dataset (2014 object detection track) is a large-scale dataset that collects images from 80 different categories. We train our model with 80k training images as well as 35k images sampled from the validation set (denoted as trainval35k [2]), and carry out evaluations on the remaining 5k images in the validation set (minival [2]).
We follow BiDet [46] for the basic training settings, including hyper-parameters and data augmentation methods. Specifically, we train for 50 epochs in total with batch size 32 and the Adam optimizer. We initialize the learning rate to \(10^{-3}\) and decay it by a factor of 0.1 at the 6\({}^{\text{th}}\) and 10\({}^{\text{th}}\) epochs. We consider two typical architectures, SSD300 [29] (with VGG-16 [40]) and Faster R-CNN [38] (with ResNet-18 [12]), to verify the effectiveness and generalization of our method.
\begin{table}
\begin{tabular}{l c c c c|l c c c c} \hline \hline Method & Bit-width & mAP & Storage & BOPs & Method & Bit-width & mAP & Storage & BOPs \\
**(SSD300)** & (W/A) & (\%) & Saving & Saving & **(Faster R-CNN)** & (W/A) & (\%) & Saving & Saving \\ \hline Full-precision & \(32/32\) & 72.4 & \(1\times\) & \(1\times\) & Full-precision & \(32/32\) & 74.5 & \(1\times\) & \(1\times\) \\ \hline BNN [16] & \(1/1\) & 42.0 & \(32\times\) & 64\(\times\) & BNN [16] & \(1/1\) & 35.6 & \(32\times\) & 64\(\times\) \\ XNOR-Net [37] & \(1/1\) & 50.2 & \(32\times\) & 64\(\times\) & XNOR-Net [37] & \(1/1\) & 48.4 & \(32\times\) & 64\(\times\) \\ Bi-RealNet [31] & \(1/1\) & 63.8 & \(32\times\) & 64\(\times\) & Bi-RealNet [31] & \(1/1\) & 58.2 & \(32\times\) & 64\(\times\) \\ BiDet [46] & \(1/1\) & 66.0 & \(32\times\) & 64\(\times\) & BiDet [46] & \(1/1\) & 59.5 & \(32\times\) & 64\(\times\) \\ \hline Sparks (ours) & \(0.78/1\) & 65.2 & \(41.0\times\) & 108\(\times\) & Sparks (ours) & \(0.78/1\) & 58.9 & \(41.0\times\) & 88\(\times\) \\ Sparks (ours) & \(0.56/1\) & 64.3 & \(57.1\times\) & 285\(\times\) & Sparks (ours) & \(0.56/1\) & 57.7 & \(57.1\times\) & 214\(\times\) \\ \hline Sparks* (ours) & \(0.78/1\) & 68.9 & \(41.0\times\) & 108\(\times\) & Sparks* (ours) & \(0.78/1\) & 66.2 & \(41.0\times\) & 88\(\times\) \\ Sparks* (ours) & \(0.56/1\) & 68.0 & \(57.1\times\) & 285\(\times\) & Sparks* (ours) & \(0.56/1\) & 65.5 & \(57.1\times\) & 214\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance comparisons with object detection methods on the PASCAL VOC dataset. Sparks\({}^{*}\) indicates using the two-step training method and the generalized Sign/PReLU functions (as adopted in [32] for image classification).
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Method & Bit-width & mAP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) & Method & Bit-width & mAP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{s}\) & AP\({}_{m}\) & AP\({}_{l}\) \\
**(SSD300)** & (W/A) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) \\ \hline Full-precision & \(32/32\) & 23.2 & 41.2 & 23.4 & 8.6 & 23.2 & 39.6 & Full-precision & \(32/32\) & 26.0 & 44.8 & 27.2 & 10.0 & 28.9 & 39.7 \\ \hline BNN [16] & \(1/1\) & 6.2 & 15.9 & 3.8 & 2.4 & 10.0 & 9.9 & BNN [16] & \(1/1\) & 5.6 & 14.3 & 2.6 & 2.0 & 8.5 & 9.3 \\ XNOR-Net [37] & \(1/1\) & 8.1 & 19.5 & 5.6 & 2.6 & 8.3 & 13.3 & XNOR-Net [37] & \(1/1\) & 10.4 & 21.6 & 8.8 & 2.7 & 11.8 & 15.9 \\ Bi-RealNet [31] & \(1/1\) & 11.2 & 26.0 & 8.3 & 3.1 & 12.0 & 18.3 & Bi-RealNet [31] & \(1/1\) & 14.4 & 29.0 & 13.4 & 3.7 & 15.4 & 24.1 \\ BiDet [46] & \(1/1\) & 13.2 & 28.3 & 10.5 & 5.1 & 14.3 & 20.5 & BiDet [46] & \(1/1\) & 15.7 & 31.0 & 14.4 & 4.9 & 16.7 & 25.4 \\ \hline Sparks (ours) & \(0.78/1\) & 13.4 & 28.6 & 10.6 & 5.3 & 14.5 & 20.8 & Sparks (ours) & \(0.78/1\) & 15.6 & 30.7 & 14.0 & 4.7 & 16.5 & 25.1 \\ Sparks (ours) & \(0.56/1\) & 12.5 & 27.7 & 10.0 & 4.9 & 14.1 & 19.6 & Sparks (ours) & \(0.56/1\) & 14.9 & 29.9 & 13.6 & 4.1 & 15.7 & 24.5 \\ \hline Sparks\({}^{*}\) (ours) & \(0.78/1\) & 18.6 & 35.7 & 17.4 & 7.1 & 19.3 & 31.0 & Sparks\({}^{*}\) (ours) & \(0.78/1\) & 21.2 & 37.5 & 18.2 & 7.8 & 22.6 & 31.7 \\ Sparks\({}^{*}\) (ours) & \(0.56/1\) & 17.6 & 33.9 & 17.0 & 6.6 & 18.1 & 29.4 & Sparks\({}^{*}\) (ours) & \(0.56/1\) & 20.0 & 36.8 & 17.4 & 7.0 & 20.2 & 30.5 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance comparisons with object detection methods on the COCO dataset. Sparks\({}^{*}\) indicates using the two-step training method and the generalized Sign/PReLU functions (as adopted in [32] for image classification).
### Results
**Evaluation on PASCAL VOC.** We contrast Sparks against SOTA detection binarization methods including the standard BNN [16], Bi-RealNet [31], XNOR-Net [37] and BiDet [46] in Table 6. We implement two different versions of Sparks, 0.56-bit and 0.78-bit. Compared with BiDet, our 0.56-bit method achieves roughly twice the model compression (0.56 vs 1 bit) and more than 3\(\times\) the computation acceleration (_e.g._, a 285\(\times\) vs 64\(\times\) BOPs saving on VGG16+SSD300). Besides, by adopting the two-step training scheme and the generalized Sign/PReLU functions, our methods achieve new records on 1-bit object detection.
**Evaluation on COCO.** To further assess the proposed method on a larger and more challenging dataset, we conduct experiments on COCO. Comparisons with SOTA methods are provided in Table 7. Following the standard COCO evaluation metrics, we report the average mAP over IoU thresholds from 0.5 to 0.95, the APs at particular thresholds (AP\({}_{50}\) and AP\({}_{75}\)), and the scale-aware metrics AP\({}_{s}\), AP\({}_{m}\) and AP\({}_{l}\). The benefits of Sparks are still observed, namely clear savings in complexity. Results with SSD300 indicate that our 0.78-bit Sparks even surpasses BiDet on all evaluation metrics. We speculate that Sparks reduces information redundancy by selecting essential codewords, and thus eliminates some of the false positives. In addition, our method performs stably for both the one-stage SSD300 and the two-stage Faster R-CNN, implying its robustness across different backbones. Finally, the results of Sparks\({}^{*}\) indicate that our method is also compatible with the two-step training scheme and the generalized functions.
## Appendix D Discussion and Limitation
In this research, we propose Sparks, which largely enhances both the storage and computation efficiency of BNNs. Our work is motivated by the observation that kernel-wise codewords are highly clustered. For this reason, we propose a novel selection-based approach for kernel-wise sub-codebook learning instead of the previously used channel-wise product quantization. By extending Sparks with more layers or other blocks, the performance could surpass the standard BNN model with still fewer parameters and BOPs. This opens a new research line of training lighter and better BNN models. As open-sourced research on widely used benchmarks, our method does not raise ethical concerns. However, one should note that compressing a model requires access to the model parameters, which might call for further protection methods for model privacy.
|
2310.15656 | Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks | Hypergraph Neural Networks (HGNNs) have been successfully applied in various
hypergraph-related tasks due to their excellent higher-order representation
capabilities. Recent works have shown that deep learning models are vulnerable
to adversarial attacks. Most studies on graph adversarial attacks have focused
on Graph Neural Networks (GNNs), and the study of adversarial attacks on HGNNs
remains largely unexplored. In this paper, we try to reduce this gap. We design
a new HGNNs attack model for the untargeted attack, namely MGHGA, which focuses
on modifying node features. We consider the process of HGNNs training and use a
surrogate model to implement the attack before hypergraph modeling.
Specifically, MGHGA consists of two parts: feature selection and feature
modification. We use a momentum gradient mechanism to choose the attack node
features in the feature selection module. In the feature modification module,
we use two feature generation approaches (direct modification and sign
gradient) to enable MGHGA to be employed on discrete and continuous datasets.
We conduct extensive experiments on five benchmark datasets to validate the
attack performance of MGHGA in the node and the visual object classification
tasks. The results show that MGHGA improves performance by an average of 2%
compared to the baselines. | Yang Chen, Stjepan Picek, Zhonglin Ye, Zhaoyang Wang, Haixing Zhao | 2023-10-24T09:10:45Z | http://arxiv.org/abs/2310.15656v1 | # Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks
###### Abstract
Hypergraph Neural Networks (HGNNs) have been successfully applied in various hypergraph-related tasks due to their excellent higher-order representation capabilities. Recent works have shown that deep learning models are vulnerable to adversarial attacks. Most studies on graph adversarial attacks have focused on Graph Neural Networks (GNNs), and the study of adversarial attacks on HGNNs remains largely unexplored. In this paper, we try to reduce this gap. We design a new HGNNs attack model for the untargeted attack, namely MGHGA, which focuses on modifying node features. We consider the process of HGNNs training and use a surrogate model to implement the attack before hypergraph modeling. Specifically, MGHGA consists of two parts: feature selection and feature modification. We use a momentum gradient mechanism to choose the attack node features in the feature selection module. In the feature modification module, we use two feature generation approaches (direct modification and sign gradient) to enable MGHGA to be employed on discrete and continuous datasets. We conduct extensive experiments on five benchmark datasets to validate the attack performance of MGHGA in the node and the visual object classification tasks. The results show that MGHGA improves performance by an average of \(2\%\) compared to the baselines.
## 1 Introduction
Graph Neural Networks (GNNs) are widely used in tasks such as graph classification [11], node classification [23, 24], and link prediction [25, 26] due to their efficient learning ability and generalization capability. Graphs provide a useful way to represent pairwise connections between objects in real-world networks. However, they may not fully capture the complex higher-order relationships between objects [18]. When dealing with multimodal data, GNNs cannot efficiently learn all the information between objects, resulting in missing information and lower efficiency [14]. For example, in the scientist collaboration network, researchers are abstracted as nodes and edges represent the paper collaboration relationship. Here, the common graph cannot represent the situation where multiple researchers work together on a paper [13]. The hypergraph can clearly represent this complex relationship. Specifically, the hyperedge (an edge in a hypergraph is called a hyperedge) represents the collaboration of a paper, and a hyperedge connecting \(K\) nodes means that \(K\) researchers collaboratively work on a paper. Thus, the hypergraph has an advantage over the common graph in modeling complex relationships. Hypergraph Neural Networks (HGNNs) based on the hypergraph also outperform GNNs in many tasks [15, 16, 17], especially in the field of network security [13].
In recent years, many works have demonstrated the vulnerability of GNNs to adversarial attacks, resulting in degraded performance [25, 26]. HGNNs, as an extension of hypergraph deep learning on graph data, also show vulnerability to graph adversarial attacks [15]. Graph adversarial attacks aim to disrupt the performance of GNNs by adding small perturbations to the graph [3, 16]. Depending on the goal of the attack, the adversarial attacks can be classified as targeted and untargeted attacks [25, 16]. In the targeted attack, the attacker focuses on the classification of some test nodes. The attack is successful only if the target node is misclassified to the attacker-specified label [15]. In the untargeted attack, the attacker usually focuses on the classification of all test nodes, and the attack succeeds if the test nodes are misclassified [16]. Since the targeted attack usually attacks users with higher privileges, they are easily detected by defense models and are difficult to implement in real attacks [15]. Therefore, many works are based on the untargeted attack [16, 17].
Almost all of the current works on graph adversarial learning focus on GNNs and ignore the security of HGNNs [23, 24, 15], which makes HGNNs difficult to apply in practice. For example, adding some malicious noise to the pathology data can make it hard for doctors to understand the patient's condition and make wrong decisions in the task of detecting mental illness (e.g. Alzheimer's disease) [18].
There are some differences between HGNNs and GNNs when dealing the data. For example, GNNs can only process traditional graph data, while HGNNs can not only process traditional graph data but also complex and high-dimensional data. Due to the specificity of hypergraph data, we summarize two main challenges in HGNNs attacks from the perspective of hypergraph data:
(1) **Unstructuredness**. Unlike common graph datasets, there is no association between nodes in most hypergraph datasets, and the hypergraph structure (adjacency relations) can be obtained through various modeling methods [1].
(2) **Continuous Features**. Many common graph datasets consist of discrete features. Hypergraph datasets can be applied in tasks such as graph visualization [14], image classification [15], etc. Many hypergraph datasets have continuous features, so many attacks on GNNs cannot be applied to HGNNs [1].
HGNNs differ from GNNs in the convolution operation. Specifically, graph convolution is defined based on edges between nodes, while hypergraph convolution is defined based on hyperedges between nodes [13, 14]. Hypergraph convolution is more complex than graph convolution, which makes attacks on HGNNs more difficult to implement. To the best of our knowledge, only HyperAttack [11] has made a preliminary exploration of adversarial attacks for HGNNs. HyperAttack uses the gradient to modify the hyperedges. However, the HyperAttack implementation assumes that the hypergraph dataset has already been modeled to obtain the hypergraph structure (the original hypergraph dataset is unstructured). In other words, HyperAttack implements the attack on a hypergraph with a fixed structure. Fig. 1 (a) shows the hypergraph structure \(H\) obtained using two distance-based HGNNs (HGNN-KNN, HGNN-\(\varepsilon\)[12]). We observe that the hypergraph structure \(H\) differs under the different approaches. In addition, different settings of HGNNs' parameters generate different hypergraph structures in practice. Therefore, HyperAttack suffers from attack instability, as it exhibits different performance on the different hypergraph structures generated from the same dataset. Intuitively, attacking hyperedges is not the best choice in HGNNs. This is because hypergraph datasets are unstructured and the same dataset can generate different hypergraph structures. Moreover, a defender can learn from previous experience that many useless hyperedges have been added or deleted, reducing the efficiency of the attack [1].
To address the above challenges, in this paper, we propose an attack that is more applicable to HGNNs, namely MGHGA. MGHGA is set up as a more broadly applicable untargeted attack. Fig. 1 (b) and (c) represent the training processes of GNNs and HGNNs, respectively. We observe that modeling the dataset is a prerequisite for training HGNNs, whereas GNNs do not require this step. Previous attack algorithms directly modify the structure of the graph [10, 12], and HyperAttack is no exception. To address challenge (1), we take a new perspective of attacking features before hypergraph modeling. MGHGA uses momentum gradients to guide the attacker to modify the features. To address challenge (2), we consider the scenarios of attacking discrete and continuous features separately. In discrete datasets, MGHGA directly inverts the feature values from 0 to 1 or 1 to 0. In continuous datasets, we update the features using a sign gradient strategy. Finally, the new hypergraph structure is obtained by modeling the perturbed hypergraph dataset and feeding it into the HGNNs to verify the validity of MGHGA. The above process is shown in Fig. 1 (d).
Finally, we summarize the contributions of this paper as follows:
\(\bullet\) We propose the first untargeted adversarial attack MGHGA against HGNNs. MGHGA considers the training characteristics of HGNNs and implements the attack before hypergraph modeling.
\(\bullet\) We propose a momentum gradient method to guide the attacker in modifying the node's features, and our model can be applied to both discrete and continuous hypergraph datasets.
\(\bullet\) Extensive experiments on five datasets verified that MGHGA can effectively reduce the effectiveness of HGNNs and outperform other baseline methods in node and visual object classification tasks.
The rest of this work is organized as follows. In Section 2, we introduce some fundamentals of the HGNNs and untargeted attack. Section 3 introduces MGHGA in detail, including feature selection and feature modification. Section 4 presents the experimental dataset, parameters and experimental results. In Section 5, we first review the work related to hypergraph learning and graph adversarial attacks. Finally, we conclude our work in Section 6.
## 2 Preliminaries
For convenience, Table 2 gives the frequently used notations.
### Hypergraph Neural Network
Given a hypergraph dataset \(\mathcal{D}=(V,X)\), where \(V=\{v_{1},v_{2},...,v_{|V|}\}\) represents the set of nodes, \(X\in\mathbb{R}^{|V|\times d}\) denotes the node feature matrix and \(d\) denotes the dimensions of the feature. Constructing the hypergraph \(\mathcal{G}=(V,E,W)\), where \(E=\{e_{1},e_{2},...,e_{|E|}\}\) represents the set of hyperedges, and \(diag(W)=[w(e_{1}),w(e_{2}),...,w(e_{|E|})]\) is the diagonal matrix of the hyperedge weights, \(w(e)\) is the weight of the hyperedge. The correlation matrix \(H\in\{0,1\}^{|V|\times|E|}\) is used to represent the structure of the hypergraph, \(H(v,e)=1\) if the node \(v\) is inside the hyperedge \(e\), and \(H(v,e)=0\) otherwise, which can be expressed as follows:
\[H(v,e)=\left\{\begin{array}{l l}1,&\text{if }v\in e,\\ 0,&\text{if }v\notin e.\end{array}\right. \tag{1}\]
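As an illustration, a short NumPy sketch of Eq. 1 that builds \(H\) from a list of hyperedges (the function name and the example hyperedges are ours, not from the paper):

```python
import numpy as np

def incidence_matrix(num_nodes, hyperedges):
    """Build H in {0,1}^{|V| x |E|} from a list of hyperedges (Eq. 1).

    hyperedges: list of iterables, each containing the node indices of one hyperedge.
    """
    H = np.zeros((num_nodes, len(hyperedges)), dtype=np.float32)
    for e, nodes in enumerate(hyperedges):
        for v in nodes:
            H[v, e] = 1.0  # node v belongs to hyperedge e
    return H

# Example: 5 nodes, two hyperedges {0,1,2} and {2,3,4}
H = incidence_matrix(5, [[0, 1, 2], [2, 3, 4]])
```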
HGNNs achieve significant performance and are widely used in classification tasks [12]. In this paper, the downstream task of HGNNs is set to node classification. The hypergraph convolutional network learns node representations, then transforms and propagates information to accomplish downstream tasks. A hypergraph convolutional layer can be represented as [16]
\[X^{(l+1)}=\sigma\left(D_{v}^{-1/2}HWD_{e}^{-1}H^{T}D_{v}^{-1/2}X^{(l)}\Theta^{( l)}\right). \tag{2}\]
where \(X^{(l)}\) denotes the node representation of the hypergraph at \(l\)-th layer. \(\sigma(\cdot)\) denotes the nonlinear activation function, \(W\) is the weight matrix of the hyperedges. \(D_{e}\) represents the diagonal matrix of the hyperedge degree (hyperedge degree is the number of nodes contained in the hyperedge), \(D_{v}\) is the diagonal matrix representing the node degree (node degree is the number of hyperedges containing the node), \(\Theta^{(l)}\) denotes the \(l\) layer training parameters.
We set the layer number of HGNNs to 2 as in most other works [10], whose definition can be expressed as
\[Z=f(H,X)=\mathrm{softmax}\left(\widehat{H}\operatorname{Re}LU\left(\widehat{ H}X\Theta^{(1)}\right)\Theta^{(2)}\right). \tag{3}\]
where \(\widehat{H}=D_{v}^{-1/2}HWD_{e}^{-1}H^{T}D_{v}^{-1/2}\). \(\Theta^{(1)}\) and \(\Theta^{(2)}\) denote the training parameters of the first and second layers, respectively. In the training phase, our goal is to continuously optimize \(\Theta=(\Theta^{(1)},\Theta^{(2)})\) to obtain the optimal classifier \(f_{\Theta^{*}}(H,X)\):
\[\min_{\Theta}L_{model}=-\sum_{u\in V_{L}}Y_{u}\ln{(Z_{u,:})}. \tag{4}\]
where \(Z_{u,:}\) denotes the set of predicted labeling probabilities for node \(u\), \(Y_{u}\) denotes the true label of node \(u\), \(V_{L}\) denotes the training set of node, and the predicted label of node \(u\) is denoted as
\[L_{pre}=\arg\max(Z_{u,:}). \tag{5}\]
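To make Eqs. 2-5 concrete, the following PyTorch sketch builds \(\widehat{H}\) and runs the two-layer forward pass; the helper names and the toy example are ours, and small clamps are added for numerical stability:

```python
import torch

def normalized_incidence(H, w):
    """Compute H_hat = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} (dense sketch)."""
    De = H.sum(dim=0)                      # hyperedge degrees: nodes per hyperedge
    Dv = (H * w).sum(dim=1)                # node degrees, weighted by hyperedge weights
    Dv_is = torch.diag(Dv.clamp(min=1e-12) ** -0.5)
    return Dv_is @ H @ torch.diag(w) @ torch.diag(1.0 / De.clamp(min=1e-12)) @ H.T @ Dv_is

def hgnn_forward(H_hat, X, theta1, theta2):
    """Two-layer HGNN of Eq. 3; returns class probabilities Z."""
    return torch.softmax(H_hat @ torch.relu(H_hat @ X @ theta1) @ theta2, dim=1)

# Example with the toy H from the earlier sketch and unit hyperedge weights:
H = torch.tensor([[1., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 1.]])
H_hat = normalized_incidence(H, torch.ones(2))
Z = hgnn_forward(H_hat, torch.randn(5, 3), torch.randn(3, 4), torch.randn(4, 2))
pred = Z.argmax(dim=1)                     # predicted labels of Eq. 5
```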
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Model & Target of attack & Phase of attack & Downstream task \\ \hline HyperAttack & Targeted attack & After hypergraph modeling & Node classification \\ MGHGA & Untargeted attack & Before hypergraph modeling & Node and Visual object classification \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of HyperAttack and MGHGA.
Figure 1: (a) shows two hypergraph modeling approaches. (b) and (c) represent the GNNs and HGNNs training processes, respectively. (d) shows the process of MGHGA.
### Untargeted Attack
The aim of the untargeted attack is to reduce the global classification performance of HGNNs. Given a budget \(\Delta\), which represents the number of feature entries the attacker may modify, the untargeted attack can be expressed as
\[\arg\max_{X^{\prime}}\sum_{v\in V_{T}}\mathbb{I}\left(Y_{v}\neq C_{v}\right),\quad\text{s.t.}\ C=\arg\max f_{\Theta^{*}}\left(H,X^{\prime}\right),\ \Theta^{*}=\arg\min L_{model}\left(H,X^{\prime}\right),\ \|X^{\prime}-X\|\leq\Delta. \tag{6}\]
where \(C\) is the set of node prediction labels and \(V_{T}\) denotes the test set of nodes. \(\mathbb{I}(x)\) is the indicator function: \(\mathbb{I}(x)\) returns 1 if \(x\) is true and 0 otherwise.
The main rationale for Eq. 6 is that a model with high training error is likely to generalize poorly on the test set as well.
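For concreteness, the objective of Eq. 6 can be evaluated in a few lines once the retrained model's predictions \(Z\) are available (a sketch; the function name is ours):

```python
import torch

def misclassification_count(Z, Y, test_idx):
    """Objective of Eq. 6: number of misclassified test nodes under f_{Theta*}(H, X')."""
    C = Z.argmax(dim=1)                    # predicted labels C
    return (C[test_idx] != Y[test_idx]).sum().item()
```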
### Threat Model
Depending on whether the attack occurs before or after the training of HGNNs, it can be categorized as an evasion attack or a poisoning attack Sharma _et al._ (2023); Shafahi _et al._ (2018). An evasion attack is performed on the trained HGNNs, and the attacker cannot modify the parameters or structure of the HGNNs Zhang _et al._ (2022). A poisoning attack is performed before the HGNNs are trained, and the attacker can insert perturbations in the training data to interfere with the training process of the HGNNs Jiang _et al._ (2022). Evasion and poisoning attacks occur during the testing and training of HGNNs Wang _et al._ (2020); Sharma _et al._ (2023), respectively. In an evasion attack, the attacker's goal is to modify the links or features of the test nodes so that these nodes are misclassified Fan _et al._ (2021). In a real attack, the attacker cannot access all the test data. For example, in an e-commerce recommendation system, graph deep learning models are used to predict the recommendation of a target item based on the sales records of existing goods. The attacker cannot modify the information of the competitor's goods, so the evasion attack is not applicable in this situation. However, the poisoning attack can handle it. For example, the attacker uses the surrogate model to train on the goods dataset; the feedback from the surrogate model guides the attacker to modify the dataset, eventually producing a dataset with malicious perturbations Nguyen Thanh _et al._ (2023). When graph deep models are trained on the perturbed dataset, they learn representations carrying malicious information, which decreases the recommendation rate of the target goods.
We focus on attacking HGNNs in the poisoning setting; the core idea is to use a surrogate model to generate the perturbed dataset before the victim HGNNs are trained. Since we study the robustness of HGNNs, the surrogate model is also set to an HGNN.
In addition, MGHGA is a white-box attack, which requires constant feedback information (e.g., gradient or node prediction labels) from surrogate HGNNs.
## 3 Momentum Gradient Hypergraph Attack
In this section, we detail the MGHGA components. The MGHGA pipeline is shown in Fig. 2. MGHGA addresses two challenges in hypergraph attacks. First, many hypergraph datasets do not have correlation relationships between nodes. Attacking the hypergraph structure does not guarantee the stability of the attack, due to the different hypergraph structures generated for the same hypergraph dataset under different modeling approaches. To solve this problem, we use the surrogate model to attack the nodes' features before the hypergraph modeling, as shown in Fig. 2 (c). Second, hypergraph datasets can be classified as discrete and continuous based on feature attributes. In order to improve the MGHGA, we design two methods to update the features.
### Feature Selection
We use the gradient of the feature matrix in the surrogate model to select features, denoted as
\[F_{i,j}=\frac{\partial L_{model}}{\partial X_{i,j}}. \tag{7}\]
where \(F\in\mathbb{R}^{|V|\times d}\) is the gradient matrix of the features. A larger gradient of a feature indicates that this feature has a greater impact on the optimization of HGNNs, and thus modifying it can often have a large impact on HGNNs. However, previous studies have shown that using a greedy approach to directly modify the feature with the largest gradient makes the attack susceptible to falling into a local optimum and easily overfitting to the attack model, which diminishes the generalizability of the generated adversarial samples Zugner _et al._ (2018); Liu _et al._ (2022).
To address the above problem, we propose a momentum gradient hypergraph attack. The momentum method is a technique to accelerate the gradient descent algorithm by accumulating velocity vectors along the gradient direction of the
\begin{table}
\begin{tabular}{c c} \hline Notation & Description \\ \hline \(\mathcal{D}\) & Clean hypergraph dataset \\ \(\mathcal{D}^{\prime}\) & Poisoned hypergraph dataset \\ \(\mathcal{G}\) & Hypergraph \\ \(V\) & Set of nodes of the clean hypergraph \\ \(X\) & Feature matrix of the hypergraph graph \\ \(X^{\prime}\) & Feature matrix of the perturbed hypergraph \\ \(E\) & Set of hyperedges of the clean hypergraph \\ \(W\) & Hyperedge weight matrix \\ \(H\) & Correlation matrix of the clean graph \\ \(D_{e}\) & Hyperedge degree \\ \(D_{v}\) & Node degree \\ \(Y\) & True label \\ \(C\) & Prediction label \\ \(F\) & Feature gradient matrix \\ \(\Delta\) & Attack budget \\ \(u\) & Momentum decay \\ \(\eta\) & Constraint factor \\ \(L_{model}(\cdot)\) & HGNNs loss \\ \hline \end{tabular}
\end{table}
Table 2: Notations frequently used in this paper and their corresponding descriptions.
loss function during the iteration [3]. Accumulating previous gradients helps the model avoid falling into local optimum. We apply the momentum method to generate malicious features. The momentum gradient matrix of the features is first computed.
\[\left\{\begin{array}{c}F^{0}=0.\\ F^{t}=\mu F^{t-1}+\frac{\partial L_{model}}{\partial X^{t}}.\end{array}\right. \tag{8}\]
where \(\mu\) represents the momentum decay factor and \(X^{t}\) denotes the perturbation feature matrix after \(t\) iterations.
MGHGA selects the feature with the maximum absolute value in the momentum gradient matrix.
\[m_{i,j}^{t}=\arg\max|F^{t}|. \tag{9}\]
where \(m_{i,j}^{t}\) represents the entry with the maximum absolute gradient value at iteration \(t\), and \(i\) and \(j\) represent the node and feature indices, respectively.
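A sketch of one selection step (Eqs. 7-9) in PyTorch; the helper name, the `attacked` mask (enforcing the at-most-once rule introduced later), and the use of `torch.autograd.grad` are our assumptions:

```python
import torch

def momentum_feature_select(model_loss, X, F_prev, mu, attacked):
    """One selection step (Eqs. 7-9): accumulate the momentum gradient and pick
    the not-yet-attacked entry with the largest absolute value.

    model_loss: scalar HGNN loss L_model built from X (X must require grad).
    F_prev: momentum gradient F^{t-1}; attacked: boolean mask of modified entries.
    """
    grad = torch.autograd.grad(model_loss, X)[0]       # Eq. 7
    F_t = mu * F_prev + grad                           # Eq. 8
    score = F_t.abs().masked_fill(attacked, float('-inf'))
    i, j = divmod(score.argmax().item(), X.shape[1])   # Eq. 9: node / feature indices
    return F_t, i, j
```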
### Feature Modification
At each iteration, we modify only one feature and stop the attack when the number of modifications reaches the budget \(\Delta\). Here, we consider discrete and continuous feature modifications, respectively.
**Discrete Feature.** In discrete features, where the feature values are only 1 or 0, we use a direct inversion mechanism to modify the features. At \(t\) iterations, feature deletion or addition can be expressed as
\[X_{i,j}^{t}=1-X_{i,j}^{t},\quad s.t.\quad||X^{t}-X||\leq\Delta. \tag{10}\]
**Continuous Feature.** For image attacks with continuous features, researchers use the gradient sign mechanism to generate perturbations, and the results indicate that this mechanism can achieve highly efficient attacks on continuous data [1]. Inspired by this, we use a gradient sign mechanism to update the continuous features. Our approach introduces perturbations along the gradient direction so as to maximize the target model's training loss, thereby lowering its classification confidence and increasing the probability of inter-class confusion, ultimately leading to misclassification. Specifically, the feature matrix update can be expressed as
\[X_{i,j}^{t}=X_{i,j}^{t}+\eta sign(F_{i,j}^{t}),\quad s.t.\quad||X^{t}-X||\leq\Delta. \tag{11}\]
where \(\eta\) is the constraint factor and \(sign(x)\) denotes the gradient sign: \(sign(x)=1\) when \(x>0\), otherwise \(sign(x)=0\). At the end of the iterations, anomalous features are filtered to ensure that the newly generated features remain within the range of the original features.
\[\left\{\begin{array}{l}X_{i,j}^{\prime}=\min X,\text{ s.t. }X_{i,j}^{\prime}<\min X.\\ X_{i,j}^{\prime}=\max X,\text{ s.t. }X_{i,j}^{\prime}>\max X.\end{array}\right. \tag{12}\]
where \(X\) and \(X^{\prime}\) denote the original and perturbed hypergraph feature sets, respectively. \(\min(X)\) and \(\max(X)\) denote the minimum and maximum values of the original hypergraph feature set, respectively.
We reset the features that are beyond the lower and upper bounds of the original features, which ensures the imperceptibility of the perturbations and prevents them from being detected by some simple defense mechanisms (e.g., outlier detection models). Note that filtering anomalous features in MGHGA does not consume budget.
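The corresponding modification step (Eqs. 10-12) could be sketched as follows; unlike the paper, which filters anomalous features once after all iterations, this sketch clamps each continuous update immediately, and it follows the paper's sign convention (\(sign(x)=0\) for \(x\leq 0\)):

```python
import torch

def modify_feature(X, i, j, F_t, eta, discrete, x_min, x_max):
    """Flip a discrete feature (Eq. 10) or take a signed step (Eq. 11),
    then clamp to the original feature range (Eq. 12)."""
    with torch.no_grad():
        if discrete:
            X[i, j] = 1.0 - X[i, j]                      # Eq. 10: 0 -> 1 or 1 -> 0
        else:
            step = 1.0 if F_t[i, j].item() > 0 else 0.0  # paper's sign: 1 if x > 0 else 0
            X[i, j] = (X[i, j] + eta * step).clamp(x_min, x_max)  # Eqs. 11-12
    return X
```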
Figure 2: The pipeline of MGHGA. The goal of MGHGA is to generate perturbed datasets with malicious information. Where the red color represents the features of the nodes under attack.
Since multiple modifications of the same feature waste budget, we attack each feature at most once, for both continuous and discrete features.
### Algorithm
The pseudo-code of MGHGA is given in Algorithm 1.
```
0: Hypergraph dataset \(\mathcal{D}=(V,X)\), momentum decay factor \(\mu\), constraint factor \(\eta\), budget \(\Delta\)
0: Perturbation hypergraph dataset \(\mathcal{D}^{\prime}=(V,X^{\prime})\)
1:Initialization: Modeling hypergraph \(\mathcal{G}=(V,E,W)\), number of attack iterations \(T\), HGNN surrogate model \(f_{\Theta^{*}}(H,X)\), perturbation gradient matrix \(F^{0}\)
2:while \(t<T\) and \(||X^{t}-X||\leq\Delta\) do
3: Calculate the gradient at iteration \(t\) through Eq. 7
4: Calculate the momentum gradient at iteration \(t\) through Eq. 8
5: Select the feature to attack at iteration \(t\) through Eq. 9
6:if \(X\) is the discrete feature then
7: Update the feature matrix through Eq. 10
8:else \(\triangleright\) \(X\) is the continuous feature
9: Update the feature matrix through Eq. 11
10:endif
11:\(t=t+1\)
12:endwhile
13:if\(X\) is the continuous feature then
14: Filter features through Eq. 12
15:endif
```
**Algorithm 1** MGHGA
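For illustration, the loop of Algorithm 1 can be composed from the two sketches above (`momentum_feature_select` and `modify_feature`); the driver below is our assumption-laden sketch, taking a fixed surrogate HGNN specified by a precomputed \(\widehat{H}\) (`H_hat`) and weights `theta1`, `theta2`:

```python
import torch

def mghga(H_hat, X0, Y, train_idx, theta1, theta2, budget,
          mu=0.8, eta=0.1, discrete=True):
    """Sketch of Algorithm 1: pick and modify one feature entry per iteration."""
    X = X0.clone().requires_grad_(True)
    F_t = torch.zeros_like(X)                        # F^0 = 0
    attacked = torch.zeros_like(X, dtype=torch.bool)
    x_min, x_max = X0.min().item(), X0.max().item()
    for _ in range(int(budget)):                     # stop when the budget is exhausted
        Z = torch.softmax(H_hat @ torch.relu(H_hat @ X @ theta1) @ theta2, dim=1)
        loss = torch.nn.functional.nll_loss(torch.log(Z[train_idx] + 1e-12),
                                            Y[train_idx])          # Eq. 4
        F_t, i, j = momentum_feature_select(loss, X, F_t, mu, attacked)
        attacked[i, j] = True                        # each feature is attacked at most once
        modify_feature(X, i, j, F_t, eta, discrete, x_min, x_max)
    return X.detach()
```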
**Complexity Analysis.** MGHGA uses HGNN as a pre-training model, and HGNN training includes forward and backward propagation with a complexity of \(\mathcal{O}(t||H||\cdot||X||)\), where \(t\) denotes the number of HGNN training epochs. MGHGA then calculates the gradient of the feature matrix, with complexity \(\mathcal{O}(d|V|)\). Updating and filtering features are basic operations with low complexity and are ignored here. The complexity of modifying features is \(\mathcal{O}(Td|V|)\), where \(T\) denotes the number of attack iterations. In summary, the overall complexity of MGHGA is \(\mathcal{O}(t||H||\cdot||X||+Td|V|)\).
## 4 Experiments
### Datasets
Recent works have shown that HGNNs exhibit excellent performance on node classification and visual object classification tasks [11, 14], which are the most common practical applications of HGNNs. Therefore, our work focuses on these two tasks. To illustrate the performance of MGHGA, experiments are carried out on five datasets. We perform node classification on the Cora, Cora-ML and Citeseer [14] datasets. The visual object classification task is performed on two multi-feature, continuous datasets, the National Taiwan University 3D model dataset (NTU) [2] and Princeton ModelNet40 (ModelNet40) [21]. The dataset information is summarized in Table 3. Hypergraphs are obtained by modeling the hypergraph datasets. However, NTU and ModelNet40 have no native adjacency relations; their adjacencies are obtained using hypergraph construction methods. In order to ensure the consistency of the adjacency relations across datasets, we do not use the original adjacency of the Cora, Cora-ML and Citeseer datasets but instead utilize commonly used construction methods to generate hypergraph structures in the experiments.
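As a sketch of the distance-based construction used here (HGNN-KNN), each node can be joined with its \(K\) nearest feature-space neighbors into one hyperedge; the function below is our illustration, not the authors' code:

```python
import numpy as np

def knn_hypergraph(X, K=10):
    """HGNN-KNN construction: one hyperedge per node, containing the node
    itself and its K nearest neighbors in feature space."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    H = np.zeros((n, n), dtype=np.float32)                # one hyperedge per centroid node
    for v in range(n):
        nbrs = np.argsort(d2[v])[:K + 1]                  # v itself plus K neighbors
        H[nbrs, v] = 1.0
    return H
```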
### Baselines
**Since MGHGA is the first work on untargeted adversarial attacks on HGNNs, there are few comparison models to refer to.** HyperAttack, which is most relevant to our work, is set up as a targeted attack and cannot be used as a comparison model. Due to the specificity of the hypergraph structure, it is difficult to directly migrate GNNs adversarial attack models to HGNNs. Here, we use the following models as comparison models.
**Random Attack (Random)**: The conclusion of the work on common graph attacks shows that the Random Attack can degrade the performance of GNNs [13]. In this paper, we attack the features randomly. Specifically, the features are randomly changed from 0 to 1 and from 1 to 0 in the discrete dataset. In the continuous dataset, the features are randomly modified. It should be noted that the modified features are within the range of the original features.
**Node Degree Attack (NDA)**: Previous works have shown that attacking nodes with maximum node degree degrades the performance of the GNNs [20]. Extending to HGNNs, NDA is a method to attack nodes with maximum node degree. Note that features are modified in the same way as Random.
**Fast Gradient Attack (FGA)**: Fast Gradient Attack is a common gradient attack in the common graph [2]. We extend it to hypergraphs. In each attack, we choose the feature with the largest absolute value of the gradient to attack.
**Fast Gradient Attack-Node Degree (FGA-D)**: We add a constraint for the FGA that attacks the node with the larger degree, which obtains the FGA-D. Note that the FGA and FGA-D modify the discrete features similarly to MGHGA.
**MGHGA-D**: MGHGA-D is an extended model of MGHGA, and MGHGA-D is obtained from MGHGA with the same constraint as FGA-D.
Note that FGA-D and MGHGA-D are attacks with constraints, i.e., they attack nodes with larger node degrees.
### Parameter setting and metrics
**Parameters**. In our experiments, the hypergraph is generated using two distance-based generation methods, i.e., HGNN-KNN and HGNN-\(\varepsilon\), where \(K\) and \(\varepsilon\) are set to 10 and 0.5, respectively. The correlation matrix \(H\) is set to a binary matrix. HGNNs are set to two layers, the feature dimension of the hidden layer is set to 64, and dropout is applied to avoid overfitting. In the training process, the number of training epochs is set to 300 and the learning rate of the Adam optimizer is 0.001. The ratios of the training set and the test set are 0.2 and 0.8, respectively. The constraint factor \(\eta\) is set to \(X_{avg}=\frac{sum(X)}{|V|d}\). The budget is \(\Delta=\lambda|V|\), where the budget factor \(\lambda\) is set to 0.05 by default. The decay factor \(\mu\) is 0.8. FGA-D and MGHGA-D attack the nodes with the top \(1\%\) node degree by default (see the sketch after this paragraph). The victim model and the target model are the same by default, where the victim model is the model used by the user and the target model is the pre-trained model used by the surrogate. The experiments are conducted on a computer with an Intel(R) Xeon(R) Gold 5118 processor and 2\(\times\) NVIDIA GeForce GTX 1070Ti GPUs.
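The degree constraint for FGA-D and MGHGA-D can be illustrated in a few lines, assuming a binary incidence matrix `H` as above (node degree = number of hyperedges containing the node):

```python
import numpy as np

def top_degree_nodes(H, frac=0.01):
    """Return indices of the top-`frac` fraction of nodes by hypergraph degree D_v."""
    deg = H.sum(axis=1)                    # row sums of the incidence matrix
    k = max(1, int(frac * H.shape[0]))
    return np.argsort(-deg)[:k]
```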
**Metrics**. For a comprehensive evaluation of MGHGA, we use the classification success rate to measure the attack effectiveness. The classification success rate indicates the classification accuracy of HGNN in the test set, and a lower rate indicates a better attack.
### Experimental Results
**MGHGA Attack Performance**
Table 4 summarizes the classification accuracies on the five datasets under the attacks. The performance of HGNN-KNN and HGNN-\(\varepsilon\) decreases under all attacks, which indicates that attacking node features can effectively degrade the performance of HGNNs. Specifically, we observe that MGHGA achieves the best performance on all the datasets. For example, using HGNN-KNN as the victim model, the classification accuracies on the discrete dataset Cora are 58.65\(\%\), 58.47\(\%\), 57.63\(\%\) and 55.33\(\%\) for Random, NDA, FGA and MGHGA, respectively. A lower classification accuracy indicates that the attack causes more damage to HGNNs. Therefore, MGHGA achieves the best efficiency among the compared advanced attacks. The results are similar on the other datasets, especially on the continuous datasets NTU and ModelNet40, which shows that our proposed method is applicable not only to discrete datasets but also to continuous datasets.
MGHGA improves the performance by \(3\%\) on average compared to Random. In particular, MGHGA improves the performance by 5\(\%\) in Citeseer, which shows that MGHGA can add some critically important perturbations with the same budget. FGA shows significant performance on some new tasks due to its strong generalization ability. Among the comparison models, the results of FGA can be viewed as the current optimum. With a small budget, our proposed model improves the attack performance by 2\(\%\) on average, which is a satisfactory result for us. HGNNs are better able to utilize global as well as longer-range contextual information when aggregating neighboring features, resulting in improved robustness of HGNNs over GNNs; attacking HGNNs is therefore more difficult than attacking GNNs. As a preliminary exploration of untargeted attacks on HGNNs, MGHGA shows outstanding performance compared to all other models, which indicates that our proposed attack is capable of achieving an optimal attack.
In addition, we investigate the effect of adding attack constraints (attacking the nodes with the largest node degrees) on the attacks. Comparing MGHGA-D and MGHGA, we find that although MGHGA-D, which attacks the nodes with the largest node degrees, can reduce the performance of HGNNs, its attack performance is not as good as that of the unconstrained MGHGA. For example, in Citeseer, the performance of MGHGA over MGHGA-D is improved by 2.08\(\%\) and 2.67\(\%\) in HGNN-KNN and HGNN-\(\varepsilon\), respectively. The same rule is exhibited by FGA-D and FGA. Intuitively, the unrestricted attack can maximize the efficiency of the attack.
**Running Time**
Table 5 shows the runtimes of several attacks. Specifically, Random has a low runtime on each dataset, but its performance is the worst of all the attacks, and therefore it would not be considered for application in a real attack. Comparing FGA and MGHGA shows that MGHGA's runtime is similar to FGA's while enabling more efficient attacks. For example, in Citeseer, the running times of FGA and MGHGA are 4.10 and 4.15 minutes, respectively, while their classification accuracies are 62.44\(\%\) and 58.05\(\%\) (obtained from Table 4) when the victim model is HGNN-KNN. Our results show that our model achieves an efficient attack while keeping runtimes similar to the comparison models. In most cases, we find that the runtime of the attack is positively correlated with the number of nodes: the runtime is longer when the node size is larger. For example, the numbers of nodes in descending order are ModelNet40, Citeseer, Cora_ML, Cora and NTU, and the runtimes in descending order are ModelNet40, Citeseer, Cora_ML, NTU and Cora. An exception is that NTU \(>\) Cora. Intuitively, NTU is a multi-feature dataset, and HGNNs spend more time processing multi-feature data than single-feature data.
**Attack performance in different budgets**
As shown in Fig. 3, MGHGA achieves satisfactory results under different budgets in all datasets. In Fig. 3 (j), the classification accuracies of FGA and MGHGA are {87.64\(\%\), 85.82\(\%\), 85.29\(\%\)} and {86.62\(\%\), 84.19\(\%\), 84.14\(\%\)} when the budget factor \(\lambda\) is {0.01, 0.05, 0.1}, respectively.
Furthermore, we investigate the effect of the budget on the two constrained attacks (FGA-D and MGHGA-D). Fig. 3 shows that an increased budget can negatively affect the constrained attacks. As an example, the classification accuracies of FGA-D and MGHGA-D are {88.36\(\%\), 87.54\(\%\), 87.88\(\%\)} and {88.12\(\%\), 86.97\(\%\), 87.40\(\%\)} in Fig. 3 (e) when the budget factor \(\lambda\) is {0.01, 0.05, 0.1}. We attribute this to the fact that the high-degree nodes (FGA-D and MGHGA-D attack the node with the largest degree each time) have a limited impact on the attack performance: when the budget is too large, the additional features attacked by FGA-D and MGHGA-D no longer contribute positively to the attack. Therefore, the performance of the restricted attacks first increases and then decreases as the budget increases, which is particularly evident in the Cora and ModelNet40 datasets.
**Attack performance in HGNNs parameters \(K\) and \(\varepsilon\)**
Fig. 4 shows the performance of our proposed model for different parameters \(K\). We find that the performance of MGHGA is independent of the victim model HGNN-KNN's parameter \(K\), i.e., MGHGA still reduces the accuracy of HGNN-KNN regardless of \(K\). For example, in NTU, MGHGA-D and MGHGA reduce the accuracy by {2.08\(\%\), 3.84\(\%\), 1.51\(\%\)} and {3.90\(\%\), 4.27\(\%\), 3.81\(\%\)}, respectively, when \(K\) is {5, 10, 15}.
We investigate the effect of parameter \(\varepsilon\) on our model, and the results are shown in Fig. 5. In each dataset, MGHGA is able to achieve reduced HGNN-\(\varepsilon\) classification accuracy. Specifically, the average performance of MGHGA-D and MGHGA in Citeseer is 3.03\(\%\) and 4.43\(\%\) under all \(\varepsilon\), respectively.
The above results indicate that our model is not affected
\begin{table}
\begin{tabular}{c|c|c c c c c c c} \hline \hline Datasets & Model & Clean & Random & NDA & FGA-D & FGA & MGHGA-D & MGHGA \\ \hline \multirow{2}{*}{Cora} & HGNN-KNN & 59.31\(\pm\)0.3 & 58.65\(\pm\)0.9 & 58.47\(\pm\)0.5 & 58.24\(\pm\)0.8 & 57.63\(\pm\)1.1 & 58.13\(\pm\)0.9 & **55.33\(\pm\)1.8** \\ & HGNN-\(\varepsilon\) & 57.19\(\pm\)0.2 & 56.97\(\pm\)1.1 & 56.73\(\pm\)0.6 & 56.46\(\pm\)0.7 & 55.10\(\pm\)1.0 & 54.94\(\pm\)0.9 & **53.51\(\pm\)1.5** \\ \hline \multirow{2}{*}{Cora\_ML} & HGNN-KNN & 69.33\(\pm\)0.2 & 68.89\(\pm\)0.8 & 68.34\(\pm\)0.6 & 68.11\(\pm\)0.5 & 67.76\(\pm\)1.0 & 67.90\(\pm\)0.7 & **66.12\(\pm\)1.2** \\ & HGNN-\(\varepsilon\) & 69.13\(\pm\)0.3 & 68.77\(\pm\)0.8 & 68.19\(\pm\)0.4 & 68.04\(\pm\)0.8 & 67.42\(\pm\)0.8 & 67.64\(\pm\)0.9 & **65.76\(\pm\)1.3** \\ \hline \multirow{2}{*}{Citeseer} & HGNN-KNN & 64.63\(\pm\)0.2 & 63.34\(\pm\)0.4 & 63.03\(\pm\)0.6 & 62.90\(\pm\)0.6 & 62.44\(\pm\)1.6 & 60.13\(\pm\)0.7 & **58.05\(\pm\)0.9** \\ & HGNN-\(\varepsilon\) & 62.12\(\pm\)0.1 & 61.41\(\pm\)0.5 & 60.90\(\pm\)0.4 & 60.21\(\pm\)0.4 & 59.13\(\pm\)1.3 & 59.34\(\pm\)0.7 & **57.67\(\pm\)1.5** \\ \hline \multirow{2}{*}{NTU} & HGNN-KNN & 75.06\(\pm\)0.2 & 73.99\(\pm\)0.4 & 73.23\(\pm\)0.5 & 72.92\(\pm\)0.6 & 72.11\(\pm\)1.0 & 72.17\(\pm\)0.5 & **71.22\(\pm\)1.2** \\ & HGNN-\(\varepsilon\) & 73.73\(\pm\)0.2 & 73.06\(\pm\)0.7 & 72.61\(\pm\)0.6 & 72.13\(\pm\)0.5 & 70.30\(\pm\)0.8 & 70.49\(\pm\)0.7 & **69.18\(\pm\)1.4** \\ \hline \multirow{2}{*}{ModelNet40} & HGNN-KNN & 89.91\(\pm\)0.2 & 88.15\(\pm\)0.4 & 87.93\(\pm\)0.5 & 87.54\(\pm\)0.7 & 86.67\(\pm\)0.8 & 86.97\(\pm\)0.6 & **85.64\(\pm\)0.9** \\ & HGNN-\(\varepsilon\) & 88.12\(\pm\)0.3 & 87.49\(\pm\)0.4 & 87.02\(\pm\)0.4 & 86.46\(\pm\)0.5 & 85.82\(\pm\)0.9 & 86.07\(\pm\)0.6 & **84.19\(\pm\)1.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of classification accuracy (\(\%\)) of several attack models. The lower the classification success rate, the better the model performance. In each case, the best results are bolded. The results are the average of 10 runs.
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline Datasets & Model & Random & NDA & FGA-D & FGA & MGHGA-D & MGHGA \\ \hline \multirow{2}{*}{Cora} & HGNN-KNN & 0.25 & 0.37 & 2.35 & 2.33 & 2.35 & 2.34 \\ & HGNN-\(\varepsilon\) & 0.25 & 0.37 & 2.35 & 2.33 & 2.35 & 2.34 \\ \hline \multirow{2}{*}{Cora\_ML} & HGNN-KNN & 0.25 & 0.35 & 3.33 & 3.31 & 3.34 & 3.33 \\ & HGNN-\(\varepsilon\) & 0.25 & 0.36 & 3.34 & 3.31 & 3.35 & 3.33 \\ \hline \multirow{2}{*}{Citeseer} & HGNN-KNN & 0.27 & 0.40 & 4.15 & 4.10 & 4.19 & 4.15 \\ & HGNN-\(\varepsilon\) & 0.27 & 0.39 & 4.16 & 4.11 & 4.19 & 4.16 \\ \hline \multirow{2}{*}{NTU} & HGNN-KNN & 0.25 & 0.44 & 2.56 & 2.50 & 2.56 & 2.55 \\ & HGNN-\(\varepsilon\) & 0.25 & 0.44 & 2.56 & 2.49 & 2.56 & 2.55 \\ \hline \multirow{2}{*}{ModelNet40} & HGNN-KNN & 1.23 & 2.54 & 10.64 & 11.13 & 10.67 & 11.21 \\ & HGNN-\(\varepsilon\) & 1.26 & 2.58 & 10.66 & 11.16 & 10.71 & 11.18 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Running time of various attacks in minutes.
Figure 4: The model performance in different parameters \(K\).
Figure 3: The attack performance under different budget factors \(\lambda\).
by the victim model parameters. Previous work has shown that the parameters \(K\) and \(\varepsilon\) do not affect the performance of HGNNs on classification tasks, i.e., HGNNs have good stability [14]. We think that the parameters \(K\) and \(\varepsilon\) do not affect our model due to the stability of HGNNs.
**Attack performance in different decay factors**
The decay factor \(\mu\) is an important parameter of our model. Fig. 6 shows its effect. Note that MGHGA degrades to FGA when \(\mu=0\). We observe that the model performance first increases and then reaches a plateau or decreases as \(\mu\) increases. For example, in Fig. 6 (d), the accuracies of MGHGA are {72.11\(\%\), 71.21\(\%\), 71.23\(\%\)} when \(\mu\) is {0, 1, 1.4}, respectively. In addition, our model performs best when \(\mu=1\) on both discrete and continuous datasets in most cases. Intuitively, when \(\mu\) is small, the momentum gradient mainly depends on the gradient of the HGNNs at the current iteration \(t\), and the performance of MGHGA is similar to that of the plain gradient attack FGA. As \(\mu\) increases, the gradient of MGHGA depends on the gradients of both the previous iterations and the current iteration, and this way of combining gradients starts to improve the performance of MGHGA. However, as \(\mu\) continues to increase, the momentum gradient relies heavily on the gradients of previous iterations and ignores the feedback from the gradient of the latest iteration, and MGHGA performance degrades. The above results indicate that using the momentum gradient can improve the attack performance.
**Transferability of MGHGA**
In this section, we verify the transferability of our model. The heat map in Fig. 7 illustrates the performance of MGHGA under various surrogate and victim models. It is observed that the combination of surrogate and victim models does not substantially affect the MGHGA performance. Specifically, in Citeseer, the performance of MGHGA improves by 0.16\(\%\) when the surrogate and victim models are HGNN-KNN and HGNN-\(\varepsilon\), respectively. We think that MGHGA completes the attack before the victim HGNNs are trained and does not need to access the training parameters of the victim model, so the choice of surrogate and victim models does not affect the performance of MGHGA. In addition, there are differences in the way the hypergraphs of HGNN-KNN and HGNN-\(\varepsilon\) are modeled, making their accuracies in the classification task different, and hence MGHGA performance differs slightly under different combinations of HGNNs.
## 5 Related Work
### Hypergraph Learning
The flexibility and capability of hypergraph learning to model complex higher-order data correlations have garnered increasing attention in recent years [11, 12]. Hypergraph learning usually consists of two parts: constructing hypergraphs and designing hypergraph learning methods. (1) **Constructing hypergraphs**. There are four types of methods for constructing hypergraphs: distance-based, representation-based, attribute-based, and network-based. Specifically, Huang et al. [11] proposed a nearest neighbor construction method whose main aim is to find adjacent vertices in the feature space and construct a hyperedge to connect them. Wang et al. [13] proposed a representation-based hyperedge construction mechanism that exploits the correlation between feature vectors to establish nodes connections. The literature [11] proposed a generation method applicable to attribute hypergraphs, which uses attribute information to construct hypergraphs. Fang et al. [14] used user friendship and mobility information to construct hypergraphs in the location social networks. (2) **Hypergraph learning methods**. Hypergraph learning can be divided into spectral analysis methods, neural network methods, and other methods according to their implementations. Feng et al. [14] first proposed the hypergraph neural network, which extends the spectral approach of graph convolutional neural networks to the hypergraph and designs hypergraph convolution operations. Yadati et al. [15] proposed the HyperGCN, which solves the problem of processing semi-supervised classification on the hypergraph. Huang et al. [11] proposed MultiHGNN, which learns multimodal hypergraph data and uses hypergraph modeling for each modality to accomplish downstream tasks. Jiang et al. [11] proposed a dynamic hypergraph neural network, which consists of two modules: dynamic hypergraph construction and convolution. Tran [2] proposed a directed hypergraph neural network based on the directed hypergraph Laplacian operator for the semi-supervised learning of the directed hypergraph.
### Graph Adversarial Attack
Graph attack algorithms can be classified into different types, mainly by attack type, target, knowledge and capability. (1) Attacks can be classified into three categories based on their type: **the topology attack, the feature attack and the hybrid attack**[1, 12, 13]. In the topology attack, the attacker focuses on modifying the graph topology, which is a common attack method, e.g., FGA [1], Mettack [25], RL-S2V [1] and HyperAttack [15]. Node feature modification is another common attack method, where the attacker focuses on modifying the features of the nodes, e.g., GANI [14]. In Nettack [25] and IG-Attack [15], attackers use both graph topology and node feature attacks to degrade GNNs' accuracy. (2) Based on the target of the attack, we can classify the attacks into the following two categories: **targeted and untargeted attacks**[1, 12, 13]. Dai et al. [1] proposed a targeted universal attack against GNNs, where the attacker's goal is to misclassify some of the test nodes into the attacker-specified labels. Fang et al. [14] injected fake nodes with malicious information into the graph, which made the GNN perform very poorly on the test nodes. (3) According to the attacker's knowledge, attacks can be divided into three categories: **the white box attack, the gray box attack and the black box attack**[1, 12]. In a white box attack, the attacker has full knowledge of the GNNs model and datasets [25]. In a gray-box attack, the attacker only
Figure 5: The model performance in different parameters \(\varepsilon\).
Figure 6: The model performance in different decay factor \(\mu\).
Figure 7: The translatability of MGHGA. Where the x-axis represents the victim model and the y-axis represents the surrogate model.
has some knowledge, e.g., knowing the parameters of GNNs but not the prediction results of nodes [22]. In black-box attacks, the attacker does not know the model architecture, parameters and training data, and can only obtain a small amount of model feedback [15]. Liu et al. [16] proposed a multi-level propagation surrogate white box attack where the attacker knows the model parameters and dataset information. The attack improved the success rate by querying the node information and using batch normalization to enhance the dissimilarity of node representations. Hussain et al. [17] proposed a gray-box attack where the attacker can access the labels of nodes and disrupt the fairness of node classification by injecting adversarial links. Ju et al. [18] proposed a black-box attack method using a reinforcement learning framework, in which the attacker does not use a surrogate model to query model parameters or training labels. (4) Attacks can be classified into three categories based on the capabilities of the attacker: the **single node attack, the partial node attack, and the all node attack**[10, 11, 12]. Chen et al. [10] proposed a single node structure attack model, proving that the single node attack can effectively reduce the accuracy of GNNs. Zang et al. [19] proposed a universal attack with modified edges in which the attacker reduces the effectiveness of GNNs by modifying a particular node or subgraph structure.
## 6 Conclusion
Our work shows that HGNNs are vulnerable to untargeted attacks. In this paper, we present the first untargeted attack on HGNNs, named MGHGA. Considering the training differences between HGNNs and GNNs, MGHGA uses a surrogate model to modify node features before hypergraph modeling. Specifically, MGHGA uses the momentum gradient mechanism to select the features of the attacked nodes, and uses different methods to update discrete and continuous features in the feature modification module. Extensive experimental results show that MGHGA achieves advanced attack performance in node and visual object classification tasks.
In this paper, we only discuss the vulnerability of HGNNs. However, MGHGA has drawbacks. For example, MGHGA is set up as a white-box attack that accesses the HGNNs training parameters during the process of the attack. In some extreme cases, the attacker can only access some or none of the parameters, which leads to MGHGA failure. In our future work, we will consider two main aspects: (1) Consider the robustness of HGNNs in more scenarios, such as gray-box and black-box attacks. (2) According to the conclusion of this paper, we will consider how to improve the robustness of HGNNs under untargeted attacks.
There is a paucity of current research on the robustness of HGNNs. We hope that MGHGA is the first step in opening up exciting research avenues for studying HGNNs attacks and defenses.
## References
* [Chen _et al._2003] D. Chen, X. Tian, Y. Shen, and M. Ouhyoung. On visual similarity based 3d model retrieval. In _Computer Graphics Forum_, volume 22, pages 223-232, 2003.
* [Chen _et al._2018] J. Chen, Y. Wu, X. Xu, Y. Chen, H. Zheng, and Q. Xuan. Fast gradient attack on network embedding. _arXiv preprint arXiv:1809.02797_, 2018.
* [Chen _et al._2020] J. Chen, Y. Chen, H. Zheng, S. Shen, S. Yu, D. Zhang, and Q. Xuan. Mga: Momentum gradient attack on network. _IEEE Transactions on Computational Social Systems_, 8(1):99-109, 2020.
* [Chen _et al._2022] Y. Chen, H. Zheng, X. Chen, Z. Ye, H. Zhao, L. Meng, Z. Wang, and Y. Yang. A practical adversarial attack on graph neural networks by attacking single node structure. In _2022 IEEE 24th Int Conf on High Performance Computing_, pages 143-152, 2022.
* [Cui _et al._2023] F. Cui, W. Dai, Y. Zhu, X. Kan, A. A. Chen Gu, J. Lukemire, L. Zhan, L. He, Y. Guo, and C. Yang. Braingb: A benchmark for brain network analysis with graph neural networks. _IEEE Transactions on Medical Imaging_, 42(2):493-506, 2023.
* [Dai _et al._2018] H. Dai, H. Li, T. Tian, X. Huang, L. Wang, J. Zhu, and L. Song. Adversarial attack on graph structured data. In _International Conference on Machine Learning_, pages 1115-1124, 2018.
* [Dai _et al._2022] J. Dai, W. Zhu, and X. Luo. A targeted universal attack on graph convolutional network by using fake nodes. _Neural Processing Letters_, 54(4):3321-3337, 2022.
* [Dong and Yang2018] H. Dong and Y. Yang. Training generative adversarial networks with binary neurons by end-to-end backpropagation. _arXiv e-prints_, 2018.
* [Fan _et al._2021] H. Fan, B. Wang, P. Zhou, A. Li, Z. Xu, C. Fu, H. Li, and Y. Chen. Reinforcement learning-based black-box evasion attacks to link prediction in dynamic graphs. In _2021 IEEE 23rd Int Conf on High Performance Computing & Communications_, pages 933-940, 2021.
* [Fang _et al._2014] Q. Fang, J. Sang, C. Xu, and Y. Rui. Topic-sensitive influencer mining in interest-based social media networks via hypergraph learning. _IEEE Transactions on Multimedia_, 16(3):796-812, 2014.
* [Fang _et al._2022] J. Fang, H. Wen, J. Wu, Q. Xuan, Z. Zheng, and C. K. Tse. Gani: Global attacks on graph neural networks via imperceptible node injections. _arXiv preprint arXiv:2210.12598_, 2022.
* [Feng _et al._2019] Y. Feng, H. You, Z. Zhang, R. Ji, and Y. Gao. Hypergraph neural networks. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 33, pages 3558-3565, 2019.
* [Fischer _et al._2021] Maximilian T. Fischer, Devanshu Arya, Dirk Streeb, Daniel Seebacher, Daniel A. Keim, and Marcel Worring. Visual analytics for temporal hypergraph model exploration. _IEEE Transactions on Visualization and Computer Graphics_, 27(2):550-560, FEB 2021.
* [Gao _et al._2022] Yue Gao, Zizhao Zhang, Haojie Lin, Xibin Zhao, Shaoyi Du, and Changqing Zou. Hypergraph learning: Methods and practices. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(5):2548-2566, 2022.
* [Gao _et al._2023] Yue Gao, Yifan Feng, Shuyi Ji, and Rongrong Ji. Hgnn+: General hypergraph neural networks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 45(3):3181-3199, 2023.
* [Goodfellow _et al._2014] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_, 2014.
* [Han _et al._2009] Yi Han, Bin Zhou, Jian Pei, and Yan Jia. Understanding importance of collaborations in co-authorship networks: A supportiveness analysis approach. In _Proceedings of the SIAM International Conference on Data Mining_, pages 1111-1122, 04 2009.
* [Heydari and Livi2022] Sajjad Heydari and Lorenzo Livi. Message passing neural networks for hypergraphs. In _Artificial Neural Networks and Machine Learning_, volume 13530, pages 583-592, 2022.
* [Hu _et al._2023] Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai Liu, and Heyuan Shi. Hyperattack: Multi-gradient-guided white-box adversarial structure attack of hypergraph neural networks, 2023.
* [Huang _et al._2009] Yuchi Huang, Qingshan Liu, and Dimitris Metaxas. Video object segmentation by hypergraph cut. In _2009 IEEE conference on computer vision and pattern recognition_, pages 1738-1745. IEEE, 2009.
* [Huang _et al._2015] Sheng Huang, Mohamed Elhoseiny, Ahmed Elgammal, and Dan Yang. Learning hypergraph-regularized attribute predictors. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 409-417, 2015.
* [Huang _et al._2021] Jing Huang, Xiaolin Huang, and Jie Yang. Residual enhanced multi-hypergraph neural network. In _2021 IEEE international conference on image processing_, pages 3657-3661. IEEE, 2021.
* [Huang _et al._2023] Jin Huang, Tian Lu, Xuebin Zhou, Bo Cheng, Zhibin Hu, Weihao Yu, and Jing Xiao. Hyperdne: Enhanced hypergraph neural network for dynamic network embedding. _NEUROCOMPUTING_, 527:155-166, MAR 28 2023.
* [Hussain _et al._2022] Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth Lex, Markus Strohmaier, and Roman Kern. Adversarial inter-group link injection degrades the fairness of graph neural networks. _arXiv preprint arXiv:2209.05957_, 2022.
* [Ji _et al._2023] Junzhong Ji, Hao Jia, Yating Ren, and Minglong Lei. Supervised contrastive learning with structure inference for graph classification. _IEEE Transactions on Network Science and Engineering_, 10(3):1684-1695, MAYS-JUN 2023.
* [Jiang _et al._2019] Jianwen Jiang, Yuxuan Wei, Yifan Feng, Jingxuan Cao, and Yue Gao. Dynamic hypergraph neural networks. In _2019 International Joint Conferences on Artificial Intelligence_, pages 2635-2641, 2019.
* [Jiang _et al._2022] Chao Jiang, Yi He, Richard Chapman, and Hongyi Wu. Camouflaged poisoning attack on graph neural networks. In _Proceedings of the 2022 International Conference on Multimedia Retrieval_, page 451-461, New York, NY, USA, 2022.
* [Jin _et al._2020] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. _Graph Structure Learning for Robust Graph Neural Networks_. 2020.
* [Jingjing _et al._2022] Lin Jingjing, Ye Zhonglin, Zhao Haixing, and Fang Lusheng. Deephgnn: A novel deep hypergraph neural network. _Chinese Journal of Electronics_, 31(5):958-968, SEP 2022.
* [Ju _et al._2022] Mingxuan Ju, Yujie Fan, Chuxu Zhang, and Yanfang Ye. Let graph be the go board: Gradient-free node injection attack for graph neural networks via reinforcement learning. _arXiv preprint arXiv:2211.10782_, 2022.
* [Kipf and Welling2016] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_, 2016.
* [Kong _et al._2023] Wei Kong, Yufang Xu, Shuaiqun Wang, Kai Wei, Gen Wen, Yaling Yu, and Yuemin Zhu. A novel longitudinal phenotype-genotype association study based on deep feature extraction and hypergraph models for alzheimer's disease. _BIOMOLECULES_, 13(5), APR 23 2023.
* [Lin _et al._2022] Lu Lin, Ethan Blaser, and Hongning Wang. Graph structural attack by perturbing spectral distance. In _The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, pages 989-998, 2022.
* [Liu _et al._2022a] Ao Liu, Beibei Li, Tao Li, Pan Zhou, and Rui Wang. An-gcn: An anonymous graph convolutional network against edge-perturbing attacks. _IEEE Transactions on Neural Networks and Learning Systems_, pages 1-15, 2022.
* [Liu _et al._2022b] Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, and Stan Z. Li. Towards reasonable budget allocation in untargeted graph structure attacks via gradient debias. In _Advances in Neural Information Processing Systems_, 2022.
* [Liu _et al._2022c] Zihan Liu, Ge Wang, Yun Luo, and Stan Z. Li. What Does the Gradient Tell When Attacking the Graph Structure. _arXiv e-prints_, August 2022.
* [Ma _et al._2022] Zhongtian Ma, Zhiguo Jiang, and Haopeng Zhang. Hyperspectral image classification using feature fusion hypergraph convolution neural network. _IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING_, 60, 2022.
* [Min _et al._2023] Xin Min, Wei Li, Panpan Ye, Tianlong Ji, and Weidong Xie. Multi-channel hypergraph topic neural network for clinical treatment pattern mining. _Information Processing & Management_, 60(4), JUL 2023.
* [Nguyen Thanh _et al._2023] Toan Nguyen Thanh, Nguyen Duc Khang Quach, Thanh Tam Nguyen, Thanh Trung Huynh, Viet Hung Vu, Phi Le Nguyen, Jun Jo, and Quoc Viet Hung Nguyen. Poisoning gnn-based recommender systems with generative surrogate-based attacks. _ACM TRANSACTIONS ON INFORMATION SYSTEMS_, 41(3), feb 2023.
* [Raman _et al._2017] M. R. Gauthama Raman, Nivethitha Somu, Kannan Kirthivasan, and V. S. Shankar Sriram. A hypergraph and arithmetic residue-based probabilistic neural network for classification in intrusion detection systems. _NEURAL NETWORKS_, 92(SI):89-97, AUG 2017.
* [Saxena _et al._2023] Rahul Saxena, Spandan Pankaj Patil, Atul Kumar Verma, Mahipal Jadeja, Pranshu Vyas, Vikrant Bhateja, and Jerry Chun-Wei Lin. An efficient bet-gcn approach for link prediction. _International Journal of Interactive Multimedia and Artificial Intelligence_, 8(1):38-52, MAR 2023.
* [Sen _et al._2008] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. _AI magazine_, 29(3):93-93, 2008.
* [Shafahi _et al._2018] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. _Advances in neural information processing systems_, 31, 2018.
* [Sharma _et al._2023] Ansh Kumar Sharma, Rahul Kukreja, Mayank Kharbanda, and Tanmoy Chakraborty. Node injection for class-specific network poisoning. _arXiv preprint arXiv:2301.12277_, 2023.
* [Sun _et al._2020] Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, and Vasant Honavar. Adversarial attacks on graph neural networks via node injections: A hierarchical reinforcement learning approach. In _Proceedings of The Web Conference 2020_, WWW '20, page 673-683, 2020.
* [Tao _et al._2022] Shuchang Tao, Qi Cao, Huawei Shen, Yunfan Wu, Liang Hou, and Xueqi Cheng. Adversarial camouflage for node injection attack on graphs. _arXiv e-prints_, 2022.
* [Tran and Tran2020] Loc Hoang Tran and Linh Hoang Tran. Directed hypergraph neural network. _arXiv preprint arXiv:2008.03626_, 2020.
* [Wang _et al._2015] Meng Wang, Xueliang Liu, and Xindong Wu. Visual classification by l1 hypergraph modeling. _IEEE Transactions on Knowledge and Data Engineering_, 27(9):2564-2574, 2015.
* [Wang _et al._2020a] Binghui Wang, Tianxiang Zhou, Minhua Lin, Pan Zhou, Ang Li, Meng Pang, Cai Fu, Hai Li, and Yiran Chen. Evasion attacks to graph neural networks via influence function. _arXiv preprint arXiv:2009.00203_, 2020.
* [Wang _et al._2020b] Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, and Qinghua Zheng. Scalable attack on graph data by injecting vicious nodes. _Data Mining and Knowledge Discovery_, 34(5):1363-1389, SEP 2020.
* [Wang _et al._2022a] Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, and S Felix Wu. Fake node attacks on graph convolutional networks. _Journal of Computational and Cognitive Engineering_, 1(4):165-173, 2022.
* [Wang _et al._2022b] Yongwei Wang, Yong Liu, and Zhiqi Shen. Revisiting item promotion in gnn-based collaborative filtering: A masked targeted topological attack perspective, 2022.
* [Wu _et al._2015] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 1912-1920, 2015.
* [Wu _et al._2019] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples on graph data: Deep insights into attack and defense. _arXiv preprint arXiv:1903.01610_, 2019.
* [Wu _et al._2023] Hanrui Wu, Yuguang Yan, and Michael Kwok-Po Ng. Hypergraph collaborative network on vertices and hyperedges. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 45(3):3245-3258, MAR 1 2023.
* [Yadati _et al._2019] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. Hypergcn: A new method for training graph convolutional networks on hypergraphs. _Advances in neural information processing systems_, 32, 2019.
* [Yang _et al._2022] Shuiqiao Yang, Bao Gia Doan, Paul Montague, and Olivier DeVel. Transferable graph backdoor attack. In _Proceedings of the 25th International Symposium on Research in Attacks, Intrusions and Defenses_, pages 321-332, 2022.
* [Yang _et al._2023] Kai Yang, Yuan Liu, Zijuan Zhao, Xingxing Zhou, and Peijin Ding. Graph attention network via node similarity for link prediction. _The European Physical Journal B_, 96(3), MAR 2023.
* [Yu _et al._2023] Ruowang Yu, Yu Xin, Yihong Dong, and Jiangbo Qian. A time sequence coding based node-structure feature model oriented to node classification. _Expert Systems with Applications_, 223, AUG 1 2023.
* [Zang _et al._2023] Xiao Zang, Jie Chen, and Bo Yuan. Guap: Graph universal attack through adversarial patching. _arXiv preprint arXiv:2301.01731_, 2023.
* [Zhang _et al._2022] He Zhang, Xingliang Yuan, Chuan Zhou, and Shirui Pan. Projective ranking-based gnn evasion attacks. _IEEE Transactions on Knowledge and Data Engineering_, 2022.
* [Zhu _et al._2019] Dingyuan Zhu, Ziwei Zhang, Peng Cui, and Wenwu Zhu. Robust graph convolutional networks against adversarial attacks. In _Proceedings of the 25th ACM |
Differential Evolution Algorithm based Hyper-Parameters Selection of Transformer Neural Network Model for Load Forecasting
###### Abstract
Accurate load forecasting plays a vital role in numerous sectors, but accurately capturing the complex dynamics of dynamic power systems remains a challenge for traditional statistical models. For these reasons, time-series models (ARIMA) and deep-learning models (ANN, LSTM, GRU, etc.) are commonly deployed and often experience higher success. In this paper, we analyze the efficacy of the recently developed Transformer-based Neural Network model in load forecasting. Transformer models have the potential to improve load forecasting because of their ability to learn long-range dependencies derived from their Attention Mechanism. We apply several metaheuristics namely Differential Evolution to find the optimal hyperparameters of the Transformer-based Neural Network to produce accurate forecasts. Differential Evolution provides scalable, robust, global solutions to non-differentiable, multi-objective, or constrained optimization problems. Our work compares the proposed Transformer-based Neural Network model integrated with different metaheuristic algorithms by their performance in load forecasting based on numerical metrics such as Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). Our findings demonstrate the potential of metaheuristic-enhanced Transformer-based Neural Network models in load forecasting accuracy and provide optimal hyperparameters for each model.
Deep Learning, Differential Evolution, Particle Swarm Optimization, Genetic Algorithm, Metaheuristics
Footnote †: Copyright: 978-1-5386-5541-2/18/$31.00 ©2023 IEEE
## I Introduction
Load forecasting is the application of science and technology to predict the future demand for electricity or power in a given geographical location at some specific future time. It plays a crucial role in various sectors, such as energy, trading and markets, infrastructure planning, and disaster management. Traditional load prediction methods rely on historical data and models that simulate patterns of electricity consumption, but such models often face challenges in accurately capturing the complex dynamics of power systems [1]. To model this complexity, time-series models like the Auto-Regressive Integrated Moving Average (ARIMA) [2] and various deep learning techniques have been introduced, such as Artificial Neural Networks (ANN) [3], Recurrent Neural Networks (RNN) [4], Long Short-Term Memory (LSTM) [5], and Gated Recurrent Units (GRU) [6]. These models improve the accuracy of load forecasts by leveraging large datasets and discovering hidden patterns to predict future values.
Recently, Transformer models [7] have revolutionized machine learning due to their unique architecture. Because they can run in parallel across multiple GPUs, they are more efficient than other deep learning models and take less time to train than sequential models such as LSTMs [8]. Furthermore, since Transformer models generate results after training through backpropagation, they can produce forecasts using a larger reference window than RNNs, LSTMs, and GRUs [9]. This window gives Transformers a better ability to identify long-range dependencies in sequences and greater resistance to the vanishing gradient problem [10] compared to other deep learning models. The Transformer's strength in identifying long-range dependencies has made it the model of choice for natural language processing, with applications in machine translation, text generation, speech recognition, and more.
Like any other deep learning model, the Transformer's performance depends on the chosen hyperparameters. In this work, we utilized the metaheuristics Genetic Algorithm [11], Differential Evolution [12], and Particle Swarm Optimization [13] to identify ideal hyperparameters. Although hyperparameter search techniques like Grid Search [14], Random Search [15], and Bayesian Optimization [16] are substantial improvements over manual tuning, they are inferior to the metaheuristics discussed in this paper: the metaheuristics are more efficient than grid search and random search, and more robust and scalable than Bayesian Optimization. Furthermore, these algorithms can be applied to nonlinear, nonconvex, and noncontinuous functions [17][18].
Traditional Transformers take a sequence of tokenized
inputs. For Natural Language Processing these inputs are words but can be generalized to other sequential data for other tasks. These tokens are then run through several encoder and decoder layers. Encoders process the input using the self-attention mechanism to find dependencies between tokens and positional encoding to maintain the ordering of tokens. The decoders then generate output token sequences using similar self-attention mechanisms, but also a unique encoder-decoder attention layer that allows it to read the encoded information.
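The core computation inside each attention layer is the standard scaled dot-product attention of the original Transformer, which for queries \(Q\), keys \(K\), and values \(V\) with key dimension \(d_{k}\) reads:

\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V\]

Multi-headed attention applies this operation several times in parallel with separately learned projections of \(Q\), \(K\), and \(V\), then concatenates the results.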
In this work, we created a custom Transformer Neural Network model. Our model uses only the encoder of the Transformer and applies it to enhance deep learning models for load forecasting. This research is unique in investigating the Transformer's attention mechanism outside of its usual scope of natural language processing. We identify that the Transformer's ability to capture long-range dependencies can be applied to load forecasting.
Our work seeks to fill this void by proposing Differential Evolution-optimized custom Transformer Neural Networks specifically designed for load forecasting. To evaluate the results, we also integrated Particle Swarm Optimization and the Genetic Algorithm with the Transformer Neural Networks to benchmark against our proposed Differential Evolution-integrated Transformer Neural Network. In particular, our work is the first to propose a Differential Evolution-based hyperparameter tuning scheme for a Transformer-based Neural Network model for load forecasting.
## II Preliminaries
### _Differential Evolution_
Differential Evolution (DE) is a stochastic population-based optimization algorithm developed by Rainer Storn and Kenneth Price in 1997. It is used to find approximate solutions to a wide class of challenging objective functions. DE can be used on functions that are nondifferentiable, non-continuous, non-linear, noisy, flat, multi-dimensional, possess multiple local minima, contain constraints, or are stochastic [19]. A general problem formulation that DE could solve is:
For objective function \(f:X\subseteq\mathbb{R}^{n}\rightarrow\mathbb{R}\) where \(X\neq\emptyset\), find \(s\in X\) such that \(f(s)\leq f(x)\ \forall x\in X\), where \(f(s)\neq-\infty\).
Its versatility comes from its unique implementation that does not require the gradient of the function. DE obtains a minimum solution by initializing a set of candidate solutions and iteratively improving each solution by applying various genetic operators [20].
#### Ii-A1 Initialization
Suppose \(f\) has \(D\) parameters. An \(N\)-sized candidate solution population is initialized, with each candidate solution modeled as \(x_{i}\), a \(D\)-parameter vector.
\[x_{i,G}=[x_{1,i,G},x_{2,i,G}...x_{D,i,G}]\text{ where }i=1,2...N\] \[\text{ and }G\text{ is the generation number}\]
Each index \(x_{j,i,G}\) with \(j=1,2...D\) represents a parameter to be manipulated to approximate a solution to the objective function [21]. During the initialization of the first generation, each parameter of every candidate solution is set randomly within bounds \([x_{j}^{L},x_{j}^{U}]\).
\[x_{j}^{L}\leq x_{j,i,1}\leq x_{j}^{U}\]
#### Ii-A2 Mutation
A mutation is a stochastic change that expands the candidate solution search space. Mutations are used in DE to prevent the algorithm from converging upon a local optimum [22]. In the original mutation scheme devised by Storn, a mutant vector \(v_{i}\) is created by randomly sampling three candidate solution vectors \(x_{r_{1}}\), \(x_{r_{2}}\), \(x_{r_{3}}\) such that \(r_{1},r_{2},r_{3}\) and \(i\) are distinct. The mutant vector is obtained by adding the weighted difference of two of the vectors to the third.

\[v_{i,G+1}=x_{r_{1},G}+F\times(x_{r_{2},G}-x_{r_{3},G})\]
\(F\in[0,2]\) represents the scale factor controlling the magnitude of the mutation.
#### Ii-A3 Crossover
Crossover is how successful candidate solutions pass their characteristics to the following generations. A trial vector \(u_{i,G+1}\) is created by combining the original vector \(x_{i,G}\) and its corresponding mutant vector \(v_{i,G+1}\). A widely used crossover scheme is described below [23]:

\[u_{j,i,G+1}=\left\{\begin{array}{ll}v_{j,i,G+1},&\text{if }p_{rand}\sim U(0,1)\leq CR\\ x_{j,i,G},&\text{otherwise}\end{array}\right\}\]

for each \(j=1,2,\dots,D\), with at least one component taken from the mutant vector so that \(u_{i,G+1}\neq x_{i,G}\).
#### Ii-A4 Selection
Given both the initial target vector and generated trial vector, the fitness of each is evaluated using the initial objective or cost function \(f\). The vector with the lower cost is passed to the next generation.
\[x_{i,G+1}=\left\{\begin{array}{ll}u_{i,G+1},&\text{if }f(u_{i,G+1})\leq f(x_{i,G})\\ x_{i,G}&\text{else}\end{array}\right\}\]
The Differential Evolution Algorithm is illustrated in Figure 1 below.
Mutation, crossover, and selection are cycled until either the maximum number of generations is attained or the candidate solutions meet a predefined accuracy threshold as defined.
Fig. 1: Differential Evolution Algorithm
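For concreteness, the following is a minimal NumPy sketch of the DE/rand/1/bin cycle described above. The population size, scale factor \(F\), crossover rate \(CR\), and generation count shown are illustrative defaults, not the values used in this paper, and the bound clipping after mutation is a practical addition.

```python
import numpy as np

def differential_evolution(f, bounds, N=20, F=0.8, CR=0.9, generations=100):
    """Minimal sketch of DE/rand/1/bin. f maps a D-dimensional parameter
    vector to a scalar cost; bounds has shape (D, 2) with per-parameter
    lower/upper limits."""
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    D = len(bounds)
    pop = lo + np.random.rand(N, D) * (hi - lo)          # initialization
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(N):
            # mutation: three distinct candidates, all different from i
            r1, r2, r3 = np.random.choice([k for k in range(N) if k != i],
                                          size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # binomial crossover; the forced index guarantees u != pop[i]
            mask = np.random.rand(D) <= CR
            mask[np.random.randint(D)] = True
            u = np.where(mask, v, pop[i])
            # selection: keep the vector with the lower cost
            cu = f(u)
            if cu <= cost[i]:
                pop[i], cost[i] = u, cu
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```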
## III Proposed Approach
This section describes the implementation of integrating metaheuristics with the Transformer-based Neural Network for load forecasting. Each metaheuristic is used to identify the optimal set of hyperparameters and the efficacy of the hyperparameters is measured using the Mean Squared Error (MSE) and Mean Average Percentage Error (MAPE) metrics. The integrated Differential Evolution mechanism selection strategy for the hyperparameters is outlined in Figure 2 above.
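To illustrate how a DE candidate vector connects to model training, the sketch below shows a hypothetical fitness wrapper. The helper `build_transformer_model` (sketched in the next subsection), the data arrays `X_train`/`y_train`/`X_val`/`y_val`, and the log-scale encoding of the learning rate are all assumptions for illustration, not details confirmed by the paper.

```python
import tensorflow as tf

def fitness(hparams):
    """Hypothetical fitness wrapper for the DE loop: decodes a candidate
    vector into (batch size, log10 learning rate, epochs), trains the
    Transformer-based network, and returns the validation MSE to minimize."""
    batch_size = int(round(hparams[0]))
    learning_rate = 10.0 ** hparams[1]      # log-scale search (assumption)
    epochs = int(round(hparams[2]))
    model = build_transformer_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mse")
    history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
                        validation_data=(X_val, y_val), verbose=0)
    return history.history["val_loss"][-1]
```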
### _Transformer-based Neural Network_
We implemented the Transformer-based Neural Network by building a sequential model and adding layers one by one. The input layer is initialized with 36 nodes and then passed through a dense time-distributed layer of 64 nodes. Next, the output of the previous operation is fed to an 8-headed attention layer with a dimension of 64 and a dropout rate of 0.1. The result is then flattened and run through 2 dense layers containing 64 nodes each before returning through a 24-node output layer. The activation function used for all layers except the output layer is the Rectified Linear Unit (ReLU) [24], modeled below.
\[f(x)=max(0,x)\]
The output uses a Linear Activation Function.
\[f(x)=x\]
All Transformer-based Neural Networks use the implemented metaheuristic algorithms to optimize the batch size, learning rate, and number of epochs. Optimization is done by minimizing the loss, measured as the Mean Squared Error. The metaheuristic-optimized Transformer-based Neural Networks are assessed by comparing the MAPE for each set of hyperparameters found.
The entire architecture of the proposed model is portrayed in Figure 3 below.
The multi-headed attention is primarily used for the model to simultaneously operate different parts of the input sequence, improving the performance. The normalization layer is then passed on top of the attention layer to make the model robust. This avoids the scenario of the model relying too much on specific features, which reduces over-fitting. The best-optimized set of hyperparameters obtained from each metaheuristic algorithm is then applied to each Transformer-based Neural Network and subsequently tested on the test dataset to generate the results.
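A minimal Keras sketch of the layer stack described above follows. The exact input tensor shape is an assumption (the paper only specifies a 36-node input layer), and the residual addition before normalization is likewise an assumption about how the normalization sits on top of the attention layer.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_transformer_model(input_shape=(36, 1)):
    """Sketch of the described architecture, under the stated assumptions."""
    inputs = tf.keras.Input(shape=input_shape)
    # dense time-distributed layer of 64 nodes
    x = layers.TimeDistributed(layers.Dense(64, activation="relu"))(inputs)
    # 8-headed self-attention with dimension 64 and dropout rate 0.1
    attn = layers.MultiHeadAttention(num_heads=8, key_dim=64, dropout=0.1)(x, x)
    # normalization on top of the attention layer (residual add is an assumption)
    x = layers.LayerNormalization()(x + attn)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(24, activation="linear")(x)  # 24-hour-ahead forecast
    return tf.keras.Model(inputs, outputs)
```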
Fig. 2: Mechanism of the Differential Evolution algorithm based hyper-parameter selection approach for the Load Forecasting task.

Fig. 3: The proposed Transformer-based deep learning model.

Each model's best set of batch size, epochs, and learning rate is summarized in Table I below:
After passing through the attention layer, the output is flattened. This means it is reshaped from a 3D to a 2D tensor, which allows the subsequent layers to treat the output as a sequence of 2D inputs. The flattened output is then passed through two dense layers, each consisting of 64 nodes. These dense layers allow the model to capture more complex patterns and relationships in the data.
The output layer contains 24 nodes, as the model is designed to produce a prediction 24 hours ahead. The custom transformer model architecture developed is intended for the specific task of short-term load forecasting. Its primary aim is to aid the industry by operating on load data to predict variations in various load parameters.
## IV Experimental Details
### _Dataset Description_
For this project, the _Load Dataset1_ was curated using meteorological data scraped from the official website of the Government of Canada [25]. The dataset covers the period from 1st January 2017 to 4th July 2023 in Ottawa, Ontario. It contains 19 variables, capturing details such as date, time (in 24 hours), year, quarter, month, week of the year, day of the year, state holiday, hour of the day, day of the week, day type, temperature (in \({}^{\circ}\)C), dew point temperature (in \({}^{\circ}\)C), relative humidity (%), wind speed (in km/h), visibility (in km), precipitation amounts (in mm), daily peak (in MW), and hourly demand (in MW). In total, there are 96,432 rows, with each row representing data for a specific hour.
Footnote 1: Dataset Link: [https://doi.org/10.7910/DVN/08QASH](https://doi.org/10.7910/DVN/08QASH)
### _Preprocessing_
During the preprocessing stage, we addressed missing data in the compiled dataset. Since the precipitation column had significant missing information, it was excluded from the analysis. For temperature, only \(0.03\%\) of the data was missing. To forecast up to 24 hours into the future, we used 3 hours of past data. The data was standardized using the StandardScaler class from the sklearn.preprocessing library [26].
The dataset was then split into three subsets: the training dataset, denoted as \(D_{train}\), the validation dataset, denoted as \(D_{val}\), and the testing dataset, denoted as \(D_{test}\). The training dataset covers the period from January 1st, 2017, to December 31st, 2020. Within this dataset, 25% of the data was allocated to the validation dataset. The remaining data, extending until July 14th, 2023, constitutes the testing dataset.
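The sketch below illustrates this windowing step on standardized data. The per-hour feature count (12, so that 3 hours flatten to the 36-value model input) and the synthetic stand-in arrays are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

def make_windows(features, demand, past=3, horizon=24):
    """Pairs each `past`-hour history with the following `horizon` hourly
    demand values; flattening the history yields the model input vector."""
    xs, ys = [], []
    for t in range(past, len(features) - horizon + 1):
        xs.append(features[t - past:t].reshape(-1))
        ys.append(demand[t:t + horizon])
    return np.array(xs), np.array(ys)

# Illustrative usage with synthetic data standing in for the curated dataset:
rng = np.random.default_rng(0)
feature_matrix = rng.normal(size=(1000, 12))   # 12 per-hour features (assumption)
hourly_demand = rng.normal(size=1000)
scaled = StandardScaler().fit_transform(feature_matrix)
X, y = make_windows(scaled, hourly_demand)     # X: (n, 36), y: (n, 24)
```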
### _Experimental Setups_
The experiments in this work are implemented in Python 3.10.11 using three libraries: TensorFlow 2.11.0, TensorFlow's built-in Keras API, and NumPy 1.21.
## V Results and Discussion
Using the differential evolution-based hyperparameter tuning of the Transformer-based deep Neural Network proposed in the preceding section, we obtained the mean absolute percentage error (MAPE). This MAPE was compared to the MAPEs generated by the genetic algorithm and particle swarm optimization-based hyperparameter tuning of the same custom architecture. The code used in this paper is linked below2.
Footnote 2: Code Link: [https://github.com/AnuvabSen1/Meta-Transformer](https://github.com/AnuvabSen1/Meta-Transformer)
The StandardScaler has been used to improve convergence and stability during model training on seasonal data. This scaler prevents features with larger magnitudes from dominating the training process and also normalizes the dataset, allowing the model to learn effectively from the data. These steps are necessary for improving forecasting models.
The mean squared error (MSE) is used to measure the fitness of the differential evolution algorithm.
MSE serves as the loss function and is plotted against the number of epochs for the entire training duration as shown in Figure 4.
The plot helps us to observe how the loss changes over time and whether the model is optimizing or overfitting.
Mean Absolute Percentage Error (MAPE) is used to gauge the accuracy of the entire model. It provides a measure of the average percentage difference between predicted values and the actual values.
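For reference, the two evaluation metrics are defined in the standard way, with \(y_{i}\) the actual and \(\hat{y}_{i}\) the predicted load over \(n\) samples:

\[\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\hat{y}_{i}\right)^{2},\qquad\mathrm{MAPE}=\frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_{i}-\hat{y}_{i}}{y_{i}}\right|\]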
Table II below provides us with a comparison of MAPE among various metaheuristic optimization algorithms used here.
Fig. 4: Training & Validation Loss vs Epochs plots for the Transformer-based Neural Network DE model
The results show that the Differential Evolution (DE) algorithm outperforms the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) in terms of mean absolute percentage error (MAPE).
Differential Evolution's superior performance can be attributed to a few factors. Using its genetic operators, DE can more effectively explore the search space and exploit promising regions for optimal solutions, thereby producing the most desirable results. To visually understand the results and accuracy of the load forecasting model proposed here, we have used two plots.
The first plot provides us with a 24-hour prediction for the best-performing DE on the Transformer-based Neural Network model as shown in Figure 6.
This shows that DE on the Transformer-based Neural Network gives a fairly accurate prediction on test data. The second graph plots the hourly demand variation for 24 hours starting from the \(N^{\text{th}}\) hour, shown in Figure 5.
The plots indicate that the accuracy decreases as \(N\) increases: the further into the future we predict, the less accurate the results we obtain. The mutation operator introduces random disturbances to prevent early convergence towards a local minimum. The crossover operator passes on successful attributes to accelerate the convergence process even further.
The selection operator preserves the fittest candidate solutions to improve the quality of results.
The results confirm that metaheuristic optimization algorithms consistently outperform manual selection of hyperparameters. Particle Swarm Optimization (PSO) performs better than the Genetic Algorithm (GA) but falls behind Differential Evolution (DE). PSO suffers from rapid convergence, limiting its ability to reach the global optimum, which could explain its performance.
Fig. 5: Predicted plots for hourly demand for the next 24 hours starting from the \(N^{\text{th}}\) hour for the Transformer-based Neural Network DE model.

Fig. 6: 24-hour-ahead forecast plot for the Transformer-based Neural Network DE model.

## VI Conclusion and Future Work

This paper applies several metaheuristic algorithms to a custom Transformer-based Neural Network to find the optimal hyperparameters. This selection method proved to be far more efficient and accurate than manual selection. Amongst the metaheuristics tested, Differential Evolution proved to be the best because of its mutation and selection operators, which not only allowed the algorithm to thoroughly search the sample space but also to filter and refine the best solutions. Differential Evolution's performance was followed by Particle Swarm Optimization and finally the Genetic Algorithm.
Due to limited computational resources, the metaheuristic algorithms could not be applied to sufficiently large populations over many generations. If this research is extended with more powerful devices, future studies over larger populations and more generations could corroborate our findings. Future studies may also investigate the performance of other metaheuristic algorithms for hyperparameter tuning of similar deep learning models across a wide range of forecasting tasks.
## VII Acknowledgement
This work was made possible through the generous backing of Mitacs and the dedicated support of Dr. Chi Tang, Associate Professor at McMaster University. Their invaluable support enabled Anuvab Sen to embark on an enriching journey of undergraduate research at McMaster University, Canada.
Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse
###### Abstract
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of \(6.6\%\). Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. Besides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
\({}^{1}\)National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, China
\({}^{2}\)Department of Networked Intelligence, Peng Cheng Laboratory, China
[email protected], {mazhy, yult, zhouhh}@pcl.ac.cn, [email protected]
## Introduction
Originally, the prototype of deep neural networks was inspired by the biological vision system [12]. To date, deep neural networks not only occupy an unassailable position in the field of computer vision [13], but have also become better models of the biological visual cortex than traditional models in the neuroscience community [1, 14, 15]. They have been successful at predicting the neural responses in primate visual cortex, matching the hierarchy of the ventral visual stream [16, 17, 18], and even controlling neural activity [1, 19, 20]. Moreover, as training paradigms of mice [21] and techniques for collecting neural activity [18] have greatly improved, there is strong interest in exploring mouse visual cortex. Deep neural networks also play an important role in revealing the functional mechanisms and structures of mouse visual cortex [22, 2, 19, 2, 16].
Compared to biological networks, Artificial Neural Networks discard the complexity of neurons [13]. Spiking Neural Networks, incorporating the concepts of time and spikes, are more biologically plausible models [15]. More specifically, because of their capabilities of encoding information with spikes, capturing the dynamics of biological neurons, and extracting spatio-temporal features, deep SNNs are highly likely to yield brain-like representations [1, 12, 13, 14]. However, deep SNNs have not been employed to model visual cortex due to the immaturity of training algorithms. Recently, a state-of-the-art directly trained deep SNN [10] has made it possible to use deep SNNs as visual cortex models.
**Contributions.** In this work, we conduct large-scale neural representation similarity experiments on SNNs and other high-performing deep neural networks to study the brain's visual processing mechanisms, with three datasets and three similarity metrics (Figure 1). Specifically, to the best of our knowledge, we are the first to use deep SNNs to fit complex biological neural representations and explore the biological visual cortex. We summarize our main contributions in four points as follows.
* We find that SNNs outperform their CNN counterparts with the same depth and almost the same architectures in almost all experiments. In addition, even with very different depths and architectures, SNNs can achieve top performance in most conditions.
* By making a more direct comparison between macaques
and mice for the first time, we reveal the differences in the visual pathways across the two species in terms of the homogeneity of visual regions and the increases of receptive field sizes across cortical visual pathways, which is consistent with previous physiological work.
* The multi-branch structures in neural networks benefit neural representation similarity to mouse visual cortex, providing computational evidence that parallel information processing streams are widespread between cortical regions in the mouse visual system.
* Comparing the results of two macaque neural datasets under different stimuli, we reveal that the macaque vision system may have functional specialization for processing human faces and other natural scenes.
Altogether, as the first work to apply deep SNNs to fit neural representations, we shed light on visual processing mechanisms in both macaques and mice, demonstrating the potential of SNNs as a novel and powerful tool for research on the visual system. Our codes and appendix are available at _[https://github.com/Grasshhw/SNN-Neural-Similarity_](https://github.com/Grasshhw/SNN-Neural-Similarity_).
## Related Work
There are plenty of computational models of macaque and mouse visual systems for exploring the visual processing mechanisms recently. We summarize some of the outstanding work in the following.
**The network models of macaque visual system.** In the early days, studies basically used simple feedforward neural networks as the models of the macaque visual system [1, 13, 14]. Recently, some bio-inspired or more complex models achieved better performance in fitting the neural representations of macaque visual cortex [1, 15, 16, 17]. [18] proposed a brain-like shallow CNN with recurrent connections to better match the macaque ventral visual stream. By mimicking the primary stage of the primate visual system, VOneNets [1] performed more robustly in image recognition while better simulating macaque V1. Moreover, the representations learned by unsupervised neural networks [16, 15] also effectively matched the neural activity of macaque ventral visual stream. Although the above work developed many bio-inspired structures, the networks are still traditional ANNs in nature. Our work introduces deep SNNs for the first time to explore the visual processing mechanisms of macaque visual system.
**The network models of mouse visual system.** Large-scale mouse neural dataset provided an experimental basis for model studies of mouse visual system [1, 15]. [17] conducted comparisons between the representations of mouse visual cortex and the VGG16 trained on the ImageNet dataset. In [1], they developed a single neural network to model both the dorsal and ventral pathways with showing the functional specializations. What's more, a large survey of advanced deep networks [13] revealed some hierarchy and functional properties of mice. Similar to the studies of macaque visual system, deep SNNs have never been used to model the mouse visual system. In this work, we not only use SNNs as one of the candidates to fit the representations of mouse visual cortex, but also conduct direct comparisons between macaques and mice to further investigate the functional hierarchy and mechanisms of the two species.
## Methods
### Neural Datasets
Our work is conducted with three neural datasets. These datasets are recorded from two species under three types of stimuli. More specifically, there are neural responses of mouse visual cortex to natural scene stimuli, and responses of macaque visual cortex to face image and synthetic image stimuli.
Figure 1: To conduct neural representation similarity experiments, we apply three similarity metrics to a layer-by-layer comparison between the responses of models and the neural activities of visual cortex.

**Allen Brain mouse dataset.** It is part of the Allen Brain Observatory Visual Coding dataset [15] collected using Neuropixel probes from 6 regions simultaneously in mouse visual cortex. Compared to two-photon calcium imaging, Neuropixel probes simultaneously record the spikes across many cortical regions with high temporal resolution. In these experiments, mice are presented with 118 250-ms natural scene stimuli in random order 50 times. Hundreds to thousands of neurons are recorded for each brain region. To get the stable neurons, we first concatenate the neural responses (average number of spikes in 10-ms bins across time) under 118 images for each neuron, and then preserve the neurons whose split-half reliability across 50 trials reaches at least 0.8.
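A minimal sketch of this neuron-selection criterion is shown below; averaging the correlation over many random half-splits of the trials is an assumption about the exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

def split_half_reliability(responses, n_splits=100, seed=0):
    """responses: (n_trials, n_response_bins) for one neuron, e.g. 50 trials
    by the concatenated 10-ms-bin responses under the 118 images. Returns
    the mean Pearson r between mean responses of two random trial halves."""
    rng = np.random.default_rng(seed)
    n_trials = responses.shape[0]
    rs = []
    for _ in range(n_splits):
        perm = rng.permutation(n_trials)
        half_a = responses[perm[: n_trials // 2]].mean(axis=0)
        half_b = responses[perm[n_trials // 2 :]].mean(axis=0)
        rs.append(pearsonr(half_a, half_b)[0])
    return float(np.mean(rs))  # keep neurons with reliability >= 0.8
```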
**Macaque-Face dataset.** This dataset [12] is composed of neural responses of 159 neurons in the macaque anterior medial (AM) face patch under 2,100 real face stimuli, recorded with Tungsten electrodes. For this dataset, we compute the average number of spikes in a time window of 50-350ms after stimulus onset and exclude eleven neurons with noisy responses by assessing the neurons' noise ceiling. The details of the preprocessing procedure are the same as [12].
**Macaque-Synthetic dataset.** This dataset [13] is also about macaque neural responses which are recorded by electrodes under 3,200 synthetic image stimuli, and used for neural prediction in the initial version of Brain-Score [14]. The image stimuli are generated by adding a 2D projection of a 3D object model to a natural background. The objects consist of eight categories, each with eight subclasses. The position, pose, and size of each object are randomly selected. 88 neurons of V4 and 168 neurons of IT are recorded. The neural responses are preprocessed to the form of average firing rate and can be downloaded from Brain-Score.
### Models
Since the core visual function of macaque and mouse visual cortex is to recognize objects, the basic premise of model selection is that the model has good performance on object recognition tasks (e.g. classification on ImageNet). Based on this premise, we employ 12 SNNs, 43 CNNs, and 26 vision transformers, all of which are pretrained on the ImageNet dataset and perform well in the classification task. As for SNNs, we use SEW ResNet as the base model, which is the deepest and SOTA directly trained SNN [12]. Furthermore, by combining the residual block used in SEW ResNet and the hierarchy of the visual cortex, we build several new SNNs and train them on the ImageNet using SpikingJelly [12] (see Appendix A for model structures and the details of model training). As for CNNs and vision transformers, we use 44 models from the Torchvision model zoo [12], 22 models from the Timm model zoo [13] and 3 models from the brain-like CNNs, CORnet family [11]. In the feature extraction procedures of all models, we feed the same set of images used in biological experiments to the pretrained models and obtain features from all chosen layers. Different from CNNs and vision transformers, the features of SNNs are spikes in multiple time steps.
### Similarity Metrics
To obtain the representation similarity between biological visual cortex and computational models, we apply three similarity metrics to computing similarity scores: representational similarity analysis (RSA) [15, 16, 17], regression-based encoding method [14, 15, 16] and singular vector canonical correlation analysis (SVCCA) [12, 13]. RSA has already been widely used to analyze neural representations of a model and a brain to different stimuli at the population level, while the regression-based encoding method directly fits the model features to neural activity data. SVCCA is originally proposed to compare features of deep neural networks, and then [13] used it to compare representation matrices from mouse visual cortex and DNNs, which demonstrated its effectiveness.
With the same model and same cortical region, we use these metrics for a layer-by-layer comparison to compute the similarity scores. The maximum similarity score across layers for a given cortical region is considered to be the level of representation similarity between the model and the cortical region. Finally, in a given dataset, we take the average score of all cortical regions as the final similarity score for each model, which gives the overall model rankings. The implementation of each similarity metric is as follows.
**RSA.** For two response matrices \(R\in\mathbb{R}^{n\times m}\) from each layer of models and each cortical region, where \(n\) is the number of units/neurons and \(m\) is the number of stimuli, we calculate the representational similarity between the responses to each pair of image stimuli using the Pearson correlation coefficient \(r\), yielding two representational dissimilarity matrices (\(RDM\in\mathbb{R}^{m\times m}\), where each element is the correlation distance \(1-r\)). Then, the Spearman rank correlation coefficient between the flattened upper triangles of these two matrices is the metric score.
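A compact sketch of this computation, using SciPy's correlation distance so that `pdist` directly returns the flattened upper triangle of each RDM:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_resp, neural_resp):
    """Sketch of the RSA metric described above. Both inputs have shape
    (n_units, m_stimuli); pdist on the transposed matrices yields the
    flattened upper-triangle RDM with entries 1 - r."""
    rdm_model = pdist(model_resp.T, metric="correlation")
    rdm_brain = pdist(neural_resp.T, metric="correlation")
    return spearmanr(rdm_model, rdm_brain).correlation
```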
**Regression-Based Encoding Method.** Firstly, we run truncated singular value decomposition (TSVD) to reduce the feature dimension of model layers to 40. Secondly, the features after dimensionality reduction are fitted to the representations of each neuron by ridge regression. Finally, we compute the Pearson correlation coefficient between the predicted and ground-truth representations of each neuron and take the mean of all correlation coefficients as the metric score. More specifically, we apply leave-one-out cross-validation to obtain predicted representations of each neuron. For simplicity, we name this method 'TSVD-Reg'.
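The following sketch mirrors the TSVD-Reg steps just described; the ridge penalty `alpha` is an illustrative value, not taken from the paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def tsvd_reg_score(model_feat, neural_resp, alpha=1.0):
    """model_feat: (m_stimuli, n_features); neural_resp: (m_stimuli, n_neurons).
    Reduce features to 40 dimensions, fit each neuron by ridge regression with
    leave-one-out cross-validation, and average the per-neuron Pearson r."""
    X = TruncatedSVD(n_components=40).fit_transform(model_feat)
    rs = []
    for j in range(neural_resp.shape[1]):
        pred = cross_val_predict(Ridge(alpha=alpha), X, neural_resp[:, j],
                                 cv=LeaveOneOut())
        rs.append(pearsonr(pred, neural_resp[:, j])[0])
    return float(np.mean(rs))
```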
**SVCCA.** For both the responses of model layers and cortical regions, we use TSVD to reduce the dimension of unit/neuron to 40, yielding two reduced representation matrices. Then we apply canonical correlation analysis (CCA) to these two matrices to obtain a vector of correlation coefficients (the length of the vector is 40). The metric score is the mean of the vector. Because of the invariance of CCA to affine transformations [12], in this procedure, we only need to ensure that the stimulus dimension is consistent and aligned, even if the unit/neuron dimension is different. Dimensionality reduction plays an important role
in this method to make the number of model features comparable to the number of neurons in cortical regions, since the former usually far exceeds the latter. In addition, dimensionality reduction helps to determine which features are important to the original data, while CCA suffers in important feature detection. Using just CCA performs badly, which has been proven by [1].
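A short sketch of the SVCCA pipeline follows; scikit-learn's iterative CCA stands in here for a closed-form implementation.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cross_decomposition import CCA

def svcca_score(model_resp, neural_resp, k=40):
    """Both inputs have stimuli as rows; the unit dimension of each is
    reduced to k = 40 with TSVD, then the k CCA correlations are averaged."""
    A = TruncatedSVD(n_components=k).fit_transform(model_resp)
    B = TruncatedSVD(n_components=k).fit_transform(neural_resp)
    U, V = CCA(n_components=k, max_iter=2000).fit_transform(A, B)
    corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(k)]
    return float(np.mean(corrs))
```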
## Results
### Comparisons of Representation Similarity Scores between SNNs and Other Types of Models
To check how similar the models are to the visual cortex's mechanisms in visual processing, we rank the final similarity scores of all models and conduct comparisons among three types of models (CNNs, SNNs, and vision transformers). Specially, we focus on comparing SNN (SEW ResNet) and CNN (ResNet) with the same depth and almost the same architectures (Figure 2). The final similarity score of a model is the average similarity score across all cortical regions. (The overall rankings can be found in Appendix B and the comparisons among three types of models are shown in Appendix C.)
**Allen brain mouse dataset.** No single model achieves the highest final similarity scores with all three metrics. For a fair comparison, we apply the paired t-test to SEW ResNet and ResNet with the same depth. For all three metrics, SEW ResNet performs better than ResNet by a large margin (\(t=5.857\), \(p=0.004\); \(t=7.666\), \(p=0.002\); \(t=7.592\), \(p=0.002\))\({}^{1}\).
Footnote 1: The results of the three similarity metrics are separated by semicolons, in the order of SVCCA, TSVD-Reg, and RSA. Other results that appear below also correspond to the three metrics in this order, unless the correspondence is stated in the text.
**Macaque-Face dataset.** For both SVCCA and TSVD-Reg, Wide-SEW-ResNet14 and Wide-SEW-ResNet8 achieve the first and second highest final similarity scores respectively. But for RSA, TNT-S and Inception-ResNet-V2 take their place and outperform other models by a large margin. As for SEW ResNet and ResNet, the former performs significantly better than the latter for both SVCCA and TSVD-Reg (\(t=8.195\), \(p=0.001\); \(t=7.528\), \(p=0.002\)). However, the difference is not significant for RSA (\(t=1.117\), \(p=0.327\)). Specifically, the similarity score of SEW ResNet152 is only slightly higher than that of ResNet152, and at the depth of 50 and 101, SEW ResNet's scores are lower than ResNet's.
**Macaque-Synthetic dataset.** Similar to the results of the Allen Brain dataset, no model performs best for all three metrics. SEW ResNet performs moderately better than ResNet (\(t=3.354\), \(p=0.028\); \(t=3.824\), \(p=0.019\); \(t=2.343\), \(p=0.079\)). The only exception is that SEW ResNet18 performs worse than ResNet18 for RSA.
Further, to check the details of comparison between the SNNs and their CNN counterparts, we analyze the trajectories of similarity score across model layers (Figure 3). As for ResNet and SEW ResNet with the same depth, the trends of their similarities across model layers are almost the same, but the former's trajectory is generally below the latter's. In other words, the similarity scores of SEW ResNet are higher than those of ResNet at almost all layers.
Figure 2: For three datasets and three similarity metrics, each point indicates the final representation similarity score of a model. Each pair of SEW ResNet and ResNet with the same depth are linked by a gray solid line. In almost all conditions, SEW ResNet outperforms ResNet by a large margin.

Figure 3: For three datasets and three similarity metrics, we plot the trajectories of similarity score with model layer depth. The models are divided into two groups: ResNet and SEW ResNet. The normalized layer depth ranges from 0 (the first layer) to 1 (the last layer). Because the depths of models are not the same, we first discretize the normalized depth into 50 bins, and then apply the cubic spline interpolation to the scores of each model, yielding the smooth trajectories shown in the plot. The fine, semitransparent lines are the trajectories of each model. The thick lines are the average trajectories among each group.

Taken together, the results suggest that when the overall architectures and depth are the same, SNNs with spiking neurons perform consistently better than their CNN counterparts, with an average increase of \(6.6\%\). Besides, SEW ResNet14 also outperforms the brain-like recurrent CNN, CORnet-S, with the same number of layers (see more details in Appendix B). Two properties of SNNs might contribute to the higher similarity scores. On the one hand, IF neurons are the basic neurons of spiking neural networks. The IF neuron uses several differential equations to roughly approximate the membrane potential dynamics of biological neurons, which provides a more biologically plausible spike mechanism for the network. On the other hand, the spiking neural network is able to capture temporal features by incorporating both time and binary signals, just like the biological visual system during information processing.
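For reference, the leaky integrate-and-fire (LIF) form of these membrane dynamics is commonly written as a subthreshold differential equation with a reset rule; the plain IF neuron is the special case without the leak term:

\[\tau\frac{dV(t)}{dt}=-\left(V(t)-V_{\mathrm{reset}}\right)+X(t),\qquad V(t)\geq V_{\mathrm{th}}\;\Rightarrow\;\text{emit a spike and set }V(t)\leftarrow V_{\mathrm{reset}},\]

where \(V(t)\) is the membrane potential, \(X(t)\) the input current, \(\tau\) the membrane time constant, and \(V_{\mathrm{th}}\) the firing threshold.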
### Best Layers across Cortical Regions Reveal Functional Hierarchy in the Visual Cortex of Macaques and Mice
To figure out the distinctions in the functional hierarchy between macaques and mice, for each cortical region, we obtain the normalized depth of the layer that achieves the highest similarity score in each model. Then, we divide models (excluding vision transformers) into two groups based on their depths and conduct investigations on these two groups separately. A nonparametric ANOVA is applied to each group for testing whether layer depths change significantly across cortical regions.
For mouse visual cortex (Figure 4 (a)), taking the deep model group as an example, ANOVA shows overall significant changes in depth across cortical regions for TSVD-Reg and RSA (Friedman's \(\chi^{2}=49.169\), \(p=2.0\times 10^{-9}\); \(\chi^{2}=19.455\), \(p=0.002\)). But there is no significant change for SVCCA (\(\chi^{2}=8.689\), \(p=0.122\)). According to these results, the differences in depth across regions are indeterminate and irregular. Meanwhile, the trends of layer depth between some regions contradict the hierarchy observed in physiological experiments of mice (those between VISp and VISrl for TSVD-Reg and between VISal and VISpm for RSA). However, for macaque visual cortex (Figure 4 (b)), there are significant differences (\(t=-5.451\), \(p=6.5\times 10^{-6}\); \(t=-8.312\), \(p=2.8\times 10^{-9}\); \(t=-3.782\), \(p=6.9\times 10^{-4}\), also taking the deep model group as an example) between V4 and IT, and the trend is consistent with the information processing hierarchy in primate visual cortex.
The comparative analyses of the best layer depths of the shallow and deep model groups also exhibit the differences between macaques and mice. For mouse visual cortex, the best layer depths of shallow models are significantly higher than those of deep models. Compared to deep models, most shallow models achieve the top similarity scores in intermediate and even later layers. In contrast, for macaque visual cortex, the depth of models has little effect on the depth of the most similar layer. What's more, we find that the most similar layer of mouse visual cortex always occurs after the \(28\times 28\) feature map is downsampled to \(14\times 14\), which leads to the difference in layer depths between shallow and deep models. Nevertheless, the best layer of macaque IT appears in the last part of networks, where the feature map has been downsampled more times.
Figure 4: For three datasets, we plot the normalized depth of the layer that achieves the top similarity score in each cortical region and each metric. Based on model depth, neural networks are divided into two groups: shallow models with less than 50 layers and deep models with more than 50 layers. The normalized layer depth ranges from 0 (the first layer) to 1 (the last layer). Each small point indicates an individual model. The large point indicates the average depth across a group.

In summary, our results might reveal two distinctions in the functional hierarchy between macaques and mice. First, there is a distinct functional hierarchical structure of the macaque ventral visual pathway, while there might be no clear sequential functional hierarchy in mouse visual cortex. One explanation is that the mouse visual cortex is organized into a parallel structure and the functions of mouse cortical regions are more generalized and homogeneous than those of macaques. Another possibility would be that even though sequential relations exist among mouse cortical regions as proposed in anatomical and physiological work, they are too weak for the current deep neural networks to capture. Additionally, mice perform more complex visual tasks than expected with a limited brain capacity (Djurdjevic et al., 2018). Consequently, the neural responses of mouse visual cortex may contain more information unrelated to the object recognition that neural networks focus on. Secondly, it is well known that units in neural networks get larger receptive fields after downsampling, and through the analyses of differences between the two groups of models based on depth, we find that the feature map of the best layer for mouse is downsampled fewer times than that for macaque. Based on these results, we provide computational evidence that the increase in receptive field size across cortical regions of the mouse visual pathway is smaller than that across the macaque visual pathway, which echoes some physiological work (Siegle et al., 2021; Zhu and Yang, 2013).
## Structures and Mechanisms of Models Reveal Processing Mechanisms in the Visual Cortex of Macaques and Mice
To explore the processing mechanisms in the visual cortex of macaques and mice, we investigate the model properties from the whole to the details. As shown in Tables 1 and 2, we first measure the correlation between the similarity scores and the sizes (i.e. the number of trainable parameters and the depth) of network models. For the Allen Brain mouse dataset, there are significant negative correlations between the similarity scores and the number of parameters for all three metrics, while there is no correlation with the depth. Conversely, for the two macaque neural datasets, the similarity scores are highly correlated with the depth of networks, but not with the number of parameters. Specifically, there is a positive correlation for the Macaque-Face dataset and a negative correlation for the Macaque-Synthetic dataset. (We also apply linear regression to analyze the correlation between the similarity scores and model size; the results are consistent with Spearman's rank correlation and are shown in Appendix E.) Based on these results, we further investigate more detailed properties of the neural networks to explain the processing mechanisms in the visual cortex.
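A minimal sketch of this correlation measurement, assuming one similarity score and one parameter count per model (the arrays are placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays with one entry per network model.
similarity_scores = np.array([0.42, 0.38, 0.51, 0.30, 0.45, 0.33])
num_parameters = np.array([1.2e6, 5.0e6, 0.7e6, 2.5e7, 3.1e6, 1.1e7])

r, p = spearmanr(similarity_scores, num_parameters)
print(f"Spearman r = {r:.3f}, p = {p:.3g}")
```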
For the mouse dataset, on the one hand, the best layer depths show non-significant changes across the mouse cortical regions, as mentioned in the previous section. On the other hand, the similarity scores of the mouse dataset are only correlated with the number of model parameters but not with the depth of models. This raises the question of whether any detailed structures in the neural networks help to reduce the number of parameters and improve similarity to mouse visual cortex. Therefore, we explore the commonalities between models that have the top 20% representation similarities (see Appendix D) for the Allen Brain dataset. As expected, the top models contain similar structures, such as the fire module, the inception module, and depthwise separable convolution. All these structures essentially process information through multiple branches/channels and then integrate the features from each branch. The models with this type of structure outperform other models (\(t=2.411\), \(p=0.024\); \(t=3.030\), \(p=0.007\); \(t=1.174\), \(p=0.247\)). Moreover, we apply the depthwise separable convolution to SNNs, which yields a positive effect: the representation similarity of Spiking-MobileNet is higher than that of SEW-ResNet50 with a similar depth (+0.8%; +3.9%; +12.1%). In fact, some studies using multiple pathways simulate the functions of mouse visual cortex to some extent (Shi et al., 2022; Nayebi et al., 2022). Our results further suggest that not only might the mouse visual cortex be organized into parallel structures, but there may also be extensive parallel information processing streams between each pair of cortical regions (Wang, Sporns, and Burkhalter, 2012; Siegle et al., 2021).
For the two macaque datasets with different stimuli, not only are the model rankings significantly different, but also the correlations between the similarity scores and the model depth are totally opposite. These results corroborate the following two processing mechanisms in macaques: the ventral visual stream of primate visual cortex possesses canonical coding principles at different stages; the brain exhibits a high degree of functional specialization, such as the visual recognition of faces and other objects, which is reflected in the different neural responses of the corresponding region (although the face patch AM is a sub-network of IT, they differ in the neural representations). Besides, as shown in Figure 5,
| Dataset \ Metric | SVCCA | TSVD-Regression | RSA |
| --- | --- | --- | --- |
| **Allen Brain mouse dataset** | \(r=-0.654\), \(p=2.0\times 10^{-6}\) | \(r=-0.596\), \(p=2.4\times 10^{-5}\) | \(r=-0.548\), \(p=1.4\times 10^{-4}\) |
| **Macaque-Face dataset** | — | — | — |
| **Macaque-Synthetic dataset** | — | — | — |

Table 1: The correlation between the similarity scores and the number of parameters. \(r\) is Spearman's rank correlation coefficient. "—" indicates that there is no significant correlation.
| Dataset \ Metric | SVCCA | TSVD-Regression | RSA |
| --- | --- | --- | --- |
| **Allen Brain mouse dataset** | — | — | — |
| **Macaque-Face dataset** | \(r=0.657\), \(p=4.2\times 10^{-6}\) | \(r=0.634\), \(p=1.1\times 10^{-5}\) | \(r=0.527\), \(p=4.7\times 10^{-4}\) |
| **Macaque-Synthetic dataset** | — | \(r=-0.408\), \(p=0.009\) | \(r=-0.575\), \(p=1.1\times 10^{-4}\) |

Table 2: The correlation between the similarity scores and the model depth. \(r\) is Spearman's rank correlation coefficient. "—" indicates that there is no significant correlation.
the similarity scores of vision transformers reach their maximum in the early layers and then decrease. In contrast, the scores of CNNs and SNNs keep trending upwards, reaching their maximum in almost the last layer. On the other hand, Appendix C shows that vision transformers perform well on the Macaque-Face dataset but poorly on the Macaque-Synthetic dataset. Consider the feature extraction mechanism of vision transformers: the image is divided into several patches, and each patch, as well as the relations between patches, is encoded by self-attention. This mechanism is effective for face images that are full of useful information. However, the synthetic image consists of a central target object and a naturalistic background. When vision transformers are fed with this type of stimulus, premature integration of global information can lead to model representations containing noise from the unrelated background. What's more, when we take all models with the top 20% representation similarities as a whole for analysis, as described in the above paragraph, the properties that enable networks to achieve higher neural similarity are not yet clear. Taken together, the computational mechanisms of the better models may reveal core processing divergence for different types of stimuli in the visual cortex.
## Discussion
In this work, we take large-scale neural representation similarity experiments as a basis, aided by analyses of the similarities across models and the visual cortical regions. Compared to other work, we introduce SNNs in the similarity analyses with biological neural responses for the first time, showing that SNNs achieve higher similarity scores than CNNs that have the same depth and almost the same architectures. As analyzed in Section 3.1, two properties of SNNs might serve as the explanations for their high similarity scores.
The subsequent analyses of the models' simulation performance and structures indicate significant differences in functional hierarchies between macaque and mouse visual cortex. As for macaques, we observed a clear sequential hierarchy. However, as for mouse visual cortex, some work [15] shows that the trend of model feature complexity roughly matches the processing hierarchy, but other work [23, 24] suggests that the cortex is organized into a parallel structure. Our results are more supportive of the latter. Furthermore, we provide computational evidence not only that the increase in receptive field size across the mouse visual pathway is smaller than that across the macaque visual pathway, but also that there may be multiple pathways with parallel processing streams between mouse cortical regions. Our results also clearly reveal that the processing mechanisms of macaque visual cortex differ across various stimuli. These findings provide us with new insights into the visual processing mechanisms of macaques and mice, which are the two species that dominate the research on biological vision systems and differ considerably from each other.
Compared to CNNs, the study of task-driven deep SNNs is just in its initial stage. Although we demonstrate that SNNs outperform their CNN counterparts, SNNs exhibit properties similar to CNNs in the further analyses. In this work, we only build several new SNNs by taking hints from the biological visual hierarchy, while many well-established structures and learning algorithms in CNNs have not been applied to SNNs yet. In addition, the neural datasets used in our experiments are all collected under static image stimuli and thus, to a certain extent, lack rich dynamic information, which may not fully exploit the properties of SNNs. Given that SNNs perform well in the current experiments, we hope to explore more of the potential of SNNs in future work.
In conclusion, as more biologically plausible neural networks, SNNs may serve as a shortcut to explore the biological visual cortex. With studies on various aspects of SNNs, such as model architectures, learning algorithms, processing mechanisms, and neural coding methods, it's highly promising to better explain the sophisticated, complex, and diverse vision systems in the future.
Figure 5: For Macaque-Synthetic dataset, trajectories of similarity score with model layer depth are plotted. The models are divided into two groups: ViT and CNN&SNN. The normalized layer depth ranges from 0 (the first layer) to 1 (the last layer). The calculation and plotting of the trajectories are the same as Figure 3.
## Ethics Statement
The biological neural datasets used in our experiments are obtained from public datasets or from published papers with the authors' consent.
## Acknowledgements
We thank L. Chang for providing Macaque-Face dataset. This work is supported by the National Natural Science Foundation of China (No.61825101, No.62027804, and No.62088102).
|
2303.00439 | Detection of Berezinskii--Kosterlitz--Thouless transitions for the
two-dimensional $q$-state clock models with neural networks | Using the technique of supervised neural networks (NN), we study the phase
transitions of two-dimensional (2D) 6- and 8-state clock models on the square
lattice. The employed NN has only one input layer, one hidden layer of 2
neurons, and one output layer. In addition, the NN is trained without any prior
information about the considered models. Interestingly, despite its simple
architecture, the built supervised NN not only detects both the two
Berezinskii--Kosterlitz--Thouless (BKT) transitions but also determines the
transition temperatures with reasonable high accuracy. It is remarkable that a
NN, which has an extremely simple structure and is trained without any input
from the studied models, can be employed to study topological phase
transitions. The outcomes shown here as well as those previously demonstrated
in the literature suggest the feasibility of constructing a universal NN that
is applicable to investigate the phase transitions of many systems. | Yaun-Heng Tseng, Fu-Jiun Jiang | 2023-03-01T11:55:55Z | http://arxiv.org/abs/2303.00439v1 | Detection of Berezinskii-Kosterlitz-Thouless transitions for the two-dimensional \(q\)-state clock models with neural networks
###### Abstract
Using the technique of supervised neural networks (NN), we study the phase transitions of two-dimensional (2D) 6- and 8-state clock models on the square lattice. The employed NN has only one input layer, one hidden layer of 2 neurons, and one output layer. In addition, the NN is trained without any prior information about the considered models. Interestingly, despite its simple architecture, the built supervised NN not only detects both Berezinskii-Kosterlitz-Thouless (BKT) transitions, but also determines the transition temperatures with reasonably high accuracy. It is remarkable that a NN, which has an extremely simple structure and is trained without any input from the studied models, can be employed to study topological phase transitions. The outcomes shown here, as well as those previously demonstrated in the literature, suggest the feasibility of constructing a universal NN that is applicable to investigate the phase transitions of many systems.
## I Introduction
When phase transitions are concerned, apart from the well-known first- and second-order phase transitions, which are related to spontaneous symmetry breaking, there is a novel type of phase transition associated with topological defects [1; 2; 3; 4; 5]. Unlike first- and second-order phase transitions, which are characterized by the behavior of so-called order parameters, this novel kind of phase transition, namely the Berezinskii-Kosterlitz-Thouless (BKT) transition, cannot be understood quantitatively by any order parameter. The two-dimensional (2D) \(XY\) model on the square lattice is a typical model exhibiting the BKT transition.
The 2D \(q\)-state clock models are simplified versions of the 2D \(XY\) model. Instead of taking continuous values like the \(XY\) spins, the clock spins are discrete. These models have very rich phase structures and hence have been studied extensively in the literature [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. For \(q\leq 4\), the \(q\)-state clock models exhibit one second-order (Ising-type) phase transition from the ferromagnetic phase to the disordered phase. For \(q\geq 5\), the models have two BKT-type transitions: one from the long-range order (LRO) phase to the pseudo-long-range order (PLRO) phase, with the related transition temperature denoted by \(T_{c}^{2}\), and the other from PLRO to the paramagnetic phase, with the associated critical temperature denoted by \(T_{c}^{1}\). The 2D \(XY\) model is recovered from the \(q\)-state clock model when \(q\rightarrow\infty\). It is well established that the value of \(T_{c}^{1}\sim 0.892\) does not change appreciably with \(q\).
Recently, machine learning (ML) techniques have been applied to many fields of physics [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. In particular, neural networks (NN) are considered to classify various phases of many-body systems. Both supervised and unsupervised NNs have been demonstrated to determine the critical points of phase transitions accurately for numerous models [28; 29; 32; 45].
The conventional NNs known in the literature have very complicated architectures. Typically, these NNs have several layers, and each layer has many independent nodes (neurons). Moreover, the associated trainings use real physical quantities, such as the spin configurations or the correlation functions, as the training sets. As a result, conducting studies with these conventional NNs demands a huge amount of computer memory and is very time consuming. In particular, the investigations are limited to small and intermediate system sizes. Because of this, the detection of topological phase transitions with NN methods is more challenging than that of phase transitions related to spontaneous symmetry breaking.
Unlike the conventional NNs, extremely simple supervised and unsupervised NNs consisting of one input layer, one hidden layer, and one output layer are constructed in Refs. [45; 46; 47]. In addition, the trainings of these unconventional NNs use no information from the studied systems. Instead, two artificially made one-dimensional (1D) configurations are employed as the training sets. As a result, the associated training procedure is easier to implement and much more efficient than the conventional approaches. We would like to emphasize that no inputs from the considered systems, such as the vortex configurations, the histograms of spin orientations, the spin correlation functions, or the raw spin configurations, are used for training these unconventional NNs. It is demonstrated that the NNs resulting from these unusual training strategies are very efficient. In other words, it takes much less time to conduct the associated NN calculations. In particular, these unconventional NNs can be used to detect the phase transitions of many three-dimensional (3D) and two-dimensional (2D) models. Finally, there is no system size restriction for these unconventional NNs, and they can be recycled to study the phase transitions of other systems not considered in Refs. [45; 46; 47].
Between the conventional supervised and unsupervised NNs, unsupervised ones are preferred when the detection of phase transitions is considered. This is because no
prior information about the critical point of the studied system is needed when one carries out an unsupervised investigation. In other words, less preparation effort is required when an unsupervised study is performed.
For the mentioned unconventional supervised and unsupervised NNs, the trainings are conducted without any prior information or input from the considered models. Consequently, the supervised one is a better choice, since it takes less time to complete the associated training and prediction processes. Due to this fact, in this study we directly adopt the supervised NN of Ref. [47] to study the phase transitions of the 2D 6- and 8-state clock models.
Interestingly, the simple supervised NN employed here not only detects both BKT-type transitions of the 6- and 8-state clock models, but also estimates the transition temperatures with reasonably good accuracy. It is remarkable that a NN trained without any input from the considered systems can successfully map out the non-trivial topological phase structures of the studied models. Similar to the unsupervised NN considered in Ref. [46], it is anticipated that the simple supervised NN used here can be directly applied to study the phase transitions of other models, such as the three-dimensional (3D) \(O(3)\) model, the 2D generalized \(XY\) model, the one-dimensional (1D) Bose-Hubbard model, and the 2D \(q\)-state ferromagnetic Potts model, without carrying out any re-training.
The rest of the paper is organized as follows. After the introduction, the considered models and the built supervised NN are described in Secs. II and III, respectively. Then we present the NN outcomes in Sec. IV. In particular, the critical temperatures of the studied phase transitions are determined with good precision. Finally, we conclude our investigation in Sec. V.
## II The considered models
The Hamiltonian of the 2D \(q\)-state clock model on the square lattice considered here has the following expression [20]
\[H=-\sum_{\langle ij\rangle}\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}, \tag{1}\]
where \(\langle ij\rangle\) refers to nearest-neighbor sites \(i\) and \(j\), and \(\vec{\sigma}_{i}\) is a vector at site \(i\) with \(\vec{\sigma}_{i}=(\cos\theta_{i},\sin\theta_{i})\). Here \(\theta_{i}=\frac{2\pi k}{q}\) with \(k=0,1,...,q-1\).
## III The constructed NN
The supervised NN, namely a multilayer perceptron (MLP), is built using Keras and TensorFlow [48; 49]. In addition, it has an extremely simple architecture. Specifically, the constructed NN has only one input layer, one hidden layer of two neurons, and one output layer. The training algorithm and optimizer are minibatch and Adam (with the learning rate set to 0.05), respectively. The activation function ReLU (softmax) is applied in the hidden (output) layer. The definitions of ReLU and softmax are given by
\[\text{ReLU}(x)=\text{max}(0,x), \tag{2}\] \[\left(\text{softmax}(x)\right)_{i}=\frac{e^{x_{i}}}{\sum_{j}e^{x _{j}}}. \tag{3}\]
One-hot encoding, flattening, and \(L_{2}\) regularization are used as well. Finally, the loss function considered is the categorical crossentropy \(C\) which is defined as
\[C=-\frac{1}{n}\sum_{x}\sum_{j}^{2}y_{j}\ln a_{j}, \tag{4}\]
where \(n\) is the number of objects included in each batch and \(a_{j}\) are the outcomes obtained after applying all the layers. Moreover, \(x\) and \(y\) are the training inputs and the corresponding designed labels, respectively.
Figure 1 is adopted from Ref. [47] and is the cartoon representation of the employed supervised NN.
The training of the supervised NN is conducted using 200 copies of two artificially made one-dimensional configurations (each consisting of 200 sites) as the training set. These configurations contain no information from the considered models. Specifically, one (the other) configuration has 1 (0) as the value of all of its elements. Given this training set, the labels employed are the two-component vectors (0,1) and (1,0). On a server with two Opteron 6344 CPUs and 96 GB of memory, the training takes only 24 seconds.
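A minimal Keras sketch of this setup is given below; the \(L_{2}\) strength, batch size, and number of epochs are assumptions, since the text does not specify them:

```python
import numpy as np
import tensorflow as tf

# Training set: 200 copies each of the two artificial 1D configurations
# (200 sites of all ones and 200 sites of all zeros).
x = np.concatenate([np.ones((200, 200)), np.zeros((200, 200))])
y = np.concatenate([np.tile([0.0, 1.0], (200, 1)),
                    np.tile([1.0, 0.0], (200, 1))])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(200,)),
    tf.keras.layers.Dense(2, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.05),
              loss="categorical_crossentropy")
model.fit(x, y, batch_size=40, epochs=20, verbose=0)
```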
## IV Numerical results
### Preparation of configurations for the NN prediction
Using the Wolff algorithm [50], we have generated several thousand configurations for the 6- and 8-state clock models at various temperatures \(T\) and linear system sizes \(L\). For each produced clock configuration, the angles \(\theta\) of 2000 randomly chosen sites are stored. From these stored variables, two hundred are picked randomly, and the resulting \(\theta\) mod \(\pi\) are used to build a 1D configuration consisting of 200 sites, which is then employed for the NN prediction.
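A minimal sketch of this preparation step, assuming the angles of one Monte Carlo configuration are available as a flat array (the configuration here is a random placeholder, not a Wolff-generated one):

```python
import numpy as np

def make_nn_input(thetas, n_store=2000, n_sites=200, rng=None):
    """Build one 1D configuration of 200 sites for the NN prediction."""
    rng = rng or np.random.default_rng()
    stored = rng.choice(thetas, size=n_store, replace=False)   # store 2000 angles
    picked = rng.choice(stored, size=n_sites, replace=False)   # pick 200 of them
    return np.mod(picked, np.pi)                               # use theta mod pi

# Example: angles of an L = 64 configuration of the 6-state clock model.
q, L = 6, 64
thetas = 2.0 * np.pi * np.random.randint(0, q, size=L * L) / q
config = make_nn_input(thetas)
```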
### The NN results associated with 6-state clock model
The magnitude \(R\) of the NN output vectors as a function of \(T\) for various \(L\) for the 6-state clock model is shown in fig. 2. The figure implies that there are possibly
two phase transitions: one before and one after \(T\sim 0.6\). The dashed vertical line is the expected transition temperature \(T_{c}^{2}\) from LRO to PLRO. From the figure, one observes that there is a range of \(T\) where the data of various \(L\) collapse onto a single (universal) curve. In addition, such a universal curve seems to end at \(T_{c}^{2}\). This can be considered a NN method of estimating \(T_{c}^{2}\). Based on this idea, the NN prediction for \(T_{c}^{2}\) is found to be \(0.67(2)\) (see fig. 3). The obtained \(T_{c}^{2}\) agrees reasonably well with the MC result of \(0.681\) determined in Ref. [20]. It should be pointed out that this NN method of calculating \(T_{c}^{2}\) is not of high precision. Despite this, it leads to a NN prediction of \(T_{c}^{2}\) of acceptable quality.
After establishing a NN method of estimating \(T_{c}^{2}\), we turn to the determination of the transition temperature \(T_{c}^{1}\) from PLRO to the paramagnetic phase. This transition is similar to that of the 2D classical \(XY\) model. Hence we will use the standard analysis procedure to calculate \(T_{c}^{1}\). First of all, one notices that \(R\) will take the value \(1/\sqrt{2}\sim 0.70712\) at extremely high temperatures. As a result, for a given \(L\), we consider the intersection between the related \(R\) data (as a function of \(T\)) and the curve \(2T/(\pi q)+0.70712\) with \(q=6\) to be the estimated \(T_{c}^{1}(L)\). With this approach, \(T_{c}^{1}(L)\) as a function of \(1/L\) is shown in fig. 4.
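A minimal sketch of this intersection estimate, with placeholder data standing in for the measured \(R(T)\):

```python
import numpy as np

def tc1_of_L(T, R, q):
    """Crossing of R(T) with the tilted line 2T/(pi*q) + 0.70712,
    located by linear interpolation between the bracketing points."""
    diff = np.asarray(R) - (2.0 * np.asarray(T) / (np.pi * q) + 0.70712)
    i = np.flatnonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    return T[i] - diff[i] * (T[i + 1] - T[i]) / (diff[i + 1] - diff[i])

# Placeholder data roughly mimicking the R(T) curve near the transition.
T = np.linspace(0.80, 1.00, 11)
R = np.linspace(0.85, 0.74, 11)
print(tc1_of_L(T, R, q=6))
```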
It is anticipated that the \(T_{c}^{1}(L)\) should fulfill the following ansatz [51; 52]
\[T_{c}^{1}(L)=T_{c}^{1}+\frac{b}{\left(\log\left(L\right)\right)^{2}}, \tag{5}\]
where \(b\) is some constant. A fit using the data of fig. 4 and above ansatz leads to \(T_{c}^{1}=0.890(5)\). The obtained \(T_{c}^{1}=0.890(5)\) agrees quantitatively with the expected \(T_{c}^{1}\sim 0.892\).
Figure 1: The MLP employed in this study.

Figure 2: \(R\) as functions of \(T\) for various \(L\) for the 6-state clock model. The dashed horizontal line is \(0.70712\). The vertical dashed and solid lines are the expected transition temperatures \(T_{c}^{2}\) and \(T_{c}^{1}\). The tilted line is \(2T/(\pi q)+0.70712\) with \(q=6\).

Figure 3: The smooth single (universal) curve formed by data collapse of various \(L\) for the 6-state clock model. The vertical dashed line is the expected \(T_{c}^{2}\).

We would like to point out that analytically the finite-size scaling ansatz for the transition temperature \(T_{\rm BKT}\) of a BKT phase transition is given by
\[T_{\rm BKT}(L)=T_{\rm BKT}+a\frac{T_{\rm BKT}}{(\log{(L)}+c)^{2}}, \tag{6}\]
where \(a\) and \(c\) are some constants. A fit using this ansatz and the data of fig. 4 leads to \(T_{c}^{1}=0.893(15)\) which also matches well with \(T_{c}^{1}\sim 0.892\).
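Both ansatzes, Eqs. (5) and (6), can be fitted with standard least squares; below is a minimal sketch for Eq. (6), using placeholder data in place of the data of fig. 4:

```python
import numpy as np
from scipy.optimize import curve_fit

def bkt_ansatz(L, Tbkt, a, c):
    # Eq. (6): T_BKT(L) = T_BKT + a * T_BKT / (log(L) + c)^2
    return Tbkt + a * Tbkt / (np.log(L) + c) ** 2

# Placeholder (L, T_c^1(L)) pairs standing in for the data of fig. 4.
L = np.array([64.0, 128.0, 256.0, 512.0, 1024.0])
TcL = np.array([0.995, 0.975, 0.960, 0.950, 0.942])

popt, pcov = curve_fit(bkt_ansatz, L, TcL, p0=(0.89, 1.0, 0.0))
print(f"T_BKT = {popt[0]:.4f} +/- {np.sqrt(pcov[0, 0]):.4f}")
```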
### The NN results associated with 8-state clock model
The magnitude \(R\) of the NN output vectors as a function of \(T\) for various \(L\) for the 8-state clock model is shown in fig. 5. The conventions for the dashed and solid lines in the figure are similar to those used in fig. 2. Following the same procedures used to determine the transition temperatures \(T_{c}^{2}\) and \(T_{c}^{1}\) of the 6-state clock model, the values of \(T_{c}^{2}\) and \(T_{c}^{1}\) for the 8-state clock model are calculated to be \(0.40(2)\) and \(0.884(7)\), respectively; see figs. 6 and 7. We would like to point out that the tilted curve in fig. 5 is given by \(2T/\left(\pi q\right)+0.70712\) with \(q=8\), and \(T_{c}^{1}=0.884(7)\) is obtained with data of \(192\leq L\leq 1024\). The obtained \(T_{c}^{2}=0.40(2)\) is in good agreement with \(T_{c}^{2}=0.418\) found in Ref. [20]. Moreover, the determined \(T_{c}^{1}\) of the 8-state clock model matches nicely with the expected \(T_{c}^{1}\sim 0.892\) as well.
Finally, with a fit using equation (6) and data of fig. 7 (\(64\leq L\leq 1024\)), we arrive at \(T_{c}^{1}=0.890(15)\).
## V Discussions and conclusions
In this study, we calculate the transition temperatures \(T_{c}^{1}\) and \(T_{c}^{2}\) of the 6- and 8-state clock models using the technique of supervised NN. In particular, the employed supervised NN has extremely simple architecture, namely it consists of one input layer, one hidden layer of two neurons, and one output layer. The supervised NN is trained without any input from the considered models.
By considering the magnitude \(R\) of the NN output vectors as functions of \(T\) and \(L\), the values of \(T_{c}^{1}\) and \(T_{c}^{2}\) are estimated using semi-experimental methods. The obtained NN outcomes for the transition temperatures are in nice agreement with the corresponding results established in the literature.
Typically, one needs to construct a new NN whenever a new model or a different system size is considered. The outcomes shown in Refs. [45; 46] and here demonstrate that a NN with simple infrastructure can be recycled to investigate the phase transitions of many 3D and 2D models.
We would like to emphasize the fact that due to the unique features of the employed supervised NN, there is no system size restriction in our calculations. As a result, outcomes of \(L=1024\) can be reached with ease.
In Refs. [15; 20], two Binder ratios \(U_{4}\) and \(U_{m}\) are built to detect the phase transitions associated with \(T_{c}^{1}\) and \(T_{c}^{2}\), respectively. It should be noticed that \(U_{4}\) (\(U_{m}\)) cannot be used for studying the phase transition related to \(T_{c}^{2}\) (\(T_{c}^{1}\)). In particular, one needs to analytically investigate the model in order to construct suitable observables to calculate the transition temperatures. For the present study, one single quantity, namely \(R\), can reveal clear signals of both transitions. The use of \(R\) is natural and does not require any prior investigation of the targeted system(s). This feature can be considered one advantage of our NN approach.

Figure 5: \(R\) as functions of \(T\) for various \(L\) for the 8-state clock model. The dashed horizontal line is \(0.70712\). The vertical dashed and solid lines are the expected transition temperatures \(T_{c}^{2}\) and \(T_{c}^{1}\). The tilted line is \(2T/(\pi q)+0.70712\) with \(q=8\).

Figure 6: The smooth single (universal) curve formed by data collapse of various \(L\) for the 8-state clock model. The vertical dashed line is the expected \(T_{c}^{2}\).
For the traditional methods, one can directly use the associated analytic predictions to perform the finite-size scaling analysis to extract the relevant physical quantities such as the critical points. For the NN method, such theoretical formulas typically do not exist and one has to rely on semi-experimental ansatzes to conduct the tasks. In particular, to establish these semi-experimental ansatzes requires certain efforts of investigations, and the employed ansatzes may not be applicable to all cases. From this point of view, the NN method is still in the developing phase and there is room for improvement.
Finally, we would like to emphasize the fact that similar to the autoencoder and the generative adversarial network constructed in Ref. [46], it is anticipated that the simple supervised NN employed in this study can be directly applied to study the phase transitions of other models, such as the three-dimensional (3D) \(O(3)\) model, the 2D generalized \(XY\) model, the one-dimensional (1D) Bose-Hubbard model, and the 2D \(q\)-state ferromagnetic Potts model. In other words, it is likely that one can construct a universal NN that is applicable to investigate the phase transitions of many systems.
## Acknowledgement
Partial support from National Science and Technology Council (NSTC) of Taiwan (MOST 110-2112-M-003-015 and MOST 111-2112-M-003-011) is acknowledged.
|
2308.07024 | PGT-Net: Progressive Guided Multi-task Neural Network for Small-area Wet
Fingerprint Denoising and Recognition | Fingerprint recognition on mobile devices is an important method for identity
verification. However, real fingerprints usually contain sweat and moisture
which leads to poor recognition performance. In addition, for rolling out
slimmer and thinner phones, technology companies reduce the size of recognition
sensors by embedding them with the power button. Therefore, the limited size of
fingerprint data also increases the difficulty of recognition. Denoising the
small-area wet fingerprint images to clean ones becomes crucial to improve
recognition performance. In this paper, we propose an end-to-end trainable
progressive guided multi-task neural network (PGT-Net). The PGT-Net includes a
shared stage and specific multi-task stages, enabling the network to train
binary and non-binary fingerprints sequentially. The binary information is
regarded as guidance for output enhancement which is enriched with the ridge
and valley details. Moreover, a novel residual scaling mechanism is introduced
to stabilize the training process. Experiment results on the FW9395 and
FT-lightnoised dataset provided by FocalTech shows that PGT-Net has promising
performance on the wet-fingerprint denoising and significantly improves the
fingerprint recognition rate (FRR). On the FT-lightnoised dataset, the FRR of
fingerprint recognition can be declined from 17.75% to 4.47%. On the FW9395
dataset, the FRR of fingerprint recognition can be declined from 9.45% to
1.09%. | Yu-Ting Li, Ching-Te Chiu, An-Ting Hsieh, Mao-Hsiu Hsu, Long Wenyong, Jui-Min Hsu | 2023-08-14T09:19:26Z | http://arxiv.org/abs/2308.07024v1 | PGT-Net: Progressive Guided Multi-task Neural Network for Small-area Wet Fingerprint Denoising and Recognition
###### Abstract
Fingerprint recognition on mobile devices is an important method for identity verification. However, real fingerprints usually contain sweat and moisture, which leads to poor recognition performance. In addition, to roll out slimmer and thinner phones, technology companies reduce the size of recognition sensors by embedding them in the power button. Therefore, the limited size of fingerprint data also increases the difficulty of recognition. Denoising small-area wet fingerprint images into clean ones becomes crucial to improve recognition performance. In this paper, we propose an end-to-end trainable progressive guided multi-task neural network (PGT-Net). The PGT-Net includes a shared stage and specific multi-task stages, enabling the network to train binary and non-binary fingerprints sequentially. The binary information is regarded as guidance for output enhancement, which is enriched with the ridge and valley details. Moreover, a novel residual scaling mechanism is introduced to stabilize the training process. Experiment results on the FW9395 and FT-lightnoised datasets provided by FocalTech show that PGT-Net has promising performance on wet-fingerprint denoising and significantly reduces the false rejection rate (FRR) of fingerprint recognition. On the FT-lightnoised dataset, the FRR can be reduced from 17.75% to 4.47%. On the FW9395 dataset, the FRR can be reduced from 9.45% to 1.09%.
Wet fingerprint denoising, Real noise, Synthetic noise, Multi-task neural network.
## I Introduction
Biometrics-based security, such as fingerprint authentication, is proven to be both more secure and more convenient than passwords and is widely used in our daily life. The uniqueness of fingerprints allows applications including electronic payment authentication, background checks, mass disaster identification, criminal identity verification, and mobile phone unlocking. However, fingerprint images are degraded by the noise caused by sweat, grease, or water. Nowadays, mobile phones are becoming thinner, their screens are becoming larger, and the space for the fingerprint sensor has been compressed. Fig. 1 shows an example of a sensor on the side of a mobile phone that collects tiny fingerprints of size 176 x 36. The area of the captured fingerprints is small, which increases the difficulty of fingerprint recovery due to the limited information available from the input. If the fingerprint images are both small and blurry (particularly in wet conditions), the task of recovering clear fingerprints becomes even more challenging.
Recently, deep learning neural networks have achieved great success in image processing, and studies have applied deep learning models to fingerprint denoising tasks. Some focus on latent fingerprints [1, 2], which are accidentally captured at crime scenes. In [3], denoising is applied first and is then followed by fingerprint pore matching. Some studies focus on fingerprints with synthetic background noise, such as [4]. Unfortunately, to our best knowledge, few studies focus on an application common in daily life: wet fingerprint denoising, especially for tiny fingerprints with real noise caused by water, sweat, or grease. One reason may be that related datasets are not easy to obtain, since fingerprints are private information, and constructing datasets that contain clean & noisy fingerprint pairs requires a lot of time and effort. In addition, due to security concerns, fingerprint recognition demands very high denoising quality: recognition algorithms need a very low false rejection rate (FRR) to ensure that the identity can be recognized successfully. Consequently, even small changes in the fingerprint image may lead to failure in fingerprint identification.

Fig. 1: (a) Sensor on the side of a mobile phone that collects tiny fingerprints. (b) A fingerprint example collected by the sensor.
In this work, we developed the progressive guided multi-task network (PGT-Net) for wet fingerprint denoising, together with several techniques to improve the model's performance. First, in the multi-task architecture, we use binary fingerprints as a supporting (guided) task that guides the main task toward more precise results. Second, our well-designed data flow yields outstanding performance, as it provides multiple kinds of helpful information, such as basic feature maps and the features shared between binary and non-binary fingerprints, and uses this information for fingerprint denoising. Third, the novel residual scaling helps stabilize the training loss. Fourth, the binary progressive guided task produces precise binary fingerprint contours, which allow the main task to learn how to restore the fingerprint better. Last but not least, the optimized loss function adds a structural similarity index measure (SSIM) loss and a Laplacian loss, which significantly improves the denoising results. In this work, we present the PGT-Net-block-84 and PGT-Net-Edge models for heavyweight and lightweight applications, respectively.
Fig. 2 shows an example of denoised fingerprints that fail to be recognized, together with the corresponding input noisy image and the ground-truth image. A red cross means a fingerprint fails to be identified, while a green circle represents a fingerprint that can be identified successfully. As shown in Fig. 2, the noisy fingerprints in the first column fail to be recognized since the finger is covered with water, grease, or sweat. The images in the second and third columns are the denoised results of the FPD-M-Net [4] and the single-task PGT-Net-block-84 model; the denoising quality is good enough that the denoised fingerprints are very similar to the ground truth, but they still cannot be recognized. In contrast, our proposed multi-task PGT-Net-block-84 neural network not only removes real-world noise but also keeps the details of the contour, succeeding in passing the recognition, as shown in the fourth column.
To further show the different results between models, we enlarged the critical parts of the fingerprints that affect the recognition results, as shown in Fig. 3. The proposed multi-task PGT-Net-block-84 has the capability of not only denoising the fingerprint, and also making it precise and correct.
To be summarized, the contributions of our work are listed below:
* The proposed model can be used on wet fingerprints with tiny image sizes and significantly reduces the false rejection rate (FRR).
* The convenience of fingerprint recognition has been greatly improved.
* The FRR on the FT-lightnoised dataset (real noise) can be reduced from 17.75% to 4.47%.
* The FRR on the FW9395 dataset (synthetic noise) can be reduced from 9.45% to 1.09%.
* Outperform state-of-the-art network models on fingerprint denoising.
## II Related Work
Fingerprint enhancement technologies have been developed for decades, and many traditional methods and algorithms have been introduced, such as Gabor-filtering-based fingerprint enhancement [5, 6, 7], orientation field estimation methods [8, 9, 10, 11, 6, 12, 13], and the total variation (TV) model [14, 13]. In general, these conventional methods may not perform well when heavy noise exists, which leads to poor fingerprint recovery and recognition performance. Nevertheless, they remain beneficial for pre-processing deep learning training data [2].
As described previously, small-area wet fingerprint denoising is difficult, since there is limited input information and high precision is needed so that the denoised fingerprints can be recognized. Most current research either shows poor denoising performance or produces fingerprint contours that are not precise enough for real wet fingerprint images. Below, we describe deep learning approaches for fingerprint denoising based on different neural network models.
### _Residual Neural Networks_
Residual learning and residual neural networks (ResNet) were invented by He et al. [15] to solve the problem of performance degradation as a network model's depth increases. This learning mechanism adds the extracted features to the outputs of previous convolution layers. It has achieved remarkable success in the computer vision field.
The PFE-Net [3] makes use of the residual architecture to improve its performance. To reconstruct the input images, they make use of the residual structure to learn the local features. Also, the scaling is decreasing layer by layer between the residual blocks, which makes the PFE-Net able to learn the fingerprint features robustly.
Fig. 3: Enlarged critical parts of the fingerprints that affect the recognition results.
Fig. 2: Real-world Fingerprint denoising and recognition results with different model on FT-lightnoised dataset.
### _U-Net_
The U-Net [16] is an autoencoder architecture that has been used widely. It was proposed for the semantic segmentation task. The architecture has two main components, an encoder path, and a symmetric decoder path. The encoder is used to capture the context, and the decoder is used to estimate the segmentation. The symmetric architecture concatenates feature maps at the same level, thus improving the feature map's reusability and reducing the information loss during the encoding or decoding process.
FusionNet [17] is a U-Net-based architecture, which focuses on extracting cellular membrane segmentation in electron microscopy (EM) images, and had achieved great success in the EM segmentation tasks. Jung Yoon Bae et al. [18] proposed a network model for fingerprint denoising that is based on FusionNet. DenseUNet [1] is also based on U-Net and focuses on latent fingerprint enhancement. Also, it uses dense blocks to improve the information flow between layers.
### _M-Net_
The M-Net [19] is a U-net based model which is modified for better segmentation. The main difference is that two side paths are added to the two main encoding and decoding paths. The model preserves the input details with these two legs in case the downsampling operator drops image details. Besides, skip connections between corresponding encoding and decoding layers provide sufficient information and better features.
The FPD-M-Net [4] is a model that focuses on latent fingerprint denoising. It is modified from M-Net to fit the fingerprint denoising and inpainting tasks. The segmentation-based architecture demonstrated the capability to handle both denoising and inpainting tasks for fingerprint images simultaneously. The proposed model outperformed both the U-net and the baseline model provided in the competition. The model exhibited robustness to challenges such as strong background clutter and weak signals and effectively performed automatic filling. The findings also emphasized the importance of sensor-specific training for improved results when dealing with images acquired from diverse sensors.
### _Multi-tasks Learning_
Multi-task learning is a technique that trains multiple tasks together to improve accuracy or reduce the number of parameters. This technique has been applied to different domains in deep learning [20, 21, 22, 23, 24, 25, 26], such as text classification [21, 23] and image processing [20, 21, 22, 24, 25, 26].
Conventionally, deep learning models reach their target directly. For example, for fingerprint denoising, most studies produce the denoised output fingerprint directly. In our work, by contrast, the denoised binary output fingerprints are obtained first, and then the denoised non-binary output fingerprints, which are our main target.
FingerNet [2] introduced a multi-task deep learning fingerprint denoising network model. Some significant differences exist between our work and FingerNet [2].
First of all, although both our work and [2] are multi-task, the weights of the tasks are different in our work. In [2], two tasks exist in the model: one is the orientation task, and the other is the enhancement task; these two tasks are equally important and produce equally important outputs. In our work, we also have two tasks: one is the "progressive guided task," which has a lighter weight during training, and the other is the "main task," which has a heavier weight during training. We focus more on the output of the main task; the output of the supported task only guides the main task to achieve better denoising performance.
Second, the output of the supported task in PGT-Net has a concatenation path to the input of the main-task branch. This concatenation further helps the main task improve its denoising performance, since the denoised binary fingerprint provides helpful information such as the fingerprint orientation and the contour of ridges and valleys. In [2], by contrast, the task output was not utilized for further improvement.
## III Network Model
The PGT-Net is a multi-task network model. It takes noisy non-binary inputs and then separates them into two different paths to produce two outputs: denoised binary contour outputs and denoised non-binary outputs. Fig. 4 shows the data flow and the key ideas of our proposed method. Besides, Fig. 5 displays the complete architecture of our proposed model. There are four main data flow blocks in the PGT-Net.
#### Iii-D1 Produce the Basic Feature Map (BFM)
The basic feature map is the important feature map that contains extracted features of the input image. It can also provide information to reduce feature loss during the following denoising process. BFM will be concatenated to the output of data flow block C and the output of data flow block D.
#### Iii-D2 Processing Shared Features
There are some shared features between the non-binary fingerprints and binary fingerprint contour. For example, denoised binary and non-binary fingerprints must have clear ridges and valleys to achieve a higher identification rate after denoising. Consequently, this step, "Processing shared features", is necessary.
#### Iii-D3 Processing Progressive Guided Features
As described previously, the denoised binary and non-binary fingerprints must have a precise contour to have a higher identification rate. The denoised binary fingerprint image contains accurate information about the ridges and valleys of the fingerprints, which is very helpful for the non-binary fingerprint image denoising.
#### Iii-D4 Processing Main Task's Features
At this step, the data flow block D receives the output of data flow block B and the denoised binary output as its input. These feature maps contain valuable information that can be used to denoise the non-binary fingerprints, which is our main target.
Our data flow uses concatenation operations, as shown in Fig. 4. The concatenation operations are used to increase the number of feature maps, and they can be described precisely and mathematically with the concept of a set.
### _PGT-Net-block-84_
To further improve the performance of fingerprint recognition after denoising, PGT-Net-block-84 is proposed. Fig. 5 shows the detailed architecture of the proposed PGT-Net-block-84.
* The first two convolution layers are used to produce the BFM.
* Residual blocks of stages 1-24 are used to process the shared features, and Fig. 6 shows the architecture of a residual block. Further information about residual blocks will be discussed in section III-A1.
* Residual blocks stage 25-60 of the binary branch are used to process the progressive guided features.
* Residual block stages 25-48 of the non-binary branch are used to process the main task's features.
* PGT-Net-block-84 has 84 residual blocks in total.
And detailed information about the architecture will be discussed in sections III-A1 to III-A3.
#### Iii-A1 Residual Blocks
The architecture of the residual blocks is shown in Fig. 6. Each residual block contains two convolution layers, and the choice between two activation functions, Sigmoid and ReLU, is based on the residual block's task. For those residual blocks that handle non-binary information, ReLU is chosen. For those residual blocks that handle binary information, Sigmoid is selected, as it helps the pixel values converge to either one or zero. For example, in PGT-Net, a binary task guides the non-binary task to produce better results. After processing through the shared-feature layers, the left side of the fork handles the binary task, and the right side deals with the non-binary task.
Also, there is a scaling factor \(\varepsilon\) in each residual block. \(\varepsilon\) is set to \(0.01\times\alpha\) initially, and it decays by 0.01 as the stage increases by one, as shown in Eq. 1. Residual scaling has proven helpful when training a deeper network, and such a technique makes the model more robust [27, 28, 29].
Note that the residual scaling factor \(\varepsilon\) can be either positive or negative. For example, the residual block at stage 15 has \(\varepsilon\) = 0.01 \(\times\) (24 - 15) = 0.09, while the residual block at stage 30 has \(\varepsilon\) = 0.01 \(\times\) (24 - 30) - 0.01 = -0.07. We use positive scaling to extract fingerprint features and negative scaling for denoising. Taking PGT-Net-block-84 as an example, stages 1 to 23 are responsible for extracting shared features, so their scaling factor \(\varepsilon\) is set to be positive. Stages 24 to 60 are responsible for binary or non-binary fingerprint image denoising, so they have a negative scaling factor \(\varepsilon\). Fig. 7 shows residual blocks with a positive or negative scaling factor \(\varepsilon\). To avoid the scaling factor being zero, the residual scaling from stage 24 on is subtracted by 0.01 explicitly. Experiments have shown that such a residual scaling technique can achieve better performance; further analysis can be found in section VI-C3.
\[\varepsilon=\begin{cases}0.01(\alpha-\text{current stage}),&if\text{current stage}<24\\ 0.01(\alpha-\text{current stage})-0.01,&otherwise\end{cases} \tag{1}\]
Fig. 4: The data flow of the proposed PGT-Net model.
Fig. 5: The network architecture of multi-task PGT-Net-block-84.
Fig. 6: The architecture of a residual block.
Fig. 7: Residual blocks with a positive or negative scaling factor \(\varepsilon\).
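To make Eq. 1 and Fig. 6 concrete, below is a minimal Keras sketch of a residual block with the stage-dependent scaling; the 3x3 kernel size and the exact placement of the activations are assumptions, since the figure does not fix them:

```python
import tensorflow as tf

def epsilon(stage, alpha=24):
    # Eq. (1): positive before stage 24 (feature extraction),
    # negative from stage 24 on (denoising), never exactly zero.
    if stage < 24:
        return 0.01 * (alpha - stage)
    return 0.01 * (alpha - stage) - 0.01

def residual_block(x, stage, filters=64, binary=False):
    """Two convolutions whose output is scaled by epsilon before the
    skip addition; Sigmoid on the binary branch, ReLU otherwise."""
    act = "sigmoid" if binary else "relu"
    h = tf.keras.layers.Conv2D(filters, 3, padding="same", activation=act)(x)
    h = tf.keras.layers.Conv2D(filters, 3, padding="same")(h)
    return x + epsilon(stage) * h

inp = tf.keras.Input(shape=(176, 36, 64))         # a 64-channel feature map
out = residual_block(inp, stage=15)               # eps = +0.09
out = residual_block(out, stage=40, binary=True)  # eps = -0.17
```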
#### Iii-B2 Binary Progressive Guided Task
We chose binary fingerprints as our guided task because the textures in binary fingerprints are clearer than those in non-binary ones. Binary fingerprints can thus provide higher-quality fingerprint contour information, which guides the main task to find critical information, such as the ridges and valleys of the fingerprints, and thereby reach better denoising performance.
#### Iii-B3 Optimized Loss Function
The Laplacian operator is a differential operator widely used in image edge detection. We add a Laplacian loss to our loss function to emphasize the edge information between the ridges and valleys of a fingerprint, so that the model learns how to restore the orientation of a noisy fingerprint.
Eq. 2 shows the definition of the Laplacian operator, the Laplacian of f is the sum of all unmixed second partial derivatives in the Cartesian coordinates \(x_{i}\).
\[\Delta f=\sum_{i=1}^{n}\frac{\partial^{2}f}{\partial x_{i}^{2}} \tag{2}\]
In practice, we usually use the Laplacian filter to convolve with the image to get the gradient of the image. Eq. 3 shows the discrete form of the Laplacian operator, namely, the Laplacian filter [3].
\[Laplacian\ filter=\begin{bmatrix}-1&-1&-1\\ -1&8&-1\\ -1&-1&-1\end{bmatrix} \tag{3}\]
Fig. 8a, b shows the result of convolving a non-binary fingerprint with the Laplacian filter. Fig. 8c, d shows the result of convolving a binary fingerprint with the Laplacian filter. As can be seen, compared to the non-binary result, binary fingerprints emphasize the gradient, which makes it easier for the model to learn how to restore the texture of fingerprints.
Eqs. 4-8 show the single-task loss function used in PGT-Net. In addition to the mean squared error (MSE), we also add an SSIM loss and a Laplacian loss. The SSIM term in the loss function helps to improve the performance in terms of SSIM in our experiments, and the Laplacian loss captures the gradient of fingerprints, which allows the model to identify the orientations of fingerprints [3]. As Eq. 9 shows, the weightings of the loss function are empirically set to 0.7 for the non-binary task (main task) and 0.3 for the binary task (guided/supported task).
\[L_{MSE}=\frac{1}{N}\sum_{ij}(x_{ij}-y_{ij})^{2} \tag{4}\]
\[L_{Laplacian}=\sum_{ij}(\textit{Lap\ filter}(x_{ij})-\textit{Lap\ filter}(y_{ij})) ^{2} \tag{5}\]
\[SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})} \tag{6}\]
\[L_{SSIM}=1-SSIM(x,y) \tag{7}\]
\[Single\ task\ loss=0.1L_{MSE}+0.2L_{Lap}+0.7L_{SSIM} \tag{8}\]
\[Total\ Loss=0.3\times binary\ task\ loss+0.7\times non\_binary\ task\ loss \tag{9}\]
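A minimal TensorFlow sketch of Eqs. 4-9 is given below; a mean is used for the Laplacian term (Eq. 5 writes a sum, so the normalization here is an assumption), and the images are assumed to be single-channel NHWC tensors in [0, 1]:

```python
import tensorflow as tf

LAP_K = tf.reshape(tf.constant([[-1., -1., -1.],
                                [-1.,  8., -1.],
                                [-1., -1., -1.]]), (3, 3, 1, 1))

def laplacian(img):                      # Eq. (3) applied by convolution
    return tf.nn.conv2d(img, LAP_K, strides=1, padding="SAME")

def single_task_loss(y_true, y_pred):    # Eq. (8)
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    lap = tf.reduce_mean(tf.square(laplacian(y_true) - laplacian(y_pred)))
    ssim = 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    return 0.1 * mse + 0.2 * lap + 0.7 * ssim

def total_loss(bin_true, bin_pred, nb_true, nb_pred):   # Eq. (9)
    return (0.3 * single_task_loss(bin_true, bin_pred)
            + 0.7 * single_task_loss(nb_true, nb_pred))
```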
## IV Network Model on Edge Devices
Fingerprint-related applications usually run on edge devices with limited computing resources. As a result, the denoising algorithm cannot be very complicated or require heavy computation.
PGT-Net is also a neural network model with high scalability: one can easily adjust the number of residual blocks or the architecture. As long as one follows the data flow described in section III, one can obtain a model of lower complexity with excellent performance and only an acceptable downgrade.
PGT-Net-Edge provides an example of simplifying the model to reduce the computational resources needed on edge devices. One can follow similar steps to streamline the model according to one's demands. Fig. 15(a) shows the architecture of the PGT-Net-Edge model. PGT-Net-Edge reduces the complexity in the three ways described below.
#### Iv-1 Residual Blocks Quantity Reduction
The number of residual blocks can easily be reduced, and the reduction can depend on the application. For example, in PGT-Net-Edge, the number of Sigmoid residual blocks, which originally handle the binary supported task, is reduced to zero.
#### Iv-2 Residual Blocks Output Channel Reduction
The output channels of the residual blocks can also be adjusted according to the application, and reducing the output channels significantly reduces the number of parameters. For example, in the original PGT-Net, all residual blocks have 64 output channels, while in PGT-Net-Edge, stages 1 to 28 have 32 output channels and stages 29 to 32 have only 16 output channels.
#### Iv-3 Dynamic Fixed Point Quantization
[30] has proposed an algorithm that quantizes floating-point weights or feature maps to a dynamic fixed-point format. Compared to static fixed-point quantization, dynamic fixed-point quantization is more accurate at the same bit width and thus performs better. The experimental results of PGT-Net-Edge after quantization are shown in section VI-A.
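Below is a minimal sketch of the idea behind dynamic fixed-point quantization, where the integer/fractional split is chosen per tensor from the data range rather than fixed in advance; the details of the algorithm in [30] may differ:

```python
import numpy as np

def dynamic_fixed_point(w, bits=8):
    """Per-tensor dynamic fixed point: the fractional length is chosen
    from the data range instead of being fixed in advance."""
    max_abs = float(np.max(np.abs(w)))
    fl = (bits - 1) - int(np.ceil(np.log2(max_abs + 1e-12)))
    scale = 2.0 ** fl
    q = np.clip(np.round(w * scale), -2 ** (bits - 1), 2 ** (bits - 1) - 1)
    return q / scale  # de-quantized values as used at inference

w = np.random.randn(64, 64).astype(np.float32) * 0.1
wq = dynamic_fixed_point(w)
print(f"max abs quantization error: {np.max(np.abs(w - wq)):.5f}")
```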
Fig. 8: Fingerprint convolved with the Laplacian filter. (a) normal fingerprint (b) normal fingerprint with Laplacian filter (c) binary fingerprint (d) binary fingerprint with Laplacian filter
## V Datasets
### _Denoising Datasets_
We use two datasets, FW9395 and FT-lightnoised, for training. Both datasets are divided into training, validation, and testing sets and are provided by FocalTech, as described in Table I. Both datasets consist of pairs of blurry wet and ground-truth fingerprint images.
#### V-A1 FT-lightnoised Dataset
The FT-lightnoised dataset is collected by the Focaltech optical image sensor. The sensor captures the entire clean and wet fingerprints. After aligning the clean and wet pair, we obtain the small-area fingerprint images by cropping the entire images to size 176 x 36. Fig. 10a shows an example image pair.
#### V-A2 FW9395-synthetic Dataset
The FW9395-synthetic dataset used in this research is collected by the FocalTech capacitive image sensor. Because of hardware limitations, it is hard for the capacitive image sensor to produce aligned wet & clean fingerprint pairs, so we used Gaussian noise to simulate wet fingerprint noise. Fig. 10b shows an example pair of images in the FW9395-synthetic dataset, and the synthesis process is introduced in Section V-B2.
### _Data Preprocessing_
#### V-B1 Generate Binary Fingerprint Images
Binary fingerprints are used as supporting information to improve the wet fingerprints' denoising performance in our work. Friction ridges of a finger contain important fingerprint features; therefore, recovering correct ridges plays a crucial role in the task. Thus, we generated reliable binary data from the original ground truth with a fingerprint enhancer 1 based on Hong's research [5], so that these binary data lead the model to learn better ridge features. Fig. 10c shows an example of a non-binary fingerprint and its corresponding binary fingerprint. The ridges of the fingerprints are set to 1, while the valleys are set to 0.
Footnote 1: Fingerprint enhancer,[https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python](https://github.com/Utkarsh-Deshmukh/Fingerprint-Enhancement-Python).
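The paper uses the fingerprint-enhancer package above for this step; as a stand-in, the sketch below only illustrates the target encoding (ridges to 1, valleys to 0) with a plain Otsu threshold, assuming ridges are dark in the grayscale input and using a hypothetical file name:

```python
import cv2
import numpy as np

def binarize_fingerprint(gray):
    """Map ridges (dark pixels, by assumption) to 1 and valleys to 0."""
    _, th = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return (th > 0).astype(np.float32)

gray = cv2.imread("clean_fingerprint.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
binary = binarize_fingerprint(gray)
```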
#### V-B2 Synthesize Blurry Fingerprint Images
In FW9395, images are collected by a capacitive sensor, which makes it difficult to obtain aligned pairs of wet blurry and clean normal fingerprint data. To have a correspondence between noised and normal fingerprints for neural network training, we need to synthesize blurry fingerprint images from clean normal fingerprint images through the following steps (a code sketch is given after the list).
1. Binarize the fingerprint.
   1. Ridges in the original image will have a pixel value of 255 (white) in the binarized image.
   2. Valleys in the original image will have a pixel value of 0 (black) in the binarized image.
2. Create a Gaussian kernel with size 13, 15, 17, 19, or 21 and std_dev = 1.
3. Set the following parameters:
   1. Appearance probability of noise = \(0.2\)
   2. Darkness of noise = \(-0.2\) (the smaller the value, the darker the noise)
   3. Darkness range = \((-0.01,\ 0.01)\)
4. Select a pixel (x, y) on a ridge.
5. Set darkness_value = \((darkness\ of\ noise)+random(darkness\ range)\).
6. Update the Gaussian kernel by multiplying it with darkness_value.
7. Apply the Gaussian kernel to the chosen area.
The synthetic results are displayed in Fig. 9.
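For concreteness, the steps above could be implemented roughly as follows. This is a sketch under stated assumptions: the density of selected ridge pixels and the border handling are ours, since FocalTech's exact implementation is not public.

```python
import cv2
import numpy as np

def synthesize_wet(img, p_noise=0.2, darkness=-0.2, dark_range=(-0.01, 0.01)):
    """Synthesize wet-fingerprint noise following steps 1-7 above."""
    # Step 1: binarize so ridges are white (255) and valleys black (0).
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    noisy = img.astype(np.float32) / 255.0
    ridge_ys, ridge_xs = np.nonzero(binary)
    for y, x in zip(ridge_ys, ridge_xs):
        if np.random.rand() >= p_noise:            # step 3a: appearance prob.
            continue
        # Step 2: Gaussian kernel with a random odd size, std_dev = 1.
        k = int(np.random.choice([13, 15, 17, 19, 21]))
        g1d = cv2.getGaussianKernel(k, 1.0)
        kernel = g1d @ g1d.T                       # 2-D separable kernel
        # Steps 5-6: darken the kernel (dark_val is negative).
        dark_val = darkness + np.random.uniform(*dark_range)
        kernel = kernel * dark_val
        # Step 7: add the kernel to the chosen area, clipped at borders.
        h = k // 2
        y0, y1 = max(0, y - h), min(noisy.shape[0], y + h + 1)
        x0, x1 = max(0, x - h), min(noisy.shape[1], x + h + 1)
        noisy[y0:y1, x0:x1] += kernel[h - (y - y0):h + (y1 - y),
                                      h - (x - x0):h + (x1 - x)]
    return (np.clip(noisy, 0, 1) * 255).astype(np.uint8)

wet = synthesize_wet(cv2.imread("clean.png", cv2.IMREAD_GRAYSCALE))  # hypothetical path
```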
### _Recognition Datasets_
The fingerprint recognition tool is provided by FocalTech; its algorithm is based on SIFT [31] and is the same as the fingerprint recognition software applied on smartphones. The tool requires two inputs: the enrolled and the identified fingerprint images. The enrolled images are clean fingerprints registered for each finger in the FocalTech recognition tool. The identified images are noisy fingerprints affected by water, grease, or sweat. After the identified fingerprint images are restored by the model, they are fed into the recognition tool to examine whether the restored outputs can be recognized. The FT-lightnoised recognition dataset contains fingerprints from four people with different fingers, and the FW9395 recognition dataset contains fingerprints from six people with different fingers. Table II summarizes the detailed information of the datasets and their corresponding FRR when the denoising algorithm is not applied.
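The recognition tool itself is proprietary; a rough SIFT-based stand-in for its accept/reject decision might look like the following, where the ratio-test threshold and the minimum match count are assumptions, not the tool's settings.

```python
import cv2

def is_recognized(enrolled_path, identified_path, min_matches=20):
    """Match SIFT descriptors between an enrolled and an identified
    fingerprint; accept when enough matches survive Lowe's ratio test."""
    img1 = cv2.imread(enrolled_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(identified_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) >= min_matches
```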
## VI Experimental Results
### _Performance Summarization_
Table III summarizes the denoising performance of different models, and Table IV summarizes the recognition performance of different models. Note that because PGT-Net-Edge is optimized for the FW9395 dataset, it may perform poorly on the FT-lightnoised dataset, which has more complicated noise. Also, since the FW9395 dataset has simpler noise synthesized
Fig. 9: FW9395 synthetic data examples.
artificially, the differences between the single-task and multi-task models are small. For the more complicated FT-lightnoised dataset, we can see the value of the proposed multi-task model.
To summarize, Fig. 11 and Fig. 12 show the results for the FT-lightnoised dataset, and Fig. 13 shows the results for the FW9395 dataset. The three figures mentioned above are all based on PGT-Net-block-84, the model with the best performance. As these figures show, the proposed algorithm achieves outstanding results across different noise levels and sensors.
As mentioned in Section IV, PGT-Net-Edge reduces the complexity in the three ways described below.
* Residual blocks quantity reduction.
* Residual blocks output channel reduction.
* Dynamic fixed point quantization.
Table V shows the experimental results after quantization. There are only minor performance downgrades, but the model size has been quartered by quantizing from float32 to the 8-bit dynamic fixed point. As the tables show, there are only small performance downgrades compared to the
Fig. 11: PGT-Net-block-84 denoised and recognition results on the FT-lightnoised dataset (light and medium noise).
Fig. 14: The denoised images and recognition results with different residual scaling settings.
Fig. 12: PGT-Net-block-84 denoised and recognition results on the FT-lightnoised dataset (heavy noise).
Fig. 10: Fingerprint pairs on different datasets. (a) FT-lightnoised wet and clean pair (b) FW9395 wet and clean pair (c) clean fingerprint and corresponding binary fingerprint
PGT-Net-block-84, which has a similar dataflow but a larger number of parameters.
### _Performance Comparison_
Table VI summarizes the experimental results of the fingerprint denoising-related works mentioned above, and Table VII summarizes the experimental results of fingerprint recognition compared with related works. According to these two tables, PGT-Net performs well on both denoising and recognition tasks. From Table VII, PGT-Net yields a substantial improvement in fingerprint recognition, with a minor increase in the FAR, which rises slightly from 0.02% to 0.07%. One potential explanation is that during the restoration process of our proposed model, features belonging to fingerprints from other classes are inadvertently restored, as PGT-Net did not incorporate fingerprint categories in its training process. We consider this an acceptable trade-off when weighed against the improvement in FRR.
### _Ablation Studies_
Here we present ablation experiments to analyze the contribution of each component of our model. Evaluation is performed on the FW9395 and FT-lightnoised datasets.
#### VI-C1 Single-task versus Multi-task Model
Our model yields better performance as a multi-task model (Fig. 5). Fig. 15 shows the architecture of PGT-Net-single-task; it is similar to the original PGT-Net but in a single-task version, and the number of residual blocks is the same, so the two models have a similar number of parameters. Both the single-task and multi-task models also use the same residual scaling setting with \(\alpha\) = 24, as described in Eq. 1, ensuring a fair comparison. We demonstrate the evaluation results of single-task and multi-task model training on the FT-lightnoised dataset.
fingerprint from the first column. Still, in some details the multi-task model produces a more accurate fingerprint contour (marked in red boxes), and this information allows the denoised fingerprint of the multi-task model to be recognized successfully. In conclusion, the binary branch does help in fingerprint restoration.
#### VI-C2 Binary Progressive Guided Task
Here we use some techniques to improve the performance of our Multi-task model.
* _Progressive Guided Concatenation Path:_ The model with the "progressive guided concatenation path" is shown in Fig. 5, and the model without it is modified from the original PGT-Net by deleting this path. Experiments show that the concatenation path improves the performance.
* _Separate the Training Process:_
* Phase 1: Train the binary-related parameters in our model first.
* Phase 2: Do the regular training. This technique improves the performance because the binary path in the model is pre-trained and can produce a more precise binary fingerprint contour to guide the model toward better performance. Fig. 15c shows the partial model that can be trained during the two-phase training process: yellow blocks are trainable in both Phases 1 and 2, while gray blocks are only trainable in Phase 2. A parameter-freezing sketch of this schedule is given below.
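A minimal PyTorch sketch of the two-phase freezing follows; `ToyPGTNet` and the `binary_branch` attribute are hypothetical stand-ins for the actual model, introduced only to make the snippet self-contained.

```python
import torch.nn as nn

class ToyPGTNet(nn.Module):
    """Stand-in model with a binary branch and a main branch."""
    def __init__(self):
        super().__init__()
        self.binary_branch = nn.Conv2d(1, 16, 3, padding=1)
        self.main_branch = nn.Conv2d(16, 1, 3, padding=1)

model = ToyPGTNet()

def set_trainable(model, phase):
    # Phase 1 trains only the binary-related parameters (the yellow
    # blocks in Fig. 15c); Phase 2 trains everything.
    for name, p in model.named_parameters():
        p.requires_grad = (phase == 2) or name.startswith("binary_branch")

set_trainable(model, phase=1)   # pre-train the binary path
# ... optimize the binary loss ...
set_trainable(model, phase=2)   # regular training of the whole model
```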
Table VIII summarizes the denoising performance with different settings. As the table shows, the "progressive guided concatenation path" and the progressive two-stage training make the model perform better.
#### VI-C3 Residual scaling factor \(\varepsilon\)
As described previously in Section III-A1:
* Each residual block has a constant \(\varepsilon\) for residual scaling, as shown in Fig. 6.
* The scaling factor \(\varepsilon\) can be either positive or negative, as shown in Fig. 7.
* In our work, we set \(\alpha\) to 24 in Eq. 1.
If we set \(\varepsilon\) as in FENet [3], such that all residual blocks have a positive \(\varepsilon\), namely \(\varepsilon=0.01\times(\alpha-current\ stage)\) with \(\alpha=61\) or \(85\), the training loss increases significantly, resulting in poor recognition performance.
There are two different settings with positive \(\varepsilon\), as described below:
* \(\alpha=61\), because the maximum stage of PGT-Net-block-84 is 60. The model architecture of this setting is exactly the same as that of PGT-Net-block-84 (Fig. 5).
* \(\alpha=85\), because there are 84 residual blocks in PGT-Net-block-84. Although the model architecture is similar to PGT-Net-block-84, some differences remain: the stages of the non-binary branch start at 61 instead of 25 and end at 84 instead of 48, as shown in Fig. 15c. The non-binary branch starts at stage 61 because its input comes from the binary output, which ends at stage 60. A sketch of the per-stage scaling is given below.
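The sketch below assumes Eq. 1 has the FENet-style form \(\varepsilon=0.01\times(\alpha-current\ stage)\), so that with \(\alpha=24\) the early stages get positive \(\varepsilon\) and the later stages negative \(\varepsilon\); the convolutional body of the block is a placeholder, not the released layer configuration.

```python
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block with per-stage scaling: y = x + eps * F(x)."""
    def __init__(self, channels, stage, alpha=24):
        super().__init__()
        # Assumed form of Eq. 1; positive for stage < alpha, else negative.
        self.eps = 0.01 * (alpha - stage)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.eps * self.body(x)

# Stages 1..60 of a branch, all with 64 output channels.
blocks = nn.Sequential(*[ScaledResidualBlock(64, s) for s in range(1, 61)])
```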
The training losses of the residual scaling settings mentioned above are shown in Fig. 16. The proposed residual scaling further reduces the training loss and thus achieves better denoising and recognition performance.
Table IX summarizes the denoising performance on the FT-lightnoised dataset, and Table X summarizes the recognition performance on the FT-lightnoised dataset. The proposed residual scaling indeed performs better in our experiments.
Fig. 14 shows the denoised outputs of the different residual scaling settings. The images denoised with the all-positive residual scaling settings cannot be recognized successfully, but the output fingerprint of the proposed residual scaling can. Although all settings produce clear denoised fingerprints, in some details (marked in red boxes) the proposed residual scaling makes
Fig. 16: Training loss of different residual scaling setting.
Fig. 15: PGT-Net variants. (a) The architecture of PGT-Net-Edge (b) Single-task version of PGT-Net-block-84 (c) Two-phase training process of PGT-Net
the denoised fingerprint texture more precise and thus achieves a lower FRR.
## VII Conclusion
In this work, we presented PGT-Net for wet fingerprint denoising. The proposed methodologies have proven effective on multiple types of sensors and can recover noisy fingerprints covered with real or synthetic noise. PGT-Net uses residual blocks as its fundamental structure, and, considering that the fingerprints captured by the sensor are small and thin, a multi-task architecture is added to PGT-Net so that the results of the supporting task guide the main task toward better performance. The proposed residual scaling also does a great job of reducing the training loss.
The proposed data flow is simple and scalable. One can easily modify the model according to the proposed data flow and obtain excellent denoising performance.
The convenience of fingerprint recognition in daily usage has also been greatly improved. The proposed methodologies have proven effective on both optical and capacitive sensors: the FRR has been reduced from 17.75% to 4.47% for the FT-lightnoised dataset and from 9.45% to 1.09% for the FW9395 dataset.
|
2306.10792 | NAR-Former V2: Rethinking Transformer for Universal Neural Network
Representation Learning | As more deep learning models are being applied in real-world applications,
there is a growing need for modeling and learning the representations of neural
networks themselves. An efficient representation can be used to predict target
attributes of networks without the need for actual training and deployment
procedures, facilitating efficient network deployment and design. Recently,
inspired by the success of Transformer, some Transformer-based representation
learning frameworks have been proposed and achieved promising performance in
handling cell-structured models. However, graph neural network (GNN) based
approaches still dominate the field of learning representation for the entire
network. In this paper, we revisit Transformer and compare it with GNN to
analyse their different architecture characteristics. We then propose a
modified Transformer-based universal neural network representation learning
model NAR-Former V2. It can learn efficient representations from both
cell-structured networks and entire networks. Specifically, we first take the
network as a graph and design a straightforward tokenizer to encode the network
into a sequence. Then, we incorporate the inductive representation learning
capability of GNN into Transformer, enabling Transformer to generalize better
when encountering unseen architecture. Additionally, we introduce a series of
simple yet effective modifications to enhance the ability of the Transformer in
learning representation from graph structures. Our proposed method surpasses
the GNN-based method NNLP by a significant margin in latency estimation on the
NNLQP dataset. Furthermore, regarding accuracy prediction on the NASBench101
and NASBench201 datasets, our method achieves highly comparable performance to
other state-of-the-art methods. | Yun Yi, Haokui Zhang, Rong Xiao, Nannan Wang, Xiaoyu Wang | 2023-06-19T09:11:04Z | http://arxiv.org/abs/2306.10792v2 | # NAR-Former V2: Rethinking Transformer for Universal Neural Network Representation Learning
###### Abstract
As more deep learning models are being applied in real-world applications, there is a growing need for modeling and learning the representations of neural networks themselves. An efficient representation can be used to predict target attributes of networks without the need for actual training and deployment procedures, facilitating efficient network deployment and design. Recently, inspired by the success of Transformer, some Transformer-based representation learning frameworks have been proposed and achieved promising performance in handling cell-structured models. However, graph neural network (GNN) based approaches still dominate the field of learning representation for the entire network. In this paper, we revisit Transformer and compare it with GNN to analyse their different architecture characteristics. We then propose a modified Transformer-based universal neural network representation learning model NAR-Former V2. It can learn efficient representations from both cell-structured networks and entire networks. Specifically, we first take the network as a graph and design a straightforward tokenizer to encode the network into a sequence. Then, we incorporate the inductive representation learning capability of GNN into Transformer, enabling Transformer to generalize better when encountering unseen architecture. Additionally, we introduce a series of simple yet effective modifications to enhance the ability of the Transformer in learning representation from graph structures. Our proposed method surpasses the GNN-based method NNLP by a significant margin in latency estimation on the NNLQP dataset. Furthermore, regarding accuracy prediction on the NASBench101 and NASBench201 datasets, our method achieves highly comparable performance to other state-of-the-art methods.
## 1 Introduction
With the maturity of deep learning technology, an increasing number of deep network models of various sizes and structures are being proposed and implemented in academic research and industrial applications. In this process, the rapid deployment of networks and the design of new networks that meet task requirements are significant. To address this issue, researchers propose using machine learning models to solve the deployment and design problems of the models themselves. One popular strategy is encoding the input neural network and utilizing the resulting neural network representation to predict a specific target attribute directly without actually executing the evaluation program. In recent years, we have witnessed success in accelerating model deployment and design processes with the help of neural network representations [2; 3; 11; 10; 9; 8; 20]. Taking the advantages of latency predictors [2; 3; 7; 6; 20; 31], significant time cost and expertise efforts can be saved by not having to carry out the time-consuming process of compilation, deployment, inference, and latency
evaluation when engineers choose networks for application. Through the use of accuracy predictors [2; 9; 10; 12; 20; 8; 26], researchers can avoid the resource-intensive process of network training and instead perform a forward inference process to evaluate the accuracy of a multitude of networks. This measure dramatically reduces the time cost associated with network design.
Although the vanilla Transformer is designed for natural language processing, Transformer architecture has found widespread adoption across diverse fields owing to its strengths in global modeling and parallelizable computation [4; 5; 14; 19; 34; 30]. Very recently, several researchers have attempted to learn appropriate representations for neural networks via Transformer [2; 9]. These methods have indeed achieved leading performance on relevant tasks. Nevertheless, they are mainly designed for encoding the architecture of cells (basic micro units of repeatable neural networks) in cell-structured networks. As shown in the latency prediction experiment in NAR-Former [2], poor generalization performance occurs when the depth of the input architecture reaches hundreds of layers. In the development process of neural network representation learning, Graph neural network (GNN) [32; 35] is also a promising technique for learning neural network representations [3; 24; 23; 22; 20]. They model the input neural architecture as a directed acyclic graph (DAG) and operate on the graph-structured data, which comprises the node information matrix and adjacency matrix. Recently, the NNLP [3] introduced a dedicated latency prediction model based on GNNs, which is capable of encoding the complete neural network having hundreds of layers and achieving a cutting-edge advance.
In fact, both cell-structured architectures and complete neural networks are widely used in various applications. Cell-structured models offer good scalability, allowing for easy scaling by adding or removing cells. This adaptability makes them suitable for addressing problems of different complexities and data sizes, while also facilitating incremental model development and deployment. Complete neural networks provide better flexibility in connectivity and can achieve higher accuracy in certain cases. Furthermore, in some cases, such as latency estimation, encoding the complete network is necessary. To handle the various network architectures appearing in different tasks, both GNN-based and Transformer-based models are necessary. However, relying on multiple model architectures introduces constraints that may not be conducive to practical applications. For instance, when a designed network must satisfy several attributes, having similar model structures and high-accuracy predictions for the different attributes reduces code redundancy and improves work efficiency.
In this paper, we build upon the research conducted in NAR-Former [2] and present a novel framework called NAR-Former V2 for universal neural network representation learning. Our framework can handle cell-structured networks and learn representations for entire networks. To accomplish this, we incorporate graph-specific properties into the vanilla Transformer and introduce a graph-aided attention-based Transformer block. This approach combines the strengths of both the Transformer and graph neural networks (GNNs). Extensive experiments are conducted to evaluate our proposed framework. Results show that: (1) our method can be applied to predict different attributes, can outperform the state-of-the-art method in latency prediction on the NNLQP dataset [3], and can achieve promising results in accuracy sorting prediction on the NAS-Bench-101 and NAS-Bench-201 datasets [1; 21]; (2) our method has good scalability, being capable of encoding networks having only a few operations or complete neural networks that have hundreds of operations.
## 2 Related work
### Representation and attribute prediction of neural networks
Neural network representation learning is the basis for evaluating the attributes of different networks via machine learning models. Early methods [33; 26] construct representation models for learning neural network representation based on LSTM and MLP. Peephole [33] inputs the embedding of each layer to LSTM to predict accuracy, which neglects the topological structure and is limited to handling only sequential architectures. Later, in order to better capture the structural information of the network, an accuracy predictor [8] uses a binary path encoding with a length equal to the number of possible paths from input to output given in terms of operations, where the element corresponding to a path present in the input network is set to 1. When the neural network is regarded as a directed acyclic graph, the adjacency matrix describes the connection between nodes, so it is naturally used to encode the topological structure of the neural network. NAS-Bench-101 [1] proposed to encode the given neural network as a concatenated vector of a flat adjacency matrix
and a list of node labels. Many other methods [3; 7; 11; 20; 22; 23] realize accuracy and latency prediction by directly inputting the original two-dimensional adjacency matrix together with the node information matrix to GNN, which can realize the explicit encoding of the input network topology. Recently, other methods have focused on enhancing the original GNN or introducing transformers to obtain more meaningful neural network representations [16; 12; 9; 2].
### Transformer
Transformer [34] is a self-attention-based neural network architecture that has revolutionized natural language processing [15; 30; 17] and has been adopted in many other fields [4; 5; 14; 2; 9; 18; 19]. Transformer has recently been successfully introduced into neural network representation learning [9; 2]. TNASP [9] inputs the sum of operation type embedding matrix and Laplacian matrix into standard Transformer. NAR-Former [2], on the other hand, encodes each operation and connection information of this operation into token and inputs all tokens into a proposed multi-stage fusion transformer. Excellent attribute prediction results have been achieved on cell-based dataset by using these methods. However, the strong long-range modeling ability of the self-attention mechanism may also result in subtle local variation affecting the representation of all tokens. Due to the potential impact of this feature on the generalization ability, although NAR-Former [2] has made attempts to encode complete neural networks, the results are still unsatisfactory.
### Graph neural network
GNNs are designed to handle graph-structured data, which is a fundamental representation for many real-world problems such as social network analysis and recommendation systems [27; 29]. Given that neural networks can be viewed as graphs, GNN-based models have emerged as a prominent and widely adopted approach for neural network representation learning [3; 7; 11; 20; 22; 23]. GNNs show generalization ability through a simple mechanism of aggregating information from neighbors. For instance, the recently proposed GNN-based model [3] can obtain representations of neural networks with hundreds of layers and achieves new state-of-the-art results in latency prediction, even if the input network structure has not been seen during training. Nevertheless, the simple structural characteristics of GNNs, which contribute to its strong generalization ability, also lead to the need for further improvement in the performance of methods based on original GNN in cellular structure and complete neural network representation learning. Therefore, it is a promising approach for neural network representation learning to combine the Transformer and GNN to leverage the strengths of both models.
Figure 1: Diagrams of three modules. (a) The vanilla Transformer block [25]. (b) The GNN layer1 with mean aggregator [32]. (c) The proposed graph-aided attention Transformer block.
## 3 Method
### Motivation
As mentioned in the Sec. 1, Transformer-based models have demonstrated remarkable performance in encoding and learning representations of neural networks when the input is in the form of cells. However, when dealing with complete deep neural networks (DNNs) consisting of hundreds of layers, and the depth of the input data is unknown during training, they may sometimes exhibit poorer performance compared to GNN-based methods. Additionally, as highlighted in [3], real-world applications often show significant differences in the topologies and depths between training and test samples. Consequently, representation learning models must possess strong generalization abilities for handling unseen data. In this regard, GNN-based models appear to achieve better performance.
This observation has prompted us to reconsider the two types of inputs, namely cells and complete DNNs, as well as the two representation learning models, the Transformer and GNN. Through a detailed comparative analysis of the structures of the Transformer and GNN, we speculate that the insufficient generalization capability of Transformer-based methods may be attributed to its structure and computation characteristics. As we know, the self-attention structure in transformers is a crucial design that allows for the effective extraction of global features in a data-driven manner. However, this structure becomes a double-edged sword when learning network representations. For input neural networks with depths of hundreds of layers, the Transformer's impressive capability to capture global information can sometimes lead to excessive sensitivity. This stems from the fact that the Transformer models interactions between all tokens using its self-attention mechanism, treating the entire sequence as a fully connected graph. This dense attention mechanism can give rise to a particular issue: even a subtle variation, such as reducing the kernel size in layer "i" from \(5\times 5\) to \(3\times 3\), can affect the representation of all other layers, ultimately leading to significant differences in the final representation. As a result of this issue, the trained model may be biased toward fitting the training data. Consequently, when the model is employed for inferring architectures outside the training data distribution, it yields inferior results and demonstrates poorer generalization performance. The corresponding experiments are presented in Sec. 4.4.
### Transformer grafted with GNN
Fig.1 shows the vanilla transformer block, GNN block, and our proposed graph-aided attention Transformer block. As shown in Fig.1 (a), the vanilla Transformer block has two major parts:
\[\hat{H}^{l}=\mathrm{SelfAttn}(\mathrm{LN}(H^{l-1}))+H^{l-1}, \tag{1}\] \[H^{l}=\mathrm{FFN}(\mathrm{LN}(\hat{H}^{l}))+\hat{H}^{l}, \tag{2}\]
where \(H^{l}\) is the feature for the layer \(l\). \(\hat{H}^{l}\) is an intermediate result. SelfAttn, FFN, and LN refer to self-attention, feed-forward network, and layer normalization, respectively. GNN block just has one major part, where the representation is updated following:
\[\hat{H}^{l}=\mathrm{GraphAggre}(H^{l-1},A)+W_{r}^{l}H^{l-1}, \tag{3}\] \[H^{l}=\mathrm{L}_{2}(\hat{H}^{l}), \tag{4}\]
where \(\mathrm{GraphAggre}(H^{l-1},A)=W_{a}^{l}(\mathrm{Norm}(A)H^{l-1})\). \(A\in\mathbb{R}^{N\times N}\) is the adjacency matrix, and \(\mathrm{L}_{2}\) denotes the \(\mathrm{l}2\)-normalization function. \(W\) with different superscripts and subscripts represents different learnable transformation matrices.
Comparing formulas (3) and (4) with formulas (1) and (2), we can observe two major differences between the Transformer block and GNN block:
* The Transformer utilizes self-attention to fuse information from a global perspective, while the GNN uses graph aggregation to fuse neighbor information based on the adjacency matrix.
* The Transformer block includes an additional FFN (Feed-Forward Network) component, which enhances information interaction between channels.
The advantage of self-attention lies in its data-driven structure, allowing for flexible adjustment of information fusion weights based on the input data. On the other hand, the advantage of GNN is that graph aggregation focuses on the topological structure. These two advantages are not contradictory to each other. Consequently, we have naturally come up with an idea to combine the strengths of self-attention and graph aggregation. This approach inherits the flexibility of self-attention while benefiting from the good generalization capability of graph aggregation.
To implement this idea, we consider the neural network encoded as a graph, with the operations or layers in the network treated as nodes. Assuming the graph has \(N\) nodes, the transformer layer we have designed for universal neural network representation learning (Fig. 1 (c)) is calculated as follows:
\[\widetilde{H}^{l}=\mathrm{TAEnhance}(H^{l-1},D), \tag{5}\] \[\hat{H}^{l}=\mathrm{L}_{2}(\mathrm{GraphAttn}(\widetilde{H}^{l}, A)+W_{r}^{l}\widetilde{H}^{l}),\] (6) \[H^{l}=\mathrm{GFFN}(\mathrm{LN}(\hat{H}^{l}))+\hat{H}^{l}, \tag{7}\]
where \(D\in\mathbb{R}^{N\times 1}\) is a vector that records the number of nodes directly connected to each node. The Graph-aided Attention (GraphAttn) module is responsible for performing attention calculations using the properties of the graph structure to adjust global self-attention. The Type-aware Enhancement module (TAEnhance) is utilized to further enhance the representation. We introduce Grouped Feed Forward Network (GFFN) by introducing group linear transformation into the original FFN. In the following sections, we will provide a detailed introduction to each component.
**Graph-aided attention** In the proposed graph-aided attention, we employ the adjacency matrix to govern the attention calculation range. Moreover, the adjacency matrix characterizes the inter-layer connection relationships within the neural network, enabling the model to acquire topology knowledge. Hence, we define this module as the Graph-aided Attention module:
\[X^{l}=\mathrm{Sigmoid}(W_{q}^{l}\widetilde{H}^{l}+b_{q}^{l}), \tag{8}\] \[S^{l}=(X^{l}X^{lT}/\sqrt{d})\odot A,\] (9) \[Z^{l}=W_{a}^{l}(\mathrm{Norm}(S^{l})\widetilde{H}^{l})+b_{a}^{l}. \tag{10}\]
The \(\mathrm{Norm}(\cdot)\) means that for each node, the attention weights between it and other nodes are transformed to (0, 1) by dividing it by the sum. The character \(b\) with different superscripts and subscripts represents different learnable biases. The \(d\) refers to feature dimension of \(X^{l}\). Note that simply using the adjacency matrix to control the attention map in self-attention is insufficient. To make this approach work, we have discarded the original softmax operation in self-attention and replaced it with a linear attention mechanism. This is because the softmax operation tends to focus excessively on the current node while neglecting neighboring nodes. Consequently, we have inserted a sigmoid activation function before the linear attention to ensure that all values in the attention map are positive. For further comparisons with the original self-attention, please refer to the supplementary.
**Type-aware enhancement module** The connection between a layer and other layers is related to the type of that layer. Therefore, the number of layers connected to each layer can be used to assist the model in learning the layer type. Fully utilizing this internal characteristic of the graph-structured data helps improve the learned representations. The enhanced representation is obtained by:
\[\mathrm{TAEnhance}(H^{l-1},D)=\mathrm{Sigmoid}(W_{d}^{l}D+b_{d}^{l})\odot H ^{l-1}. \tag{11}\]
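As a concrete illustration, a minimal PyTorch sketch of Eqs. (5)-(11) might look as follows. The hidden widths, the group count and expansion ratio of the GFFN, and the use of 1x1 grouped convolutions for the group linear transformation are our assumptions, not the released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAidedBlock(nn.Module):
    """Sketch of the graph-aided attention Transformer block.

    H: node features (B, N, dim); A: adjacency matrix (B, N, N);
    D: per-node degree vector (B, N, 1). dim must divide by `groups`.
    """
    def __init__(self, dim, groups=8):
        super().__init__()
        self.q = nn.Linear(dim, dim)               # W_q, b_q
        self.a = nn.Linear(dim, dim)               # W_a, b_a
        self.r = nn.Linear(dim, dim, bias=False)   # W_r
        self.d = nn.Linear(1, dim)                 # W_d, b_d (Eq. 11)
        self.ln = nn.LayerNorm(dim)
        # Grouped FFN: group linear transforms via 1x1 grouped convs.
        self.gffn = nn.Sequential(
            nn.Conv1d(dim, 4 * dim, 1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv1d(4 * dim, dim, 1, groups=groups),
        )

    def forward(self, H, A, D):
        # Eq. (5): type-aware enhancement with the degree vector D.
        H = torch.sigmoid(self.d(D)) * H
        # Eqs. (8)-(10): graph-aided linear attention, masked by A.
        X = torch.sigmoid(self.q(H))
        S = (X @ X.transpose(-1, -2) / X.shape[-1] ** 0.5) * A
        S = S / S.sum(dim=-1, keepdim=True).clamp_min(1e-6)   # Norm(.)
        Z = self.a(S @ H)
        # Eq. (6): residual connection followed by l2 normalization.
        H_hat = F.normalize(Z + self.r(H), p=2, dim=-1)
        # Eq. (7): grouped FFN with pre-LN and a residual connection.
        y = self.gffn(self.ln(H_hat).transpose(-1, -2)).transpose(-1, -2)
        return y + H_hat
```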
### Universal representation learning for neural network
In this subsection, we will present the construction of a comprehensive framework for neural network encoding and representation based on the proposed enhanced Transformer block. We will also explain how this framework can be utilized to predict network attributes. The overall system, as depicted in
Fig. 2, is composed of three consecutive stages: neural network encoding, representation learning, and attribute prediction.
**Neural network encoding** We have taken inspiration from the tokenizer used in NAR-Former [2] and made certain modifications to encode the input neural network. For a given network consisting of \(N\) layers or operations, whether it is a cell architecture or a complete DNN, we represent it as a sequence feature comprising vectors corresponding to each layer: \(T=(t_{1},t_{2},\cdots,t_{N})\in\mathbb{R}^{N\times C}\). Each vector encapsulates both the operation and position information: \(t_{i}=(t_{i}^{\mathrm{op}},t_{i}^{\mathrm{pos}})\in\mathbb{R}^{C}\).
Following the encoding scheme proposed in NAR-Former [2], we use the position encoding formula [34; 13] to transform the single real-valued numbers (e.g. operation labels and node position indices) of relevant information into a higher-dimensional space. We denote this mapping scheme as \(f_{\mathrm{PE}}(\cdot)\).
For the node position encoding \(t_{i}^{\mathrm{pos}}\), since our improved transformer can obtain the topology information of the network with the help of the adjacency matrix, we only encode the self-position of the node with \(f_{\mathrm{PE}}(\cdot)\). For the operation encoding \(t_{i}^{\mathrm{op}}\), there are slight differences in the specific encoding content for input networks of different scales. If the input is in the form of a cell architecture [1; 21], there are usually no more than ten different options for each architecture operation. In this case, we directly assign category labels to all possible operations, and then use the function \(f_{\mathrm{PE}}(\cdot)\) to encode the label of the operation to obtain \(t_{i}^{\mathrm{op}}\). However, when a complete DNN is used as input [3], more abundant operational information can be extracted and encoded. In this case, we first use one-hot vectors, which ensure the same distance between different categories, to encode the type of operation (e.g. convolution, batch normalization, ReLU, concatenation). Then use the function \(f_{\mathrm{PE}}(\cdot)\) to encode the properties (e.g. kernel size, number of groups) of the operation, which is then concatenated with the one-hot type vector as \(t_{i}^{\mathrm{op}}\).
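A minimal sketch of the mapping \(f_{\mathrm{PE}}(\cdot)\) applied to a single scalar is shown below; the output dimension and frequency base are assumed hyper-parameters, not the values used in the paper.

```python
import torch

def f_pe(value, dim=32, base=10000.0):
    """Map a scalar (operation label, node index, kernel size, ...) to a
    dim-dimensional sin/cos vector, following the position-encoding
    formula of [34; 13]."""
    i = torch.arange(dim // 2, dtype=torch.float32)
    freqs = value / base ** (2 * i / dim)
    return torch.cat([torch.sin(freqs), torch.cos(freqs)])

t_pos = f_pe(5.0)                  # self-position of node 5
t_op = f_pe(3.0)                   # label of the 3rd operation category
token = torch.cat([t_op, t_pos])   # one layer's vector t_i
```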
**Representation learning** The model for learning neural network representations \(H^{K}\) is constructed by stacking multiple instances of our proposed enhanced Transformer blocks. These improvements, specifically tailored for neural network data, allow the model to learn representations that are more meaningful and exhibit enhanced generalization capabilities.
**Attributes predicting** Taking the representation \(H^{K}\) as input, the target attribute can be predicted by using the predicting head:
\[\hat{y}=-\mathrm{logsigmoid}(\mathrm{FC}(\mathrm{ReLU}(\mathrm{FC}(\mathrm{ReLU}(\mathrm{FC}(H^{K})))))). \tag{12}\]
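In PyTorch, Eq. (12) corresponds to a small MLP head such as the following sketch; the hidden width is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class PredictHead(nn.Module):
    """Three FC layers with ReLU, then -logsigmoid, as in Eq. (12)."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, 1)

    def forward(self, h):
        y = self.fc3(F.relu(self.fc2(F.relu(self.fc1(h)))))
        return -F.logsigmoid(y)   # maps the score to a positive value
```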
Currently, among the various attributes of the network, accuracy and latency are the two main types of predicted objects. Because they have extremely high acquisition costs, and are the primary manifestation of network performance and efficiency.
For latency prediction, due to the strong correlation between batch size, memory access, parameter quantity, and FLOPs with network latency, the encoding corresponding to these characteristics and the representation \(H^{K}\) are input into the predicting head together. To train the whole latency prediction model, the mean square error (MSE) function is adopted to measure the difference between predicted results and the ground truths.
For accuracy prediction, in addition to MSE loss function, architecture consistency loss (AC_loss) and sequence ranking related loss (SR_loss) proposed by NAR-Former [2] are also used. Following NAR-Former, we employed a hierarchical fusion strategy in accuracy prediction experiments. We
Figure 2: Overview of attribute prediction model.
use a simplified approach, which computes the weighted sum of the outputs of each transformer layer with adaptive weights.
## 4 Experiments
In this section, we conduct experiments on NNLQP [3], NAS-Bench-101 [1], and NAS-Bench-201 [21] to evaluate the performance of our NAR-Former V2. A series of ablation experiments were performed to corroborate the effectiveness of our design details. More details about implementation and analysis will be provided in the supplementary materials.
### Implementation details
**Model details** For latency experiments, the number of GraphAttn-based Transformer blocks is set to 2, which is the same as the baseline [3]. As for accuracy predicting, we fix the number of Transformer blocks to 6 to align with the standard Transformer used in the baseline [2].
**Training details** All experiments were trained using the Adam optimizer. We used a linear learning rate decay strategy with a warm-up, in which the learning rate uniformly increased to 0.001 during the first 10% of the training steps and then gradually decayed to 0. The batch size was fixed at 16. Our models are trained on a machine with a GeForce RTX 3090 GPU. To reduce the effect of randomness, each model was trained 12 times and the two runs with the best and worst indicators were discarded.
### Latency prediction
We conduct latency prediction on the recently released NNLQP dataset [3], which comprises 20000 complete deep learning networks and their corresponding latencies on the target hardware. This dataset has 10 different types of networks (referring to the first column of Tab. 1), with 2000 networks per type. Following NNLP [3], we use Mean Absolute Percentage Error (MAPE) and Error Bound Accuracy (Acc(\(\delta\))) to measure the deviations between latency predictions and ground truths. The lower the MAPE, the higher the prediction accuracy, while the opposite is true for Acc(\(\delta\)).
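Both metrics are straightforward to compute; the NumPy sketch below is consistent with the definitions as we understand them (Acc(\(\delta\)) as the fraction of samples whose relative error stays below \(\delta\)).

```python
import numpy as np

def mape(pred, gt):
    """Mean Absolute Percentage Error between predictions and labels."""
    return np.mean(np.abs(pred - gt) / gt)

def acc_within(pred, gt, delta=0.10):
    """Error Bound Accuracy Acc(delta), e.g. Acc(10%) for delta=0.10."""
    return np.mean(np.abs(pred - gt) / gt < delta)
```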
Here, we considered two different scenarios. In the first scenario, the training and testing sets are from the same distribution. We constructed the training set with the first 1800 samples from each of the ten network types, and the remaining 2000 networks were used as the testing set. The detailed results are shown in Tab. 1. When testing with all test samples, the average MAPE of our method is 0.4% lower than that of NNLP [3], and the average Acc(10%) is 1.16% higher than that of NNLP. When tested on various types of network data separately, except for the NASBench201 family, our method consistently outperforms NNLP. This indicates that our improved transformer, which utilizes the structural characteristics of the graph, has learned more reasonable representations than the original GNN.
The second scenario has more practical significance: the network type to be inferred is not seen during the training process. There are ten sets of experiments in this part, with each set taking one type of network as the test set, while all samples from the other nine types of networks are used as the training set. As shown in Tab. 2, using only FLOPs and memory access information to predict latency is not enough. Suffering from the gap between the accumulation of kernel delays and the actual latency, kernel-based methods (TPU [7] and nn-Meter [6]) perform worse than the GNN-based model NNLP, which directly encodes and predicts
\begin{table}
\begin{tabular}{l|c c|c c c} \hline \hline & \multicolumn{3}{c}{MAPE\(\downarrow\)} & \multicolumn{2}{c}{Acc(10\%)\(\uparrow\)} \\ Test Model & NNLP [3] & Ours & NNLP [3] & Ours \\ & avg / best & avg / best & avg / best & avg / best \\ \hline All & 3.47\% / 3.44\% & 3.07\% / 3.00\% & 95.25\% / 95.50\% & 96.41\% / 96.30\% \\ \hline AlexNet & 6.37\% / 6.21\% & 6.18\% / 5.97\% & 81.75\% / 84.50\% & 81.90\% / 84.00\% \\ EfficientNet & 3.04\% / 2.82\% & 2.34\% / 2.22\% & 98.00\% / 97.00\% & 98.50\% / 100.0\% \\ GoogleNet & 4.18\% / 4.12\% & 3.63\% / 3.46\% & 93.70\% / 93.50\% & 95.95\% / 95.50\% \\ MnasNet & 2.60\% / 2.46\% & 1.80\% / 1.70\% & 97.70\% / 98.50\% & 99.70\% / 100.0\% \\ MobileNetV2 & 2.47\% / 2.37\% & 1.83\% / 1.72\% & 99.30\% / 99.50\% & 99.90\% / 100.0\% \\ MobileNetV3 & 3.50\% / 3.43\% & 3.12\% / 2.98\% & 95.35\% / 96.00\% & 96.75\% / 98.00\% \\ NasBench201 & 1.46\% / 1.31\% & 1.82\% / 1.18\% & 100.00\% / 100.0\% & 100.00\% / 100.0\% \\ SqueezeNet & 4.03\% / 3.97\% & 3.54\% / 3.34\% & 93.25\% / 93.00\% & 95.95\% / 96.50\% \\ VGG & 3.73\% / 3.63\% & 3.51\% / 3.29\% & 92.55\% / 96.50\% & 98.55\% / 96.00\% \\ ResNet & 3.34\% / 3.25\% & 3.11\% / 2.89\% & 98.40\% / 98.50\% & 98.55\% / 99.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Latency prediction on NNLQP [3]. Training and test sets have the same distribution.
the entire network. Benefiting from considering the entire input network and grafting GNN into the transformer, our method achieves the best MAPE and Acc(10%) on the average indicators of the 10 experimental groups. Compared with the second-best method NNLP, the average Acc(10%) of our method shows a marked increase of 8.08%.
### Accuracy prediction
#### 4.3.1 Experiments on NAS-Bench-101
NAS-Bench-101 [1] provides 423624 different cell architectures and the accuracies of the complete neural networks constructed from each cell on different datasets. Following [9], 0.1% and 1% of the whole data is used as the training set and another 200 samples are used for validation. We use Kendall's Tau [36] to evaluate the correlation between the predicted sequence and the real sequence; a higher value indicates a better result. The Kendall's Tau is calculated on the whole dataset or 100 testing samples. We report the average results of our predictor over 10 repeated experiments. **Results** are shown in Tab. 3. When only 424 samples are available for training, our method achieves the highest Kendall's Tau. It achieves 0.773 when tested on the whole testing set, which is 0.8% and 8.9% higher than the Transformer-based model [9] and the GNN-based model [22], respectively. This proves that the modifications we made to the transformer based on inspiration from GNN are effective.
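Kendall's Tau can be computed directly with SciPy; the arrays below are placeholder data standing in for predictor outputs and NAS-Bench labels.

```python
import numpy as np
from scipy import stats

predicted_acc = np.array([0.91, 0.88, 0.93, 0.85])  # predictor outputs
true_acc = np.array([0.92, 0.86, 0.94, 0.87])       # NAS-Bench labels
tau, _ = stats.kendalltau(predicted_acc, true_acc)
print("Kendall's Tau:", tau)
```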
#### 4.3.2 Experiments on NAS-Bench-201
NAS-Bench-201 [21] is another cell-based dataset, which contains 15625 cell-accuracy pairs. Following [9], 5% and 10% of the whole data is used as training set and another 200 samples are used for validation. We use Kendall's Tau [36] computed on the whole dataset as the evaluation metric in this part. Average results of our predictor of 10 runs are reported. **Results** are shown in Tab. 4. The conclusion of this experiment is similar to Sec. 4.3.1. When compared with the second-best method, a substantial improvement (2.5%) of Kendall's Tau can be seen in the setting of training with 781 samples.
\begin{table}
\begin{tabular}{l l c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{Training Samples} \\ & & 0.1\% & 0.1\% & 1\% \\ & & (424) & (424) & (4236) \\ \hline \multicolumn{3}{c}{Test Samples} \\ & & 100 & all & all \\ \hline CNN & ReNAS [10] & 0.634 & 0.657 & 0.816 \\ \hline LSTM & NAO [28] & 0.704 & 0.666 & 0.775 \\ & NAO+SE & 0.732 & 0.680 & 0.787 \\ \hline \multirow{3}{*}{GNN} & NP [22] & 0.710 & 0.679 & 0.769 \\ & NP + SE & 0.713 & 0.684 & 0.773 \\ \cline{1-1} & CTNAS [11] & 0.751 & - & \(\cdot\) \\ \hline \multirow{3}{*}{Transformer} & TNASP [9] & 0.752 & 0.705 & 0.820 \\ \cline{1-1} & TNASP + SE & 0.754 & 0.722 & 0.820 \\ \cline{1-1} & NAR-Former [2] & 0.801 & 0.765 & **0.871** \\ \cline{1-1} & NAR-Former V2 & **0.802** & **0.773** & 0.861 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy prediction on NAS-Bench-101 [1]. “SE” denotes the self-evolution strategy proposed by TNASP [9].
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Test Model} & \multirow{2}{*}{FLOPs} & FLOPs & m-Meter & TPU & BRP & NNLP [3] & Ours \\ & & & \multicolumn{1}{c}{AMCA [6]} & (7) & \multicolumn{1}{c}{JANs [20]} & (avg/ best) & \\ \hline \multirow{8}{*}{MAPE} & AlexNet & 44.65\% & 15.45\% & 7.20\% & 10.55\% & 31.68\% & 10.64\% / 9.71\% & 24.28\% / 18.29\% \\ & EfficientNet & 58.36\% & 53.96\% & 18.93\% & 16.74\% & 51.97\% & 21.46\% / 18.72\% & 13.20\% / 11.37\% \\ & GoogleNet & 30.76\% & 32.54\% & 11.71\% & 8.10\% & 25.48\% & 13.28\% / 10.90\% & 6.51\% / 6.15\% \\ & MnasNet & 40.31\% & 35.96\% & 10.69\% & 11.61\% & 17.26\% & 12.07\% / 10.86\% & 7.16\% / 6.93\% \\ & MobileNetVVV & 37.42\% & 35.27\% & 6.43\% & 12.68\% & 20.42\% & 8.87\% / 7.34\% & 6.73\% / 5.65\% \\ & MobileNetVVVV & 64.64\% & 57.13\% & 35.27\% & 9.97\% & 58.13\% & 14.57\% / 13.17\% & 9.06\% / 8.72\% \\ & NasBench201 & 80.41\% & 33.52\% & 9.57\% & 58.94\% & 13.28\% & 9.60\% / 8.19\% & 9.21\% / 7.89\% \\ & ResNet & 21.18\% & 18.91\% & 15.58\% & 20.05\% & 15.84\% & 7.54\% / 7.12\% & 6.80\% / 6.44\% \\ & SqueezeNet & 29.89\% & 39.19\% & 8.69\% & 24.60\% & 42.55\% & 9.84\% / 9.52\% & 7.08\% / 6.56\% \\ & VGG & 69.34\% & 66.63\% & 19.47\% & 38.73\% & 30.95\% & _F.06\% / 7.17\% & 15.40\% / 14.26\% \\ \hline \multirow{8}{*}{Acc(10\%)} & Average & 47.70\% & 37.26\% & 15.35\% & 21.70\% & 30.76\% & 11.35\% / 10.72\% & 10.35\% / 19.15\% \\ \cline{1-1} & AlexNet & 6.59\% & 40.50\% & 75.44\% & 57.10\% & 15.20\% & 59.0\% & 6.40\% & 24.05\% / 28.06\% \\ \cline{1-1} & EfficientNet & 0.05\% & 0.05\% & 23.40\% & 17.00\% & 10.16\% & 25.37\% / 28.80\% & 44.01\% / 50.20\% \\ \cline{1-1} & GoogleNet & 12.75\% & 9.00\% & 47.40\% & 69.00\% & 12.55\% & 36.30\% / 48.75\% & 80.10\% / 83.35\% \\ \cline{1-1} & MnasNet & 6.20\% & 9.80\% & 60.95\% & 44.65\% & 34.30\% & 55.89\% / 61.25\% & 73.46\% / 81.60\% \\ \cline{1-1} & MobileNetVVV & 6.90\% & 8.05\% & 80.75\% & 33.99\% & 29.05\% & 63.03\% / 72.50\% & 78.46\% / 83.08\% \\ \cline{1-1} & MobileNetVVV & 0.05\% & 0.05\% & 23.45\% & 46.25\% & 13.85\% & 42.36\% / 49.65\% & 68.43\% / 70.50\% \\ \cline{1-1} & NasBench201 & 0.00\% & 10.55\% & 60.65\% & 2.50\% & 43.45\% & 60.70\% / 70.60\% & 63.13\% / 71.70\% \\ \cline{1-1} & ResNet & 26.50\% & 29.80\% & 39.45\% & 27.30\% & 39.80\% & 72.88\% / 76.40\% & 77.24\% / 79.70\% \\ \cline{1-1} & SqueezeNet & 16.10\% & 21.35\% & 36.20\% & 25.65\% & 11.85\% & 56.69\% & 60.40\% & 50.1\% / 9.25\% \\ \cline{1-1} & VGG & 4.80\% & 2.10\% & 25.50\% & 2.60\% & 13.20\% & 71.04\% / 73.75\% & 45.21\% / 45.30\% \\ \cline{1
_Compared to NAR-Former, NAR-Former V2 achieves comparable accuracy prediction performance with fewer parameters. In latency prediction experiments, NNLP outperforms NAR-Former by a significant margin, and NAR-Former V2 exhibits a clear advantage over NNLP (a direct comparison experiment is provided in the supplementary material). In summary, by incorporating the strengths of GNN, the universal representation learning framework NAR-Former V2 is significantly enhanced. NAR-Former V2 addresses the shortcomings of NAR-Former, which was overly sensitive when handling complete network structures, while still retaining the outstanding performance of NAR-Former when handling cell-structured networks._
### Ablation studies
In this section, we conducted a series of ablation experiments on the NNLQP dataset to investigate the impact of various modifications. The results from Rows (2) and (3) in Table 5 indicate that for type encoding without numerical relationships, using one-hot vectors with equidistant properties across different categories is more suitable. Comparing Row (3) in Table 5 with Row (4), we observe that introducing GNN characteristics into the Transformer improves the model's ability to learn effective representations and achieve more accurate predictions compared to using the original GNN. When replacing the FFN with the GFFN module with eight groups (Row (5)), the number of model parameters reduces to approximately one eighth of that in Row (4), without a significant decrease in prediction accuracy. Compared to Row (5), Row (6) demonstrates an increase of 0.35% in ACC(10%) and 0.95% in ACC(5%). This confirms the role of the type-aware enhancement module in further refining and enhancing the rationality of the representations.
To verify our hypothesis regarding the generalization ability of the network and the effectiveness of the proposed graph-aided attention, we conducted comparative experiments in scenarios where the training and testing data have different distributions. The results of these experiments are presented in Table 6. In order to perform the experiment on global attention, we excluded the step of multiplying the adjacency matrix \(A\) in Equation 9, and instead replaced \(S^{l}\) with \(X^{l}X^{lt}/\sqrt{d}\). Results in Table 6 demonstrate that incorporating the adjacency matrix to restrict the scope of attention calculation is indeed beneficial for latency prediction on unseen data. The model utilizing graph-aided attention exhibited a significant improvement of 7.68% in ACC(10%) compared to the model using global attention.
## 5 Conclusion
In this paper, we combine the strengths of the Transformer and GNN to develop a universal neural network representation learning model. This model is capable of effectively processing models of varying scales, ranging from several layers to hundreds of layers. Our proposed model addresses the limitations of previous Transformer-based methods, which exhibited excessive sensitivity when dealing with complete network structures, while still maintaining exceptional performance when handling cell-structured networks. In future work, we will focus on optimizing the design of the representation learning framework and applying it to a broader range of practical applications, such as using the proposed model to search for the best mixed-precision model inference strategies.
\begin{table}
\begin{tabular}{l c c} \hline \hline Attention & MAPE\(\downarrow\) & ACC(10\%)\(\uparrow\) \\ \hline Global & 16.88\% & 36.32\% \\ Local & 13.20\% & 44.01\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: The influence of using different attentions. Test on EfficientNet.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Row & Structure & Op & Op & Graph- & GFFN & TA- & MAPE\(\downarrow\) & Acc(10\%)\(\uparrow\) & Acc(5\%)\(\uparrow\) \\ & Type & Attributes & Attn & & Enhance & & & \\ \hline
1(Baseline) & GNN & One-hot & Real Num & - & - & - & 3.48 & 95.26 & 77.80 \\
2 & GNN & PE & PE & - & - & - & 3.43(-0.05) & 95.11(-0.15) & 79.58(+1.78) \\
3 & GNN & One-hot & PE & - & - & - & 3.33(-0.15) & 95.57(+0.31) & 80.19(+2.39) \\ \hline
4 & Transformer & One-hot & PE & ✓ & - & - & 3.20(-0.28) & 96.00(+0.74) & 81.86(+4.06) \\
5 & Transformer & One-hot & PE & ✓ & ✓ & - & 3.20(-0.28) & 96.06(+0.80) & 81.76(+3.96) \\
6 & Transformer & One-hot & PE & ✓ & ✓ & ✓ & 3.07(-0.41) & 96.41(+1.15) & 82.71(+4.91) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation studies on NNLQP [3]. “PE” denotes position encoding. |
2302.10271 | Thermal Analysis of Malignant Brain Tumors by Employing a Morphological
Differentiation-Based Method in Conjunction with Artificial Neural Network | In this study, a morphological differentiation-based method has been
introduced which employs temperature distribution on the tissue surface to
detect brain tumor's malignancy. According to the common tumor CT scans, two
different scenarios have been implemented to describe irregular shape of the
malignant tumor. In the first scenario, tumor has been considered as a polygon
base prism and in the second one, it has been considered as a star-shaped base
prism. By increasing the number of sides of the polygon or wings of the star,
degree of the malignancy has been increased. Constant heat generation has been
considered for the tumor and finite element analysis has been conducted by the
ABAQUS software linked with a PYTHON script on both tumor models to study
temperature variations on the top tissue surface. This temperature distribution
has been characterized by 10 parameters. In each scenario, 98 sets of these
parameters has been used as inputs of a radial basis function neural network
(RBFNN) and number of sides or wings has been selected to be the output. The
RBFNN has been trained to identify malignancy of tumor based on its morphology.
According to the RBFNN results, the proposed method has been capable of
differentiating between benign and malignant tumors and estimating the degree
of malignancy with high accuracy | Hamed Hani, Afsaneh Mojra | 2023-02-04T22:41:04Z | http://arxiv.org/abs/2302.10271v1 | Thermal Analysis of Malignant Brain Tumors by Employing a Morphological Differentiation-Based Method in Conjunction with Artificial Neural Network
###### Abstract
In this study, a morphological differentiation-based method has been introduced which employs temperature distribution on the tissue surface to detect brain tumor's malignancy. According to the common tumor CT scans, two different scenarios have been implemented to describe irregular shape of the malignant tumor. In the first scenario, tumor has been considered as a polygon base prism and in the second one, it has been considered as a star-shaped base prism. By increasing the number of sides of the polygon or wings of the star, degree of the malignancy has been increased. Constant heat generation has been considered for the tumor and finite element analysis has been conducted by the ABAQUS software linked with a PYTHON script on both tumor models to study temperature variations on the top tissue surface. This temperature distribution has been characterized by 10 parameters. In each scenario, 98 sets of these parameters has been used as inputs of a radial basis function neural network (RBFNN) and number of sides or wings has been selected to be the output. The RBFNN has been trained to identify malignancy of tumor based on its morphology. According to the RBFNN results, the proposed method has been capable of differentiating between benign and malignant tumors and estimating the degree of malignancy with high accuracy.
Keywords: Brain tumor · Tumor differentiation · Morphological analysis · Artificial neural network · Finite element method
## 1 Introduction
Brain tumors are categorized into benign and malignant. Despite considerable advances in the diagnosis and treatment of brain tumors, the mortality from malignant brain tumors is still high Kateb et al. (2009). Malignant tumors are cancerous and are made up of cells that grow out of control Baish and Jain (2000). It has been found that cells in the peripheral areas of a malignant tumor invade the surrounding tissue at a high rate Guarino et al. (2007). Therefore, the border of a malignant tumor has sharp edges that result in a polygonal or star-shaped morphology of the tumor Golston et al. (1992). Malignant tumors are deeply fixed in the surrounding tissue by these sharp edges, and the sharpness increases rapidly since the tumor needs more invasion of the nearby tissue to grow Condeelis and Pollard (2006). On the contrary, benign tumors are often smooth and round and easy to remove since they are not attached to the surrounding tissue Shah et al. (1995).
Surgery is usually the first step in the treatment of the brain tumors. The goal is to remove as much of the tumor
as possible while maintaining neurological function. Surgical treatment of malignant brain tumors is difficult because these tumors do not have clear borders Argani et al. (2001). In order to improve the surgeon's ability to define the tumor border and to avoid injury to vital brain areas in the operating room, image-guided surgery is performed. Intraoperative imaging is a revolutionary tool in modern neurosurgery Illingworth (1995). Intraoperative imaging techniques, especially intraoperative MRI (iMRI), help neurosurgeons achieve the goal of maximum tumor resection with the least morbidity Schulder and Carmel (2003). The main drawbacks of using iMRI are the patient positioning required during the surgery for proper imaging and the limitations on using surgical instruments because of the presence of a strong magnetic field.
In order to avoid the limitations and high expenses of intraoperative imaging, much research has focused on improving the performance of preoperative tumor imaging techniques. These techniques mainly include magnetic resonance imaging (MRI) and computed tomography (CT) scanning. In MRI, an injected contrast agent is used which makes the cancerous tumor brighter than the surrounding normal tissue. During the scan, there is a rapid increase in the signal intensity of a malignant tumor immediately 1 to 2 minutes after the injection, and the intensity decreases in the following minutes Kobayashi and Brechbiel (2005). For a benign mass, the rise in the intensity is much slower. Inaccuracy in the time and intensity measurements results in a probability that benign and malignant tumors overlap in their morphological appearance Barentsz et al. (1996). It was proved that the specificity of MRI to correctly predict a benign tumor is limited. Moreover, the specificity of MRI decreases with decreasing tumor size. Therefore, MRI is usually recommended after a malignancy has been detected by other methods, in order to obtain more information about the extent of the cancer.
CAT or CT scanning is an accurate medical test that combines x-rays with computerized technology to detect malignancy. The main drawback of this method is the use of high doses of radiation, with the possibility of lung or breast cancer as a consequence Lee et al. (2004). X-rays also damage DNA itself Spotheim-Maurizot and Davidkova (2011). CT scanning provides images in shades of grey; occasionally the shades are similar, making it difficult to distinguish between normal and abnormal tissues. To overcome this deficiency, a contrast agent may be injected into the bloodstream. The main problems of the injection include pathological side effects such as nausea and vomiting, hypotension, and extravasation of the contrast, which can be severe enough to require skin grafting Rull and Tidy (2015).
In recent years, much research has focused on improving the procedure of defining tumor morphology in preoperative imaging techniques. Wu et al. (2012) used the level set method to segment ultrasound breast tumors automatically and used a genetic algorithm to select indicative features for a support vector machine (SVM) to detect tumor malignancy; the proposed system could discriminate benign from malignant breast tumors with high accuracy and a short feature extraction time. Huang et al. (2013) evaluated the value of using 3D breast MRI morphological features to differentiate malignant and benign breast tumors; the malignancy of a tumor was assessed using a number of morphological features extracted from breast MRI. Jen and Yu (2015) introduced a method for abnormality detection in mammograms based on an abnormality detection classifier (ADC) that extracts distinctive features, including first-order statistical intensities and gradients; image preprocessing techniques were used to obtain more accurate breast tissue segmentation. Han et al. (2015) provided an improved segmentation algorithm combining fuzzy clustering segmentation and fuzzy edge enhancement; the results showed that fuzzy clustering segmentation is highly efficient for complex brain tissues and that the segmented images provide a solid foundation for 3D processing and better 3D visualization of brain tumors. Ramya and Sasirekha (2015) developed a robust segmentation algorithm to diagnose tumors in MR images, employing a 4th-order partial differential equation to denoise the images and improve the segmentation accuracy. Zhang et al. (2016) proposed a wavelet energy-based method to classify MR images, using a three-stage system that detects characteristics indicative of abnormal brain tissue. Shirazi and Rashedi (2016) used a combination of a support vector machine (SVM) and a mixed gravitational search algorithm (MGSA) to improve the classification accuracy for mammography images. Xia et al. (2016) proposed a novel voting ranking random forests (VRRF) method for image classification and developed a center-proliferation segmentation (CPS) method, which showed good performance and strong robustness in image classification.
In the present study, a palpation-based method was used to scan the brain tissue in order to detect and follow malignancies. The method avoids the main aforementioned drawbacks of the imaging techniques, since it is based only on tissue palpation. It is called "tactile thermography" and maps the thermal parameters of the tissue, mainly the temperature and the heat flux, on the tissue surface. Sadeghi-Goughari and Mojra (2015) estimated the thermal parameters of brain tumors by introducing tactile thermography as a new noninvasive thermal imaging method, in which the brain tissue was loaded mechanically and thermally and the temperature and heat flux variations on the tissue surfaces were recorded. A number of thermal parameters were extracted and optimized by an artificial neural network to verify tumor existence and depth. The main objective of the present study is to evaluate the capability of the proposed method in detecting the sharp morphology of a tumor, indicative of malignancy, and to verify its sensitivity to increasing sharpness, which can be used in follow-up procedures for evaluating tumor growth.
To this end, a malignant tumor is simulated in the brain tissue with two scenarios that represent the invading sharp edges of the tumor. Moreover, the tumor is considered as a heat source in the thermal analysis, since the cell number and the overall cell metabolism are considerably increased in the tumor relative to the normal tissue. By conducting a thermal analysis, the temperature distribution on the tissue surface is obtained for tumors with varying penetration into the surrounding tissue. Using an artificial neural network, a number of variables are extracted from the temperature map and used as features indicative of malignancy.
## 2 Materials and Methods
Figure 1 and Figure 2 show CT scan images of two malignant brain tumors. It can be inferred that while a benign tumor has an almost smooth, round shape, malignancy can be identified by the existence of invading edges. Two major morphologies of malignant brain tumors are outlined by solid lines in Figure 1 and Figure 2. The first morphology resembles a polygon, while the second resembles a star. In the present numerical analysis, tumor malignancy was simulated in two different scenarios:
1. The tumor was considered as an n-sided polygonal based prism. The area of the polygonal base, and consequently the volume of the tumor, as well as the distance of the uppermost vertex (D), were kept unchanged for all numbers of sides (n) (Figure 3).
2. The tumor was considered as a star polygonal based prism. The star was formed from an inscribed circle with a constant radius of R = 10 mm and a varying number of corner vertices. Tumor penetration into the nearby tissue was increased by increasing the number of corner vertices. Each pair of intersecting edges is called a wing. The number of wings was increased while keeping the star polygonal base area (and thus the tumor volume) constant (Figure 4).
The geometrical dimensions of the sample brain tissue containing a malignant tumor are presented in Table 1. To describe the mechanical behavior of the brain tissue, the elasticity parameters obtained by Soza et al. (2005) were used (Table 2). The Young's modulus of the tumor was considered to be 10 times that of the brain tissue, while the Poisson's ratio was the same Shiddiqi et al. (2010).
The study of energy transport in biological systems involves various mechanisms, including conduction, advection and metabolism Gore and Surawicz (2003). In this study, these phenomena were considered in a steady-state thermal
\begin{table}
\begin{tabular}{l l l l l} \hline \hline X & Y & Z & D & R \\ \hline
120 & 60 & 25 & 12 & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dimensions of simulated tissue containing a malignant tumor, all dimensions are in millimeters.
Figure 1: CT scan image of a malignant tumor resembles a polygon presented as scenario 1.
Figure 3: Mid cross section of the brain tissue model (rectangular cuboid) in the ABAQUS environment including (a) a pentagonal based prismatic tumor; (b) a decagonal based prismatic tumor.
Figure 2: CT scan image of a malignant tumor resembles a star presented as scenario 2.
analysis; a constant thermal conductivity equal to 0.6 W/mK was considered for the brain tissue Elwassif et al. (2006), and blood perfusion and metabolic activities were accounted for by assuming a constant heat generation of 100000 W/m\({}^{3}\) in the tumor. Convective heat transfer between the top tissue surface and the surrounding environment was considered, with a convective heat transfer coefficient equal to 20 W/m\({}^{2}\)K. The bottom surface of the brain tissue sample was assumed to have a constant temperature of 33.1\({}^{\circ}\)C, equal to the temperature of the blood vessel assumed to be in contact with the tissue. The side surfaces were insulated.
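For intuition, this thermal balance can be reduced to a one-dimensional steady-state analog and solved directly. The sketch below assumes an ambient temperature and a tumor depth band that the text does not specify; the actual analysis is the full 3D coupled FEA described next.

```python
import numpy as np

# 1D steady conduction analog: -k T'' = q(x), Dirichlet bottom, convective top.
k = 0.6          # tissue conductivity, W/(m K)
h = 20.0         # convective coefficient at the free surface, W/(m^2 K)
T_inf = 25.0     # ambient temperature, deg C (assumed; not given in the text)
T_bottom = 33.1  # blood-vessel temperature at the bottom surface, deg C
q_tumor = 1e5    # metabolic heat generation in the tumor, W/m^3
L = 25e-3        # tissue depth (Z in Table 1), m

n = 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
q = np.where((x > 8e-3) & (x < 18e-3), q_tumor, 0.0)  # assumed tumor band

A = np.zeros((n, n)); b = q.copy()
for i in range(1, n - 1):                 # -k (T[i-1] - 2T[i] + T[i+1])/dx^2 = q[i]
    A[i, i - 1] = A[i, i + 1] = -k / dx ** 2
    A[i, i] = 2.0 * k / dx ** 2
A[0, 0] = 1.0; b[0] = T_bottom            # Dirichlet at the bottom
A[-1, -1] = k / dx + h                    # Robin (convective) at the top:
A[-1, -2] = -k / dx                       # -k (T_N - T_{N-1})/dx = h (T_N - T_inf)
b[-1] = h * T_inf
T = np.linalg.solve(A, b)
print(f"surface temperature: {T[-1]:.2f} C")
```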
ABAQUS software (version 6.14) was employed to perform finite element analysis (FEA) under 3D conditions. A compressive strain was applied to the top surface of the tissue, and the temperature variation was recorded on it. To measure temperature variations while the tissue was loaded mechanically, the "COUPLED TEMP-DISPLACEMENT" step was selected; a "time increment" and a "time period" must be assigned to this step, for which the default values were assumed. For both the rectangular cuboid and the prism, a 4-noded thermally coupled tetrahedral (C3D4T) element type was used.
In addition to the thermal boundary conditions, mechanical boundary conditions were also defined. The bottom surface of the tissue was fully fixed, while the top surface was loaded by a compressive strain equal to 6%, corresponding to a 4 mm compression. Adhesion of the malignant tumor to the surrounding tissue was enforced by applying a 'TIE' constraint between the brain tissue and the tumor.
Mesh independence was examined for the tissue sample containing a decagonal based prismatic tumor. Three stages of mesh refinement were performed, and the temperature values were measured. For the three models with 21796, 29725 and 44029 elements, the computational times were 19.27, 23.44 and 29.82 seconds, respectively. The maximum relative difference between the temperature values of the models with 21796 and 29725 elements was less than 1%, so the computational grid with 21796 elements was selected (Table 3).
Similarly, mesh independence was examined for the tissue sample containing the star polygonal based prismatic tumor with 10 wings. For the three models with 22362, 29869 and 43366 elements, the computational times were 21.49, 23.56 and 30.81 seconds, respectively. The maximum relative error between the temperature values of the models with 22362 and 29869 elements was less than 1%, so the computational mesh with 22362 elements was selected (Table 3).
Figure 4: Mid cross section of the brain tissue model (rectangular cuboid) in the ABAQUS environment including (a) a 4 wing star polygonal based prismatic tumor; (b) a 10 wing star polygonal based prismatic tumor.
The sample model of the brain tissue was numerically analyzed with a tumor with irregular borders included as a prism with a polygonal or star polygonal base. The number of sides of the polygon and the number of wings of the star were varied to study the effects of increasing irregularity on the thermal parameters. Table 4 lists the numbers of sides and wings of the tumor models. A total of 98 polygonal based prisms and 98 star polygonal based prisms were modeled and analyzed in ABAQUS, linked with a PYTHON script to automatically change the number of sides and wings, as sketched below. MATLAB software (version 8.6) was used to plot the thermal outputs and extract the thermal variables that would be employed as the inputs of an artificial neural network.
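A minimal sketch of this batch loop is given below. The `abaqus cae noGUI=<script>` form is the standard ABAQUS batch invocation, but the script name `build_tumor_model.py` and the way the geometry parameters are passed to it are hypothetical stand-ins for the actual PYTHON script.

```python
import subprocess

# One batch run per geometry; "abaqus cae noGUI=<script>" is the standard
# batch invocation. The script name and argument convention are hypothetical.
for n_sides in range(3, 101):             # 98 polygonal models (Table 4)
    subprocess.run(
        ["abaqus", "cae", "noGUI=build_tumor_model.py", "--",
         "polygon", str(n_sides)],
        check=True,
    )
# An analogous loop over range(3, 101) covers the 98 star polygonal models.
```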
### Artificial neural network
Artificial neural networks (ANNs) model the processing capabilities of biological nervous systems. They consist of numerous simple computing components, called neurons, arranged in an input layer, one or more hidden layers and an output layer Steuber and Jaeger (2013). ANNs are mostly used for function approximation, which may be single-variable or multivariable. To recognize input patterns, suitable weights for the connections must be derived; therefore, the network must be trained to produce the desired results.
In this study, a radial basis function (RBF) neural network was used, which employs radial basis functions as the activation functions Park and Sandberg (1991). The output of this network is a linear combination of radial basis functions of the inputs, weighted by neuron parameters. The properties of the employed network are listed in Table 5.
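Because Table 5 does not survive in this copy, the sketch below shows a generic RBF regressor of the kind described: Gaussian radial basis features followed by a linear output layer fitted by least squares. The number of centers and the kernel width are illustrative assumptions, not the values actually used.

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian features: phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, n_centers=20, width=1.0, seed=0):
    """Pick centers from the data, then fit the linear output layer."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    Phi = np.hstack([rbf_features(X, centers, width), np.ones((len(X), 1))])
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, width, weights

def rbf_predict(X, centers, width, weights):
    Phi = np.hstack([rbf_features(X, centers, width), np.ones((len(X), 1))])
    return Phi @ weights

# Usage: X is the 98 x 10 matrix of Fourier coefficients, y the side counts.
```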
The performance of this network was evaluated by the mean squared error (MSE), which is the average of the squares of the errors, where the error is the difference between the desired value of a variable and the value estimated by the network (equation 3). The ideal performance of a neural network occurs when the MSE is zero. The error is defined in equation 1:
\[e_{i}=\hat{x}_{i}-x_{i} \tag{1}\]
where \(\hat{x}_{i}\) is the estimated value and \(x_{i}\) is the desired value, obtained from the ABAQUS runs in this study. The mean of the errors over all data (98 data points for each tumor model) is then calculated by equation 2:
\[\mu=\frac{1}{n}\sum_{i=1}^{n}e_{i} \tag{2}\]
\[MSE=\frac{1}{n}\sum_{i=1}^{n}e_{i}^{2} \tag{3}\]
where \(n\) is the number of data samples. The variance is the average of the squared differences from the mean (equation 4):
\[\sigma^{2}=\frac{1}{n}\sum_{i=1}^{n}(e_{i}-\mu)^{2} \tag{4}\]
To have a better criterion for evaluating the network performance, it is preferable to use the root mean square error (RMSE), which has the same unit as the variable being estimated (equation 5). The square root of the variance is known as the standard deviation (equation 6).
\[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}e_{i}^{2}} \tag{5}\]
\[\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(e_{i}-\mu)^{2}} \tag{6}\]
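These definitions translate directly into code; a minimal implementation of equations 1 to 6 is:

```python
import numpy as np

def network_metrics(x_hat, x):
    """MSE, RMSE, error mean and standard deviation (equations 1 to 6)."""
    e = np.asarray(x_hat) - np.asarray(x)    # equation 1
    mu = e.mean()                            # equation 2
    mse = (e ** 2).mean()                    # equation 3
    rmse = np.sqrt(mse)                      # equation 5
    sigma = np.sqrt(((e - mu) ** 2).mean())  # equations 4 and 6
    return mse, rmse, mu, sigma
```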
## 3 Results and Discussion
Figure 5 displays the temperature contours in the mid cross section of the model obtained from the ABAQUS runs. The presence of a tumor in the tissue increases the temperature in its vicinity. At greater distances from the tumor margin, the temperature decreases, and the rate of decrease is not proportional to the distance from the tumor center. Temperature gradients are considerable in the tumor vicinity. It can also be inferred from Figure 5 that the area of the tumor-affected region depends strongly on the tumor shape: while the tumor effect spreads smoothly over the whole tissue area for a polygonal based prismatic tumor, it has a localized distribution for a star polygonal one. Consequently, the temperature distribution pattern can be correlated with the shape of the malignant tumor.
Temperature variation was studied on the paths defined in Figure 3(a) and Figure 4(a) while varying the number of sides and wings of the tumor (Figure 6). The diagrams show that the location of the maximum temperature on the tissue surface corresponds to the location of the tumor center inside the tissue, where the distance between the tumor and the tissue surface is minimal. Moreover, while the tumor volume was kept unchanged, increasing the number of sides and wings results in an elevation of the maximum surface temperature. These findings offer two opportunities:
1. The temperature variation is indicative of tumor existence. Therefore, the temperature map can be used for the tumor detection task.
Figure 5: Temperature contours in the mid cross section of the brain tissue model including: a pentagonal, heptagonal, decagonal and 15 sided polygonal based prismatic tumor (top from left to right) and a 5 wing, 7 wing, 10 wing, and 30 wing star polygonal based prismatic tumor (bottom from left to right).
2. For a specific malignant tumor, the temperature map can be recorded in successive examinations. Comparison between the maps can be indicative of malignancy progression over a period of time.
In Figure 7, the variation of the maximum temperature on the tissue surface is investigated as the number of sides and wings increases. For the polygonal based tumor model, increasing the number of sides from 3 to 100 raises the maximum surface temperature from \(29.7^{\circ}\)C to \(30.5^{\circ}\)C. For the star polygonal based tumor, increasing the number of wings from 3 to 100 raises the maximum surface temperature from \(30.3^{\circ}\)C to \(30.8^{\circ}\)C. However, the maximum temperature variations are less than \(0.02^{\circ}\)C and \(0.01^{\circ}\)C when the numbers of sides and wings exceed 20, respectively. Therefore, the sensitivity of the surface temperature to variations in the sides and wings of the tumor decreases as the sharpness of the tumor morphology increases.
To find a quantitative criterion for tumor detection and malignancy progression, the temperature curve on the tissue surface was interpolated by a 4th-order Fourier series (equation 7). The fitting error was less than 1% for all tumor models. By fitting this curve to each set of data obtained from the ABAQUS runs, 10 coefficients were extracted for each tumor, namely the \(a_{i}\)'s, \(b_{i}\)'s and \(w\) in equation 7.
Figure 6: Temperature distribution on a path which passes from the center of the top tissue surface for brain tissue sample including a) polygonal based prismatic tumors; b) star polygonal based prismatic tumors.
\[T(x)=a_{0}+\sum_{i=1}^{4}\left(a_{i}\cos(iwx)+b_{i}\sin(iwx)\right) \tag{7}\]
These coefficients were used as the inputs of a radial basis function (RBF) artificial neural network. This network (RBFNN) provided the link between the inputs, namely the coefficients of the surface-temperature interpolating function, and the outputs, namely the corresponding number of sides or wings of the malignant tumor.
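A sketch of the coefficient extraction step follows, assuming SciPy's `curve_fit` in place of the MATLAB fitting actually used; the initial guess `p0` is seeded near the coefficient ranges later reported in Tables 6 and 8.

```python
import numpy as np
from scipy.optimize import curve_fit

def fourier4(x, a0, a1, a2, a3, a4, b1, b2, b3, b4, w):
    """4th-order Fourier series of equation 7."""
    T = a0
    for i, (a, b) in enumerate(zip((a1, a2, a3, a4), (b1, b2, b3, b4)), 1):
        T = T + a * np.cos(i * w * x) + b * np.sin(i * w * x)
    return T

# x_path, T_path: surface positions and temperatures from one FEA run.
# coeffs, _ = curve_fit(fourier4, x_path, T_path,
#                       p0=[29, 1, 0.2, 0.05, 0.02, 0, 0, 0, 0, 52])
```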
### Polygonal based prismatic tumor
The RBFNN was trained to estimate the number of sides using 98 samples of brain tissue containing a polygonal based prismatic tumor with different numbers of sides of the polygonal base. 68 samples from the whole dataset were selected randomly for training the network, and the remaining 30 samples were used for testing.
For a better comprehension of the extent and the distribution of the coefficients of equation 7, the mean \(\bar{x}\), minimum \(x_{min}\), and maximum \(x_{max}\) of these coefficients for the polygonal based tumor model are listed in Table 6. The employed neural network transfer function was "Tansig", which takes values only between \(-1\) and \(1\). Therefore, the coefficients were normalized to the range \([-1,1]\). Normalization of the dataset prevents misinterpretation when assessing the contribution of each input to the neural network.
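This normalization is a column-wise min-max scaling; a minimal sketch:

```python
import numpy as np

def normalize_pm1(C):
    """Column-wise min-max scaling of the coefficient matrix C to [-1, 1]."""
    lo, hi = C.min(axis=0), C.max(axis=0)
    return 2.0 * (C - lo) / (hi - lo) - 1.0
```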
The box plot of the normalized coefficients is shown in Figure 8; a box plot is a standardized way of displaying the distribution of data based on five characteristics: minimum, maximum, median, first quartile, and third quartile. The rectangle extends from the first quartile to the third quartile, a segment inside the rectangle marks the median, and whiskers above and below the box show the maximum and the minimum of the data. It can be inferred from Figure 8 that \(a_{0}\), \(a_{1}\), \(a_{2}\) and \(a_{3}\) have the most compact distributions, which means that these coefficients are not very sensitive to variation of the number of sides and consequently do not contribute much to training the proposed network; their contribution is to provide a general similarity between the patterns of temperature variation. On the contrary, \(b_{1}\), \(b_{3}\) and \(b_{4}\) have the widest distributions. Moreover, based on the position of the median, a coefficient may have a normal or abnormal distribution. According to the box plot, \(a_{2}\), \(a_{4}\), \(b_{3}\), and \(b_{4}\) have the most normal distributions, meaning that as the number of sides varies, the distribution of these
Figure 7: Correlation between the maximum surface temperature and number of sides of the polygonal based prismatic tumor and number of wings of the star polygonal based prismatic tumor.
coefficients is not concentrated below or above the median. A normal distribution facilitates network training and reduces deviations. The distributions of \(a_{0}\), \(a_{1}\) and \(a_{3}\) are far from normal.
The performance of the proposed RBFNN in training was evaluated by four different plots in Figure 9. The top left panel plots the network outputs, which overlap the corresponding real values. The top right panel plots the linear regression of the real (desired) values versus the network outputs. The slope of the line fitted by the RBFNN was close to 1, meaning that the estimated values are very close to the real values. The RMSE was equal to \(4.142\times 10^{-13}\). Table 7 provides the values of the evaluation parameters for both the training and testing datasets.
### Star polygonal based prismatic tumor
For the second tumor model, the RBFNN was trained to estimate the number of tumor wings using 98 samples of tissue containing star polygonal based prismatic tumors. The numbers of training and testing samples were 68 and 30, respectively. The mean \(\bar{x}\), minimum \(x_{min}\), and maximum \(x_{max}\) of the extracted coefficients are listed in Table 8.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Coefficient & \(x_{min}\) & \(\bar{x}\) & \(x_{max}\) \\ \hline \(a_{0}\) & 28.8860 & 29.1522 & 29.1666 \\ \(a_{1}\) & 0.6511 & 0.9437 & 0.9583 \\ \(a_{2}\) & 0.1349 & 0.2394 & 0.2462 \\ \(a_{3}\) & 0.0326 & 0.0701 & 0.0737 \\ \(a_{4}\) & 0.0101 & 0.0203 & 0.0223 \\ \(b_{1}\) & 0.0017 & 0.0042 & 0.0062 \\ \(b_{2}\) & 0.000983 & 0.0034 & 0.0059 \\ \(b_{3}\) & -0.000574 & 0.0018 & 0.0043 \\ \(b_{4}\) & -0.000914 & 0.0010 & 0.0031 \\ \(w\) & 51.7375 & 52.6726 & 53.0506 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Variation of the Fourier series coefficients for the polygonal based prismatic tumor.
Figure 8: Box plot of the normalized coefficients for the polygonal based prismatic tumor.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multicolumn{3}{l}{Training dataset} & \multicolumn{3}{l}{Testing dataset} \\ \hline RMSE & \(\mu\) & \(\sigma\) & RMSE & \(\mu\) & \(\sigma\) \\ \hline \(4.142\times 10^{-13}\) & \(2.099\times 10^{-14}\) & \(4.167\times 10^{-13}\) & \(3.310\times 10^{-12}\) & \(-1.937\times 10^{-12}\) & \(2.727\times 10^{-12}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Evaluating parameters of the proposed RBFNN for the polygonal based prismatic tumor.
The box plot of the normalized coefficients is shown in Figure 10. Similar to the polygonal based prismatic tumor, the box plot shows that \(a_{0}\), \(a_{1}\), and \(a_{2}\) have the most compact distributions and that \(a_{4}\), \(b_{1}\) and \(b_{2}\) have the most normal distributions.
The performance of the RBFNN was evaluated by the prescribed parameters for both the training and testing datasets (Table 9). For both malignant tumor models, the R-value was close to 1, and the insignificant values of the RMSE indicate an excellent fit of the designed network to the numerical data. The RMSE, error mean (\(\mu\)) and standard deviation (\(\sigma\)) are almost zero.
## 4 Conclusion
In the present study, a morphologically malignant tumor was simulated in a cuboid sample of brain tissue, and the thermal effect of the tumor on the surrounding tissue was investigated. Based on CT scan images, the morphology of the malignant tumor was defined by two major scenarios: a polygonal based prismatic tumor and a star polygonal based prismatic tumor. The main characteristic of both morphologies is having corner vertices and multiple edges. The tumor was considered as a biological heat source in the tissue, and the temperature map on the tissue surface was obtained. An
\begin{table}
\begin{tabular}{l c c c} \hline Coefficient & \(x_{min}\) & \(\bar{x}\) & \(x_{max}\) \\ \hline \(a_{1}\) & 0.8480 & 1.0025 & 1.0119 \\ \(a_{2}\) & 0.1652 & 0.2044 & 0.2118 \\ \(a_{3}\) & 0.0370 & 0.0441 & 0.0470 \\ \(a_{4}\) & 0.0087 & 0.0124 & 0.0155 \\ \(b_{1}\) & 0.0015 & 0.0039 & 0.0060 \\ \(b_{2}\) & 0.00019 & 0.0030 & 0.0055 \\ \(b_{3}\) & -0.0015 & 0.0012 & 0.0043 \\ \(b_{4}\) & -0.0019 & 0.0007 & 0.0044 \\ \(w\) & 51.2349 & 51.6295 & 52.1607 \\ \hline \end{tabular}
\end{table}
Table 8: Variation of the Fourier series coefficients for the star polygonal based prismatic tumor.
\begin{table}
\begin{tabular}{l c c c c c} \hline \multicolumn{3}{c}{Training dataset} & \multicolumn{3}{c}{Testing dataset} \\ \hline RMSE & \(\mu\) & \(\sigma\) & RMSE & \(\mu\) & \(\sigma\) \\ \hline \(7.028\times 10^{-13}\) & \(-2.099\times 10^{-13}\) & \(6.742\times 10^{-13}\) & \(2.846\times 10^{-12}\) & \(1.987\times 10^{-12}\) & \(2.072\times 10^{-12}\) \\ \hline \end{tabular}
\end{table}
Table 9: Evaluating parameters of the proposed RBFNN for the star polygonal based prismatic tumor.
Figure 9: Performance plots of data training by the proposed RBFNN for the polygonal based prismatic tumor.
interpolating function was fitted to the temperature map, and ten distinct variables were extracted. Tumor growth and malignancy progression were linked to the increase in the number of corner vertices of the tumor. Subsequently, 98 polygonal based prismatic tumors with different numbers of sides and 98 star polygonal based prismatic tumors with different numbers of wings were modeled and thermally analyzed. The aforementioned variables were extracted for all tumor models. The numerical results showed that the temperature of the normal tissue is affected by the tumor's presence and that the pattern of temperature variation agrees with the tumor morphology. Moreover, the extracted thermal variables for all tumor models were used as the inputs of a radial basis function neural network (RBFNN), and the numbers of sides and wings were estimated. The RBFNN analysis suggests that the proposed method has the potential to be employed as a quantitative tool for measuring malignancy progression over time.
|
2303.01055 | Physics-informed neural networks for solving forward and inverse
problems in complex beam systems | This paper proposes a new framework using physics-informed neural networks
(PINNs) to simulate complex structural systems that consist of single and
double beams based on Euler-Bernoulli and Timoshenko theory, where the double
beams are connected with a Winkler foundation. In particular, forward and
inverse problems for the Euler-Bernoulli and Timoshenko partial differential
equations (PDEs) are solved using nondimensional equations with the
physics-informed loss function. Higher-order complex beam PDEs are efficiently
solved for forward problems to compute the transverse displacements and
cross-sectional rotations with less than 1e-3 percent error. Furthermore,
inverse problems are robustly solved to determine the unknown dimensionless
model parameters and applied force in the entire space-time domain, even in the
case of noisy data. The results suggest that PINNs are a promising strategy for
solving problems in engineering structures and machines involving beam systems. | Taniya Kapoor, Hongrui Wang, Alfredo Nunez, Rolf Dollevoet | 2023-03-02T08:24:27Z | http://arxiv.org/abs/2303.01055v2 | # Physics-informed neural networks for solving forward and inverse problems in complex beam systems
###### Abstract
This paper proposes a new framework using physics-informed neural networks (PINNs) to simulate complex structural systems that consist of single and double beams based on Euler-Bernoulli and Timoshenko theory, where the double beams are connected with a Winkler foundation. In particular, forward and inverse problems for the Euler-Bernoulli and Timoshenko partial differential equations (PDEs) are solved using nondimensional equations with the physics-informed loss function. Higher-order complex beam PDEs are efficiently solved for forward problems to compute the transverse displacements and cross-sectional rotations with less than \(1e-3\) percent error. Furthermore, inverse problems are robustly solved to determine the unknown dimensionless model parameters and applied force in the entire space-time domain, even in the case of noisy data. The results suggest that PINNs are a promising strategy for solving problems in engineering structures and machines involving beam systems.
PINNs, complex system, Euler-Bernoulli beam, Timoshenko beam, double-beam system.
## I Introduction
Complex engineering issues in real-life scenarios are often characterized by the connection between various subsystems and uncertainty in behavior caused by internal and external variables and their interactions. Furthermore, the design and maintenance of complex systems, such as engineering structures and machines, is made challenging by the unpredictable collective behaviors and properties of these concurrently operating and interacting components. These issues are typically difficult to analyze through conventional methods [1]. Most of these complex engineering systems are continuous, and partial differential equation (PDE) models are used to characterize and understand their behavior. These PDE models are used to simulate a wide range of engineering phenomena, ranging from multiple beam systems in suspension bridge cables (Timoshenko beam equations) [2] to catenary-pantograph interactions in railways (damped beam equations) [3] to simulating air turbulence that disrupts flight (Navier-Stokes equations) [4, 5], among many others [6, 7, 8, 9, 10, 11]. Solutions to governing PDEs enable real challenges such as structural health monitoring [12, 13, 14] and optimal structural design [15, 16] to be addressed.
The development of algorithms for diagnostics and prognosis is an issue in maintaining complex engineering systems [1]. Insights could be obtained by solving the forward and inverse problems for the governing PDEs of interest to forecast the system's behavior and minimize unexpected downtimes of complex systems. These equations range in complexity from being extremely nonlinear (Navier-Stokes equation [17]) to incorporating intricate higher-order boundary conditions (fourth-order beam equations [18]). In practice, these equations are too complicated to be solved analytically and must be solved numerically. Numerical methods such as the finite-difference and finite-element methods have been used to approximate the solutions of these PDEs. Despite their success in practice, these methods encounter some difficulties, such as mesh creation, which is more difficult for complex geometries in higher dimensions [19].
In recent years, scientific machine learning, which combines scientific computing with machine learning methodologies to estimate PDEs solutions, has made remarkable developments and has emerged as a viable alternative to the aforementioned numerical methods. The review papers [19, 20, 21] extensively discuss state-of-the-art breakthroughs in scientific machine learning. However, data-driven methods require a large amount of data, which is possibly computationally expensive and susceptible to noise in some engineering systems [22]. One possible way to mitigate the effects of these problems is to collocate the PDE residual at training points, similar to leveraging the physical equation in the training process. The underlying neural networks proposed in [22] are called physics-informed neural networks (PINNs).
PINNs utilize neural networks' universal function approximation property [23] and embed the well-posed physical equations modeled by PDEs in the loss function. Prior knowledge of physical principles works as a regularization agent in neural network training, restricting the space of admissible solutions and improving function approximation accuracy. As a result, given some knowledge of the physical features of the problem and some training data, PINN can be utilized to identify a high-fidelity solution. PINNs have already proven to be a very effective paradigm for approximating solutions of PDEs, as discussed in the review papers [19, 20].
However, several challenges for PINNs have also been found [24]. One such challenge for PINNs is to learn relevant physical phenomena for more complex problems with large coefficients in the physical equation [25]. A sequence-to-sequence learning task was proposed in [25] as a remedy to
this problem. However, this can be computationally expensive when the scale is large. In [26], the importance of using nondimensional equations in the PINN framework was highlighted for cardiovascular blood flow. We build on these works and address the challenge of multiscale complex beam systems. Accordingly, this paper uses nondimensional PDEs instead of dimensional PDEs in the loss function. This provides a way to simulate realistic physical equations with computational tractability.
Measuring quantities of interest in beam systems through lab experiments can prove to be difficult, as it necessitates specialized prototypes, training, and safety during the testing process, increasing the overall cost of the experiment. PINNs offer a simulation-based solution as a mesh-free method that does not require discretizing the domain into a finite number of elements, making it computationally inexpensive compared to numerical methods. PINNs can effectively integrate incomplete or noisy information with prior physical knowledge. The proposed framework converts dimensionalized PDEs to a nondimensionalized form, increasing the suitability for neural networks and enabling the prediction of deflections and rotations for any material, resulting in a more generalizable method.
This paper provides a framework to simulate complex structural systems consisting of two or more basic structural systems connected by an elastic layer. In particular, the forced vibration of two elastically connected beams is studied, which is commonly encountered in the mechanical, construction, and aeronautical industries [6]. These double-beam systems in engineering structures have received significant attention in the scientific community and are considered complex systems. Studies have been conducted to predict the dynamics of these systems under various loading and force conditions, such as those found in papers [27, 28, 29, 30, 31, 32, 33, 34, 35], among others. These studies include the use of analytical and closed-form solutions [31, 36, 37, 38, 39]; however, analytical methods have limitations in applicability, as they may be useful only for specific types of problems and can become complex for systems with many variables or nonlinear equations. Other approaches, such as the state-space method presented in [33, 40], may also be computationally expensive for systems with a large number of states. Additionally, modal analysis methods as presented in [6, 41] have been used to study the natural frequencies and modes of vibration, but they do not provide information on the full response of the system and cannot be used to predict the time-domain response at any instant.
The considered governing equations are modeled using Euler-Bernoulli and Timoshenko theory. In addition to solving the forward problem and computing the physical quantities of interest, we also solve the inverse problem. For the inverse problem, one may not necessarily have complete information about the inputs to the PDEs, such as initial or boundary data, coefficients or applied forces. This lack of knowledge makes the forward problem ill-posed, and subsequently, the forward problem cannot be solved uniquely. In this paper, access to data for quantities of interest is leveraged to determine the PDEs' unknown inputs, for instance, the model parameters and applied forces.
The main contributions of the current paper are as follows,
* To the best of the authors' knowledge, this is the first work to use physics-informed machine learning to solve the forward and inverse problems of Euler-Bernoulli and Timoshenko complex beam models.
* We address a challenge for PINNs in solving multiscale complex beam PDEs and propose a framework for using nondimensional equations in the loss function.
* The proposed nondimensional PINN framework is employed to address ill-posed inverse problems for complex systems and to identify the unknown model parameters and the applied force on the beam components. This is achieved by utilizing data from indirect measurements such as the displacement and cross-sectional rotations of the beams.
* The presented methodology is robust to noise and can accommodate potential uncertainty in the measurement data, making it well suited for real-world applications where data are incomplete or uncertain.
The rest of the article is organized as follows. In Section II, the PINN method is presented for simulating the dimensional Euler-Bernoulli beam equation. Due to the limitations of PINNs in simulating the dimensional Euler-Bernoulli beam equation, an alternative approach using nondimensional equations in the PINN loss function is proposed and successfully applied to the dimensionless Euler-Bernoulli equation in Section III. Section IV first applies the proposed framework to the Timoshenko beam model for solving forward and inverse problems. The forward problem of the Euler-Bernoulli double-beam equation is then solved, and Section IV closes with forward and inverse Timoshenko double-beam system problems. Section V concludes this paper.
## II PINNs for Dimensional PDEs
In this section, the method of PINNs to simulate PDEs is presented in brief using an abstract dimensional PDE. The method is then used to simulate the dimensional Euler-Bernoulli equation. The following abstract dimensional PDE is considered with implicit initial and boundary conditions:
\[\bar{\mathcal{K}}(\bar{x},\bar{t}):=\mathcal{D}[\bar{u}](\bar{x},\bar{t};\bar{\lambda})-\bar{f}(\bar{x},\bar{t})\quad\forall(\bar{x},\bar{t})\in\bar{\Omega}\times\bar{T}\subset\mathbb{R}^{\mathrm{d}}\times\mathbb{R}, \tag{1}\]
where \(\mathcal{D}[\cdot]\) denotes the differential operator, \(\bar{u}\) is the quantity of interest, \(\bar{x}\in\bar{\Omega}\subset\mathbb{R}^{\mathrm{d}}\), \(\bar{t}\in\bar{T}\subset\mathbb{R}\) for \(d\geq 1\), \(\bar{\Omega}\) denotes the spatial domain contained in the d-dimensional Cartesian space, \(\bar{T}\) denotes the temporal domain, \(\bar{\lambda}\in\mathbb{R}\) is the model parameter, \(\bar{f}(\bar{x},\bar{t})\) is the external force, and \(\bar{\mathcal{K}}\) is the notation for the abstract physical equation.
Deep neural networks are the core of PINNs, in which the inputs \((\bar{x},\bar{t})\) are mapped to the output \(\bar{u}\) through an iterative composition of hidden layers. The composition consists of weights (\(w\)), biases (\(b\)), and linear or nonlinear activation function(s) (\(\sigma\)).
To train the neural network, one needs a training set (\(\Delta\)) consisting of spatial boundary points (\(\Delta_{\mathrm{b}}\)), temporal boundary points (\(\Delta_{\mathrm{i}}\)) and interior points (\(\Delta_{\mathrm{int}}\)). As a result, the training set can be written as \(\Delta=\Delta_{\mathrm{i}}\cup\Delta_{\mathrm{b}}\cup\Delta_{\mathrm{int}}\). In this work, \(\Delta_{\mathrm{i}}\), \(\Delta_{\mathrm{b}}\), and \(\Delta_{\mathrm{int}}\) are considered to have \(N_{\mathrm{i}}\), \(N_{\mathrm{b}}\) and
\(N_{\mathrm{int}}\) training points, respectively. The total number of training points is denoted by \(N_{\rm train}\). To approximate the quantity of interest \(\bar{u}\), one minimizes a loss function containing the physical model in the form of the PDE (1) with its initial and boundary conditions. No additional data are required in the loss function for forward problems. The loss function \(\bar{\mathcal{L}}\) is defined as follows:
\[\bar{\mathcal{L}}(\theta)=\underset{\theta}{\mathrm{Min}}(\frac{1}{N_{\rm train }}\sum_{n=1}^{N_{\rm train}}||\bar{\mathcal{K}}(\bar{x}_{\rm n},\bar{t}_{n})|| ^{2}) \tag{2}\]
where \((\bar{x}_{\rm n},\bar{t}_{\rm n})\) represents the training tuple for each n. Minimizing this loss function using a suitable optimization algorithm provides optimal parameters \(\theta=\{w,b\}\).
Now, we employ the PINN algorithm for the dimensional Euler-Bernoulli beam equation and evaluate the corresponding performance. The dynamic Euler-Bernoulli beam equation is given by
\[\rho A\bar{u}_{\bar{t}\bar{t}}+EI\bar{u}_{\bar{x}\bar{x}\bar{x}\bar{x}}=\bar{f}(\bar{x},\bar{t})\quad\bar{x}\in[0,\bar{l}],\bar{t}\in[0,t_{\rm end}] \tag{3}\]
Here, \(\bar{l}\) and \(t_{\rm end}\) refer to the length of the beam and the final time, respectively. This equation models the transverse displacement \(\bar{u}\) of the beam in the space-time domain subject to the external transverse force \(\bar{f}\), as shown in Fig. 1. This work considers a beam of uniform cross section with constant material properties. The parameters \(\rho\) and \(A\) denote the density and cross-sectional area of the beam, respectively. The parameters \(E\) and \(I\) are the Young's modulus and the moment of inertia of the beam, respectively. The external force \(\bar{f}\) acts nonuniformly on the body, and the transverse displacement \(\bar{u}\) is the only unknown in the governing PDE. In addition, \(\bar{u}_{\bar{t}\bar{t}}\) represents the second-order partial derivative of \(\bar{u}\) with respect to \(\bar{t}\), and \(\bar{u}_{\bar{x}\bar{x}\bar{x}\bar{x}}\) represents the fourth-order partial derivative of \(\bar{u}\) with respect to \(\bar{x}\). The goal of the forward problem is to compute the transverse displacement of the beam, supplemented with the initial and boundary conditions. In this study, simply supported beams are considered, which rest on two supports and are free to move horizontally. Real-world applications of simply supported beams include railway tracks and bridges, to name a few. Mathematically, the simply supported boundary condition for (3) is given by
\[\bar{u}(0,\bar{t})=\bar{u}(\bar{l},\bar{t})=\bar{u}_{\bar{x}\bar{x}}(0,\bar{t})=\bar{u}_{\bar{x}\bar{x}}(\bar{l},\bar{t})=0\]
For the numerical experiment, the parameter values of an aluminium-like material, widely used for making beams, are considered in the physical equation. The parameter values taken for the problem are \(\rho=2\times 10^{3}\)kg/m\({}^{3}\), \(A=5\times 10^{-2}\) m\({}^{2}\), \(E=10^{10}\)N/m\({}^{2}\), and \(I=4\times 10^{-4}\)m\({}^{4}\). Additionally, the beam is taken to be \(\pi^{2}\) meters long, and the external force \(\bar{f}\) is taken to be \(EI(1-16\pi^{2})\sin{(\bar{x}/\pi)}\cos(4c\bar{t}/\pi)/\pi^{3}\)N, where \(c=\sqrt{\frac{EI}{\rho A}}\). Taking the final time to be \(\pi^{2}/200\), the PDE to be solved takes the form
\[10^{2}\bar{u}_{\bar{\mathrm{t}}\bar{\mathrm{t}}}+4\times 10^{6} \bar{u}_{\bar{\mathrm{x}}\bar{\mathrm{x}}\bar{\mathrm{x}}\bar{\mathrm{x}}}=\\ 4\times 10^{6}(1-16\pi^{2})\sin{(\bar{x}/\pi)}\cos(800\bar{t}/ \pi)/\pi^{3}, \tag{4}\]
in the domain \(\bar{x}\in[0,\pi^{2}]\) and \(\bar{t}\in[0,\pi^{2}/200]\). For (4) to be well-posed, the initial displacement of the beam is taken to be \(\sin(\bar{x}/l)\) with zero initial velocity, where \(l=\sqrt{\bar{l}}\).
For training the neural network, \(16000\) random training points are generated with the distribution \(N_{\rm i}=2000\), \(N_{\rm b}=4000\), and \(N_{\rm int}=10000\). The neural network consists of \(4\) hidden layers with \(20\) neurons in each hidden layer. The \(\tanh\) activation function, one of the most commonly used activation functions in the PINN literature [20], is chosen. The loss function (2) consists of the initial condition, the boundary condition and the PDE. The PDE residual is weighted in the loss function by the residual parameter \(0.1\)[42]. The L-BFGS optimizer, again one of the most commonly used optimizers in the PINN literature [20], is used to minimize the loss function. As shown in Fig. 2, \(15000\) epochs are performed. However, the figure clearly illustrates that the optimizer does not converge to the solution, and a vast training loss of the order of \(10^{14}\) is obtained. Additionally, the graph shows that the optimizer is stuck in a local minimum and hence will not converge even if the number of epochs is increased for the same neural network configuration.
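A minimal PyTorch sketch of this setup is given below. To keep the numbers readable, it collocates the dimensionless residual \(u_{tt}+u_{xxxx}=f\) derived in the next section; swapping in the coefficients, domain and force of (4) reproduces the diverging dimensional case. The zero-initial-velocity and \(u_{xx}\) boundary terms of the loss are omitted for brevity.

```python
import math
import torch

torch.manual_seed(0)

# Architecture from the text: 4 hidden layers of 20 tanh units each.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def d(out, var):
    """First derivative of `out` with respect to `var` via autograd."""
    return torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]

def f(x, t):
    """Nondimensional force of Section III."""
    return (1 - 16 * math.pi ** 2) * torch.sin(x) * torch.cos(4 * math.pi * t)

# Training points: N_int = 10000, N_b = 4000, N_i = 2000, as in the text.
x_int = (math.pi * torch.rand(10000, 1)).requires_grad_()
t_int = torch.rand(10000, 1).requires_grad_()
x_i = math.pi * torch.rand(2000, 1); t_i = torch.zeros(2000, 1)
x_b = math.pi * torch.randint(0, 2, (4000, 1)).float(); t_b = torch.rand(4000, 1)

opt = torch.optim.LBFGS(net.parameters(), max_iter=500)

def closure():
    opt.zero_grad()
    u = net(torch.cat([x_int, t_int], 1))
    u_tt = d(d(u, t_int), t_int)
    u_xxxx = d(d(d(d(u, x_int), x_int), x_int), x_int)
    pde = ((u_tt + u_xxxx - f(x_int, t_int)) ** 2).mean()
    ic = ((net(torch.cat([x_i, t_i], 1)) - torch.sin(x_i)) ** 2).mean()
    bc = (net(torch.cat([x_b, t_b], 1)) ** 2).mean()
    loss = 0.1 * pde + ic + bc  # residual parameter 0.1, as in the text
    loss.backward()
    return loss

opt.step(closure)
```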
In [14, 43], the problem of free vibrations of the Euler-Bernoulli single-beam equation was successfully solved by PINNs, with the coefficients of the PDE taken to be unity. This shows that PINNs can simulate beam equations and that the challenge lies in the multiscale coefficients that arise when dealing with a real-life physical equation. The non-convergence in our case is due to the large coefficients of the dimensional equation. Consequently, a pressing need arises to transform the dimensional form of the equation into a nondimensional form. It may be possible that for some configurations containing hundreds of hidden layers and neurons this problem could be solved without nondimensionalizing the PDE; however, nondimensionalization provides computational tractability.
Fig. 1: Simply supported beam with varying transverse force.
Fig. 2: L-BFGS training loss vs. the number of epochs for the dimensional Euler-Bernoulli beam equation.
## III PINNs for Nondimensional PDEs
This section presents the proposed framework of using nondimensional equations in the PINN loss function. The method for nondimensionalizing the governing PDE is described first. Then, the algorithms for forward and inverse problems using dimensionless equations in PINNs are presented. To nondimensionalize the abstract PDE given by (1), the following transformations are performed,
\[\bar{x}=\xi_{1}(x);\quad\bar{t}=\xi_{2}(t);\quad\bar{u}=\xi_{3}(u);\quad\bar{f}= \xi_{4}(f), \tag{5}\]
where, \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\), and \(\xi_{4}\) are suitable functions that map the dimensional quantities \(\bar{x}\), \(\bar{t}\), \(\bar{u}\), and \(\bar{f}\) to the corresponding nondimensional quantities. After substituting the above transformations in (1) and introducing the dimensionless parameter \(\lambda\), one obtains
\[\mathcal{K}(x,t):=\mathcal{D}[u](x,t;\lambda)-f(x,t)\quad\forall(x,t)\in \Omega\times T\subset\mathbb{R}^{d}\times\mathbb{R} \tag{6}\]
The proposed framework uses dimensionless equations to simplify and stabilize the problem computationally. By nondimensionalizing the variables and parameters, they are kept within a specific range, resulting in improved performance and generalization of the neural network. Furthermore, dimensionless equations generate more interpretable solutions by eliminating the units of measure, making it easier to understand the underlying physical phenomena and to compare results across different physical systems in the form of ratios and parameters. Hence, using dimensionless equations in PINNs can enhance the neural network's computational stability, generalization, and interpretability.
### _PINN Framework for Forward Problems_
\(\mathcal{K}\), the nondimensional PDE corresponding to the dimensional PDE \(\bar{\mathcal{K}}\), is now used in the loss function \(\mathcal{L}\) defined as follows:
\[\mathcal{L}(\theta)=\underset{\theta}{\mathrm{Min}}(\frac{1}{N_{\mathrm{train }}}\sum_{n=1}^{N_{\mathrm{train}}}||\mathcal{K}(x_{\mathrm{n}},t_{\mathrm{n}}) ||^{2}) \tag{7}\]
A schematic representation of the proposed PINN-based framework is illustrated in Fig. 4. The algorithm for the forward problem can be compactly written as follows.
### _Nondimensional Euler-Bernoulli Beam Equation_
We now test the nondimensional equation in the PINN framework and evaluate the corresponding performance. To nondimensionalize (3), following transformations are used:
\[u=\bar{u}/l;\quad x=\bar{x}/l;\quad t=c\bar{t}/l^{2};\quad f=\bar{f}l^{3}/(EI) \tag{8}\]
Upon substituting these values in (3), one obtains
\[u_{\mathrm{tt}}+u_{\mathrm{xxxx}}=f(x,t)\quad x\in[0,\pi],t\in[0,1] \tag{9}\]
where \(f(x,t)=(1-16\pi^{2})\sin{(x)}\cos(4\pi t)\), with initial and boundary conditions
\[u(x,0)=\sin(x),\quad u_{t}(x,0)=0\]
\[u(0,t)=u(\pi,t)=u_{\mathrm{xx}}(0,t)=u_{\mathrm{xx}}(\pi,t)=0\]
For the error estimation, the relative percentage error (\(\mathcal{R}\)) used in [42] is chosen. Here, \(u^{*}\) is the prediction and \(u\) is the analytical solution.
\[\mathcal{R}=\frac{||u^{*}-u||_{2}}{||u||_{2}}\times 100\]
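For concreteness, the characteristic scales implied by (8) for the aluminium-like beam of Section II, together with this error metric, can be computed as follows (the helper name `relative_error_percent` is ours, not the paper's):

```python
import math
import numpy as np

# Characteristic scales of (8) for the aluminium-like beam of Section II.
rho, A, E, I = 2e3, 5e-2, 1e10, 4e-4
c = math.sqrt(E * I / (rho * A))           # 200.0
l = math.pi                                # l = sqrt(l_bar) with l_bar = pi^2
t_end = c * (math.pi ** 2 / 200) / l ** 2  # dimensional final time maps to 1.0

def relative_error_percent(u_pred, u_exact):
    """Relative percentage error R defined above."""
    return 100.0 * np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)
```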
Fig. 4: PINN framework for beam systems: For forward problems, the loss function comprises the nondimensional PDEs and the boundary and initial conditions. For inverse problems, the nondimensional PDEs are supplemented with extra data and potential initial/boundary conditions.
Fig. 3: Nondimensional Euler-Bernoulli beam equation Color bar represents **Left:** Predicted solution (\(u^{*}\)); **Right:** Absolute error in prediction (\(|u-u^{*}|\))
The same neural network architecture as in the previous case is chosen to solve the resulting nondimensional PDE. A low training loss is obtained, indicating that the PINN is trained successfully. The analytical solution for this case is \(u(x,t)=\sin(x)\cos(4\pi t)\), which is used to quantify the error in the approximated solution. The nondimensional displacement of the Euler-Bernoulli beam is computed to within \(\mathcal{R}=5.3e-4\) percent. The nondimensional displacement predicted by the PINN is shown in Fig. 3(a), and Fig. 3(b) shows the absolute error between the exact and predicted solutions.
The contour plot of the approximate solution shows the dynamics of a simply supported beam under a force, where the x-axis represents the position along the length of the beam, the y-axis represents time, and the colors represent the displacement of the beam. In Fig. 3(a), the red regions indicate high displacement, while the blue regions indicate low displacement. There is a strong displacement at the position of the beam where a substantial force is applied, which is consistent with the known physics of this system. The network accurately captures the displacement behavior of the beam, as is evident from the smooth and continuous transition of colors across the plot.
The contour plot of the error in Fig. 3(b) shows the difference between the approximate solution obtained from the network and the true solution. The x-axis represents the position along the length of the beam, the y-axis represents time, and the colors represent the error; red regions indicate high error, while blue regions indicate low error. The areas where the training point concentration is low account for more of the error, while areas with a higher concentration of training points have relatively low error. One approach to reducing the error is to place more training points in the regions of high error. However, the overall error is low, which indicates that the network accurately captures the displacement behavior of the beam.
From Fig. 3(b), PINNs are found to solve the dimensionless Euler-Bernoulli beam equation accurately; hence, for all further experiments, nondimensional PDEs are simulated using PINNs. Additionally, the nondimensional displacement is henceforth referred to as the displacement for conciseness. Next, the inverse problem-solving strategy using nondimensional equations is described.
### _PINN Framework for Inverse Problems_
The abstract dimensionless PDE described by (6) is well-posed, and the forward problem can be solved uniquely. However, in the case of an inverse problem, the problem is ill-posed and either the initial/boundary conditions or the parameters/forces are unknown. Hence the generic abstract PDE can be re-written as,
\[\mathcal{K}^{{}^{\prime}}(x,t):=\mathcal{D}[u](x,t;\lambda)-f(x,t)\quad \forall(x,t)\in\Omega\times T\subset\mathbb{R}^{d}\times\mathbb{R} \tag{10}\]
The aim of the inverse problem is to predict the unknown parameter \(\lambda\) or the force function \(f(x,t)\) when data are provided for the observable \(u\) in some part of the training domain. In this paper, \(u_{\mathrm{data}}\) denotes the available data for the inverse problem at \(N_{\mathrm{data}}\) points. The prediction of the unknown parameter requires additional information in the loss function, as shown in Fig. 4. For the inverse operation with neural networks, the associated Jacobian matrix must have a nonzero determinant, be invertible, and possess a reasonable ratio between its largest and smallest eigenvalues to guarantee a unique solution and ensure computational stability. The algorithm for the inverse problem is the same as for the forward problem, with a minor modification in the loss function. In addition to the output \(u\), the PINN now predicts the unknown parameter, force, initial or boundary conditions of the physical problem by leveraging the known data. The loss function for the inverse problem can be defined as
\[\mathcal{L}^{{}^{\prime}}(\theta)=\underset{\theta}{\mathrm{Min}} (\frac{1}{N_{\mathrm{train}}}\sum_{n=1}^{N_{\mathrm{train}}}||\mathcal{K}(x_{ \mathrm{n}},t_{\mathrm{n}})||^{2}+\\ \frac{1}{N_{\mathrm{data}}}\sum_{n=1}^{N_{\mathrm{data}}}||u_{ \mathrm{data}}(x_{\mathrm{n}},t_{\mathrm{n}})-u_{\mathrm{pred}}(x_{\mathrm{n} },t_{\mathrm{n}})||^{2}) \tag{11}\]
Next, the algorithm for the PINN framework is presented to solve inverse problems.
```
Goal: To predict the unknown parameter \(\bar{\lambda}\) or function \(\bar{f}(\bar{x},\bar{t})\).
Step 1: Nondimensionalize the governing PDE to approximate the dimensionless parameter \(\lambda\) or function \(f(x,t)\).
Step 2: Choose the training set from the space-time domain \(\Omega\times T\), and augment it with the points (\(x_{\mathrm{data}},t_{\mathrm{data}}\)) at which additional data (\(u_{\mathrm{data}}\)) are provided.
Step 3: Construct a feedforward deep neural network with inputs \((x,t)\) and outputs \(u\), \(\lambda\) or \(f(x,t)\).
Step 4: Minimize the loss function (11) with a suitable optimization algorithm, and find the optimal parameters.
Step 5: Use the optimal parameters to approximate the parameter \(\lambda^{*}\) or the function \(f^{*}(x,t)\).
```
**Algorithm 1** Inverse PINN algorithm
Here, \(u_{\mathrm{pred}}\) denotes the prediction of \(u\) by the neural network. The next section implements the PINN algorithm for forward and inverse problems of dimensionless beam equations.
## IV Numerical Experiments and Discussion
In the following subsections, five numerical experiments are presented. The experiments are conducted progressively, beginning with simple models, such as a single-beam system, and then moving to more complex ones, such as a double beam connected by a Winkler foundation. To verify the proposed method, we first investigate forward and inverse problems for a single beam, which serves as a proof of concept. Then, we apply the method to the more intricate cases of double-beam systems to simulate forward and inverse problems.
### _Timoshenko Beam Forward Problem_
The Euler-Bernoulli theory of beams is widely used in the literature and has been successfully applied in structures such
as the Eiffel Tower and Ferris wheels. However, it does not consider the effects of transverse shear deformations, which are often significant in the vertical displacements of short and thick beams [44]. Timoshenko beam theory provides a mathematical framework for analyzing thick-beam bending [44]. According to Timoshenko theory, upon the action of an external force, the beam undergoes some cross-sectional rotation in addition to transverse displacement. Mathematically, the dynamics are modeled by a coupled system of PDEs with two variables: transverse displacement and cross-sectional rotation. The model is given by
\[\begin{split}\rho I\bar{\theta}_{\bar{t}\bar{t}}-EI\bar{\theta}_{\bar{x}\bar{x}}-kAG(\bar{w}_{\bar{x}}-\bar{\theta})=0\\ \rho A\bar{w}_{\bar{t}\bar{t}}-kAG(\bar{w}_{\bar{x}\bar{x}}-\bar{\theta}_{\bar{x}})=\bar{g}(\bar{x},\bar{t}),\end{split} \tag{12}\]
where \(\rho\), \(A\), \(E\) and \(I\) have the usual meaning as in the case of the Euler-Bernoulli beam; \(k\) is called the Timoshenko shear coefficient; \(G\) is the shear modulus; and \(\bar{g}(\bar{x},\bar{t})\) is the external force acting on the beam. The transverse displacement is \(\bar{w}(\bar{x},\bar{t})\) and \(\bar{\theta}(\bar{x},\bar{t})\) is the cross-sectional rotation of the beam at position \(\bar{x}\) and time \(\bar{t}\). After nondimensionalizing (12) and taking the resulting parameters [45] to be unity, the nondimensional equation can be written as follows:
\[\begin{split}\theta_{\mathrm{tt}}-\theta_{\mathrm{xx}}+(\theta-w_{\mathrm{x}})=0\\ w_{\mathrm{tt}}+(\theta-w_{\mathrm{x}})_{\mathrm{x}}=g(x,t)\end{split} \tag{13}\]
We consider the external force [46] to be \(g(x,t)\) = \(\cos(t)-\frac{\pi}{2}\sin(x)\cos(t)\) and the computational domain to be \(x\in[0,\pi]\) and \(t\in[0,1]\). To make (13) well-posed, the initial and boundary conditions are supplemented as:
\[\theta(x,0)=\frac{\pi}{2}\cos(x)+\left(x-\frac{\pi}{2}\right),\quad\theta_{t} (x,0)=0\]
\[w(x,0)=\frac{\pi}{2}\sin(x),\quad w_{t}(x,0)=0\]
\[\theta(0,t)=\theta(\pi,t)=w(0,t)=w(\pi,t)=0\]
To estimate the error in the approximated solutions, the analytical solution for the considered problem is used, which is
\[w(x,t)=\frac{\pi}{2}\sin(x)\cos(t)\]
When analytical solutions are not available, there are various ways to validate the PINN solution. One approach is to compare the solutions with those obtained using numerical methods such as finite difference, finite element, finite volume or spectral methods. This can be done by comparing the predicted solutions from the PINNs with the solutions from the numerical simulation for the same physical equation. Another approach is to compare the solutions obtained through PINNs with experimental data. One can compare the predicted solutions from the PINNs with values experimentally measured over space and time. Finally, one can validate the solutions obtained through PINNs by checking if they satisfy the known physical constraints of the system. In summary, one can use available experimental data, numerical methods or physical constraints to evaluate the accuracy of the solution obtained using PINNs.
The difficulty of solving a system of PDEs is greater than that of solving a single PDE, but the neural network structure used for the Euler-Bernoulli equation is also successful in approximating solutions for Timoshenko beams. In particular, the transverse displacement of the beam is computed within \(\mathcal{R}=3.3e-4\) percent, and the cross-sectional rotation is
Fig. 5: Timoshenko single beam; Color bar represents **Left:** Cross-sectional rotation (\(\theta^{*}\)); **Right:** Transverse displacement (\(w^{*}\)).
Fig. 6: Timoshenko single beam, absolute error in predictions. **Left:** \(|\theta-\theta^{*}|\); **Right:** \(|w-w^{*}|\).
approximated within \(\mathcal{R}=2.8e-3\) percent. Approximated solutions and absolute errors in predicting the transverse displacement and cross-sectional rotation are presented in Figs. 5 and 6. Fig. 5 demonstrates that when a sinusoidal force is applied to a Timoshenko beam, the beam bends more than it rotates. As indicated by the scale in the figures, the maximum deflection is \(1.44\) and the maximum rotation is \(0.32\). Additionally, the low prediction error demonstrates that even with the increase in PDE complexity, the PINN successfully solves the Timoshenko system with results comparable to those for the Euler-Bernoulli equation.
For comparison with the PINN solution, we use the finite difference method (FDM). Specifically, we employ a central difference scheme to approximate space derivatives and a leapfrog scheme to approximate time derivatives. This approach allows us to solve problems with second-order accuracy in space and time. The results for the Timoshenko beam show that PINNs can achieve a higher level of accuracy than the FDM even with a smaller number of training points: \(30,000\) points are used in the FDM scheme, while only \(16,000\) points are used for training with PINNs, and Table I indicates that PINNs perform better than the FDM.
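For reference, a minimal NumPy sketch of such an FDM solver is given below: central differences in space and a leapfrog step in time for the nondimensional system (13). The grid sizes and the first-order start (justified by the zero initial velocities) are illustrative assumptions.

```python
import numpy as np

nx, nt = 200, 400
x = np.linspace(0.0, np.pi, nx); dx = x[1] - x[0]
dt = 1.0 / nt  # dt < dx here, which keeps the explicit scheme stable

# initial conditions; zero initial velocity justifies the first-order start
th_old = np.pi / 2 * np.cos(x) + (x - np.pi / 2); th_now = th_old.copy()
w_old = np.pi / 2 * np.sin(x); w_now = w_old.copy()

def dxx(u):   # central second difference (interior points only)
    out = np.zeros_like(u); out[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return out

def dx1(u):   # central first difference (interior points only)
    out = np.zeros_like(u); out[1:-1] = (u[2:] - u[:-2]) / (2 * dx)
    return out

for n in range(1, nt):
    g = np.cos(n * dt) - np.pi / 2 * np.sin(x) * np.cos(n * dt)
    # leapfrog: u_new = 2 u_now - u_old + dt^2 * u_tt, with u_tt from (13)
    th_new = 2 * th_now - th_old + dt**2 * (dxx(th_now) - (th_now - dx1(w_now)))
    w_new = 2 * w_now - w_old + dt**2 * (g - (dx1(th_now) - dxx(w_now)))
    th_new[0] = th_new[-1] = w_new[0] = w_new[-1] = 0.0  # boundary conditions
    th_old, th_now = th_now, th_new
    w_old, w_now = w_now, w_new
```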
### _Timoshenko Beam Inverse Problem_
This section addresses the inverse problem for the Timoshenko beam, with the aim of determining the material properties of a beam by leveraging the PDE together with the beam's displacement and rotation data. In structural engineering, the inverse problem of a Timoshenko beam PDE is significant for determining the beam system's structural behavior and for health monitoring. It helps engineers infer the internal material properties and unknown forces from observed responses such as displacement and rotation measurements. The PINN solves this problem by combining the knowledge of physics and deep learning. The PINN uses a neural network to learn the mapping between the unknown parameters of the PDE and observed data while incorporating the constraints of physics in the form of PDEs. This parameter identification provides crucial information for structural diagnosis and repair and helps engineers ensure the safety and stability of structures. The Timoshenko model for parameter estimation is presented as follows.
\[\begin{split}\alpha\theta_{\text{tt}}-\theta_{\text{xx}}+( \theta-w_{\text{x}})=0\\ w_{\text{tt}}+(\theta-w_{\text{x}})_{\text{x}}=g(x,t)\end{split} \tag{14}\]
In the context of the inverse problem of the Timoshenko beam, the PINN is trained on the observed deflections and rotations of the beam, and the material properties are treated as the unknowns to be estimated. In this case, the force \(g(x,t)\) applied to the beam is considered to be known, and the only unknown in the model is \(\alpha\). This makes the problem ill-posed, requiring additional a priori data to predict the unknown parameter. For \(\alpha=1\), the transverse displacement and cross-sectional rotation data obtained from the forward problem are supplied to approximate the parameter value. These data are not error-free and come with \(10^{-3}\) percent error for transverse displacement and \(10^{-4}\) percent error for cross-sectional rotation. As shown in Fig. 7, the additional data are supplied at \(5000\) points (red dots) at five positions on the beam \((x=0.2,0.8,1.8,2.6,3)\). In practice, these data can be collected using sensors installed at the corresponding locations on the beam, as shown in Fig. 7.
To solve the inverse problem, the neural network is trained on \(1600\) random training points with the distribution \(N_{\text{i}}=200\), \(N_{\text{b}}=400\), and \(N_{\text{int}}=1000\). To regularize the PDE term in the loss function, a regularization parameter of \(1\) is chosen [22]. Using the L-BFGS optimizer, \(5000\) iterations are performed, and the other parameters are kept the same as in the forward Timoshenko problem. At \(t=0.5\), the unknown parameter is learned as \(\alpha=1.0136\).
We also compare the PINN with a plain deep neural network (DNN), since using a numerical iterative method for inverse problems is computationally expensive. We utilize a DNN with the same architecture as the PINN to identify the parameter of the Timoshenko single beam. The DNN predicts \(\alpha=0.6124\), whereas the PINN learns \(\alpha=1.0136\); PINNs are thus considerably more accurate than DNNs for the inverse problem of beam systems.
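A hedged sketch of this parameter-identification setup follows: the unknown coefficient \(\alpha\) of (14) is a trainable scalar optimized jointly with the network under the PDE residual plus a data-misfit term at the sensor locations. The architecture, the initial guess for \(\alpha\), and the variable names are illustrative assumptions.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))        # outputs (theta, w)
alpha = torch.nn.Parameter(torch.tensor(0.5))            # unknown in (14)

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

def inverse_loss(x, t, xt_obs, theta_obs, w_obs):
    theta, w = net(torch.cat([x, t], dim=1)).split(1, dim=1)
    r1 = alpha * grad(grad(theta, t), t) - grad(grad(theta, x), x) \
         + (theta - grad(w, x))                          # first eq. of (14)
    g = torch.cos(t) - torch.pi / 2 * torch.sin(x) * torch.cos(t)
    r2 = grad(grad(w, t), t) + grad(theta - grad(w, x), x) - g
    theta_p, w_p = net(xt_obs).split(1, dim=1)           # sensor predictions
    data = ((theta_p - theta_obs) ** 2).mean() + ((w_p - w_obs) ** 2).mean()
    return (r1 ** 2).mean() + (r2 ** 2).mean() + data

opt = torch.optim.LBFGS(list(net.parameters()) + [alpha])
# opt.step(closure) with a closure returning inverse_loss(...) recovers alpha.
```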
However, there are several issues that one may need to take care of while solving inverse problems through the presented framework. First, to avoid overfitting, the minimum training data points required to solve the problem should be determined empirically by gradually increasing the number of training points until the model's performance is satisfactory. Second, for some physical problems, noisy data may lead to nonconvergence of the optimization algorithm. Hence, suitable
Fig. 7: Data to learn the parameters for the Timoshenko single-beam: **Blue dots** Collocation points. **Red dots** Additional data points of rotations (\(\theta\)) and displacements (\(w\)). **Black dots** Initial and boundary points.
filtering or preprocessing of data may be required before using the PINN framework. Finally, for every run of the neural network, one may learn a different parameter or function value; due to the convergence of the optimizers at different local minima, it may be useful to find the statistics of the inverse problem solution through multiple runs.
Experimental results for the single-beam equations illustrate that PINNs can efficiently solve forward and inverse problems for single beams. We next investigate the ability of PINNs to handle more complex systems, specifically double-beam systems connected by a Winkler foundation, as depicted in Fig. 10.
### _Euler-Bernoulli Double-Beam Forward Problem_
In this section, and for all further experiments, forced transverse vibrations of two parallel beams are studied. Structurally, two parallel beams of equal lengths joined by a Winkler massless foundation are considered. Both beams are considered slender and have homogeneous material properties. The transverse displacement of both beams is governed by the following system of PDEs [29]:
\[\begin{split} m_{1}\bar{w}_{1_{\bar{t}\bar{t}}}+K_{1}\bar{w}_{1_{\bar{x}\bar{x}\bar{x}\bar{x}}}+k(\bar{w}_{1}-\bar{w}_{2})&=\bar{f}_{1}(\bar{x},\bar{t})\\ m_{2}\bar{w}_{2_{\bar{t}\bar{t}}}+K_{2}\bar{w}_{2_{\bar{x}\bar{x}\bar{x}\bar{x}}}+k(\bar{w}_{2}-\bar{w}_{1})&=\bar{f}_{2}(\bar{x},\bar{t})\end{split} \tag{15}\]
Here, \(\bar{w}_{1}\) and \(\bar{w}_{2}\) are the beam displacements for the first and the second beams respectively. The distributed continuous forces acting transversely on the beams are \(\bar{f}_{1}\) and \(\bar{f}_{2}\) as shown in Fig. 10. The product of the density and the cross-sectional area of the beams is given by \(m_{1}=\rho_{1}A_{1}\) for the first beam and \(m_{2}=\rho_{2}A_{2}\) for the second beam. The parameters \(K_{1}\) and \(K_{2}\) denote the flexural rigidity of the beams and are given by \(K_{1}=E_{1}I_{1}\) and \(K_{2}=E_{2}I_{2}\). The stiffness modulus of
Fig. 10: Double beam system connected by a Winkler foundation.
Fig. 9: Derived quantities for the Euler-Bernoulli double beam. Scattered points represent the exact solution and the continuous line refers to the derived solution. **Top:** First beam **Left** Bending moment; **Mid** Velocity; **Right** Acceleration. **Bottom:** Second beam **Left** Bending moment; **Mid** Velocity; **Right** Acceleration.
the Winkler elastic layer connecting both beams is given by \(k\). For simplicity, we consider \(m_{1}=m_{2}\), and \(K_{1}=K_{2}\), and nondimensionalize (15). After taking all the resulting parameters to be unity, the nondimensional equation has the same form as (15) with unit coefficients. The initial conditions are,
\[w_{1}(x,0)=\sin(x),\quad w_{1_{\text{t}}}(x,0)=0\] \[w_{2}(x,0)=\frac{\pi}{2}\sin(x),\quad w_{2_{\text{t}}}(x,0)=0\]
All four ends of the beams are assumed to be simply supported, expressed as,
\[w_{1}(0,t)=w_{1}(\pi,t)=w_{1_{\text{xx}}}(0,t)=w_{1_{\text{xx}}} (\pi,t)=0\] \[w_{2}(0,t)=w_{2}(\pi,t)=w_{2_{\text{xx}}}(0,t)=w_{2_{\text{xx}}} (\pi,t)=0\]
The external acting force is
\[f_{1}(x,t)=\left(1-\frac{\pi}{2}\right)\sin(x)\cos(t)\] \[f_{2}(x,t)=\left(\frac{\pi}{2}-1\right)\sin(x)\cos(t)\]
For the considered problem, the analytical solution is given by,
\[w_{1}(x,t)=\sin(x)\cos(t),\quad w_{2}(x,t)=\frac{\pi}{2}\sin(x)\cos(t)\]
In addition to computing the beam displacements, derived quantities such as velocity, acceleration, and bending moment are also computed for this problem. These derived quantities help in the prognosis and diagnostics of the system. For instance, the bending moment quantifies the bending effect when an external force is applied to a structural element. The beam is the most common structural member vulnerable to bending moments because it can bend at any point along its length when subjected to an external force.
For simulating Euler-Bernoulli double beams, the same neural network architecture as for the single Euler-Bernoulli beam is considered. The only change is in the residual parameter, which is \(1\) for this case. The results are illustrated in Figs. 8-9 and Table II. The absolute difference between the PINN predicted solution and the exact solution for the first beam is approximately \(10^{-4}\), and for the second beam, it is approximately \(10^{-3}\), as shown in Fig. 8. The bending moment, velocity and acceleration are computed using the neural network's autodifferentiation and backpropagation features. Table II describes the efficiency in the computation of these quantities at \(t=1\) for both beams. The relative percent error in computing the transverse displacement of the beams is on the order of \(10^{-5}\), and for acceleration, this error is on the order of \(10^{-2}\), which is very low and shows the potential
Fig. 11: Timoshenko double beam. Scattered points represent the exact solution, and the continuous line refers to the predicted solution. **Top:** First beam **Left** Displacement (\(w_{1}\)); **Right** Rotation (\(\theta_{1}\)). **Bottom:** Second beam **Left** Displacement (\(w_{2}\)); **Right** Rotation (\(\theta_{2}\)).
of physics-informed learning. Fig. 9 illustrates the computed velocity, bending moment, and acceleration of both beams.
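These derived quantities require no extra training: once the displacement network has converged, velocity, acceleration, and the bending moment follow from repeated automatic differentiation, as the sketch below illustrates. The two-output stand-in network and the unit flexural rigidity are simplifying assumptions.

```python
import torch

# stand-in for the trained double-beam PINN (two outputs: w1, w2)
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

x = (torch.rand(100, 1) * torch.pi).requires_grad_(True)
t = torch.rand(100, 1).requires_grad_(True)
w1 = net(torch.cat([x, t], dim=1))[:, 0:1]   # first-beam displacement
velocity = grad(w1, t)                       # w1_t
acceleration = grad(velocity, t)             # w1_tt
bending_moment = grad(grad(w1, x), x)        # proportional to w1_xx
```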
### _Timoshenko Double-Beam Forward Problem_
The double-beam system modeled by Euler-Bernoulli theory can also be modeled using Timoshenko theory under the same assumptions as described for the single Timoshenko equations [28]. In addition to providing the transverse displacement of the beams, Timoshenko theory also provides the cross-sectional rotation of both beams through the system of PDEs [28] given by
\[\begin{split} kA_{1}G(\bar{\theta}_{1_{\bar{x}}}-\bar{w}_{1_{\bar{x}\bar{x}}})+\rho A_{1}\bar{w}_{1_{\bar{t}\bar{t}}}+K(\bar{w}_{1}-\bar{w}_{2})&=\bar{f}_{1}(\bar{x},\bar{t})\\ EI_{2}\bar{\theta}_{2_{\bar{x}\bar{x}}}+GA_{2}k(\bar{w}_{2_{\bar{x}}}-\bar{\theta}_{2})-\rho I_{2}\bar{\theta}_{2_{\bar{t}\bar{t}}}&=0\\ kA_{2}G(\bar{\theta}_{2_{\bar{x}}}-\bar{w}_{2_{\bar{x}\bar{x}}})+\rho A_{2}\bar{w}_{2_{\bar{t}\bar{t}}}+K(\bar{w}_{2}-\bar{w}_{1})&=\bar{f}_{2}(\bar{x},\bar{t})\\ EI_{1}\bar{\theta}_{1_{\bar{x}\bar{x}}}+GA_{1}k(\bar{w}_{1_{\bar{x}}}-\bar{\theta}_{1})-\rho I_{1}\bar{\theta}_{1_{\bar{t}\bar{t}}}&=0 \end{split} \tag{16}\]
where \(\bar{w}_{i}(\bar{x},\bar{t})\) and \(\bar{\theta}_{i}(\bar{x},\bar{t})\), \(i=1,2\), denote the transverse displacement and cross-sectional rotation of the beams, respectively. \(K\) is the stiffness modulus of the Winkler elastic layer, \(G\) is the shear modulus, and \(k\) is the Timoshenko shear coefficient. The rest of the parameters have the usual meanings as described earlier. For simplicity, we consider \(A_{1}=A_{2}\) and \(I_{1}=I_{2}\) and nondimensionalize (16). With some additional assumptions, the nondimensional equation has the same form as (16) with unit coefficients. For the numerical experiment, the initial state of the double-beam system is taken to be
\[\theta_{1}(x,0)=\left(\frac{\pi}{2}\cos(x)+\left(x-\frac{\pi}{2} \right)\right),\quad\theta_{1_{\text{t}}}(x,0)=0\] \[w_{1}(x,0)=\frac{\pi}{2}\sin(x),\quad w_{1_{\text{t}}}(x,0)=0\] \[\theta_{2}(x,0)=\frac{2}{\pi}\left(\frac{\pi}{2}\cos(x)+\left(x -\frac{\pi}{2}\right)\right),\quad\theta_{2_{\text{t}}}(x,0)=0\] \[w_{2}(x,0)=\sin(x),\quad w_{2_{\text{t}}}(x,0)=0\]
Simply supported boundary conditions are provided to make the problem well-posed,
\[\theta_{1}(0,t)=\theta_{1}(\pi,t)=w_{1}(0,t)=w_{1}(\pi,t)=0\] \[\theta_{2}(0,t)=\theta_{2}(\pi,t)=w_{2}(0,t)=w_{2}(\pi,t)=0\]
Here, \(f_{1}(x,t)\), \(f_{2}(x,t)\) and the analytic solutions are as follows,
\[f_{1}(x,t)=\cos(t)(1-\sin(x)),\] \[f_{2}(x,t)=\frac{2}{\pi}\cos(t)-\frac{\pi}{2}\sin(x)\cos(t)\] \[\theta_{1}(x,t)=\left(\frac{\pi}{2}\cos(x)+\left(x-\frac{\pi}{2} \right)\right)\cos(t)\] \[\theta_{2}(x,t)=\frac{2}{\pi}\left(\frac{\pi}{2}\cos(x)+\left(x -\frac{\pi}{2}\right)\right)\cos(t)\] \[w_{1}(x,t)=\frac{\pi}{2}\sin(x)\cos(t),\quad w_{2}(x,t)=\sin(x) \cos(t)\]
Two experiments are performed, varying the number of training points, as shown in Table III. Table IV shows the relative percent error in approximating the transverse displacement and cross-sectional rotations for both beams. For cross-sectional rotations \(\theta_{1}\) and \(\theta_{2}\), the magnitude of the percent error remains the same even for fewer training points.
Using a large number of training points can increase the training time and may not be feasible for problems with many parameters. In these cases, using fewer training points can lead to less accurate solutions, but they can be obtained relatively faster. This approach allows engineers to make informed decisions about the parameters, and once optimal parameters have been identified, forward solutions can be recalculated with higher accuracy by using more training points. This is referred to as training with fewer points for the forward problem. The
absolute difference between the predicted and exact solutions of \(\theta_{1}\), \(w_{1}\), \(\theta_{2}\) and \(w_{2}\), even for \(1600\) training points, is very small, as shown in Figs. 11 and 12. Fig. 11 presents the PINN prediction for a double Timoshenko beam. The scattered points refer to the exact solution, and the continuous line represents the predicted solution. The force is applied uniformly to both beams; however, the deflection and rotation of the first beam are greater than those of the second beam. The results in Fig. 12 indicate that, for the second beam, a larger number of training points (16000) results in a more accurate prediction of deflection and rotation than a smaller number of training points (1600). Conversely, for the first beam, a smaller number of training points (1600) results in a more accurate prediction of the quantity of interest than a larger number of training points (16000). In any case, the difference in absolute error is relatively small, demonstrating that even with fewer training points, PINNs can still produce accurate predictions.
### _Timoshenko Double-Beam Inverse Problem_
The applied force on structural systems is critical for structural design and condition assessment. In design, control, and diagnosis, accurate estimation of dynamic forces acting on a structure is essential. These details can be used to evaluate the structural condition. For example, understanding the impact of heavy vehicles on bridge structures can aid in detecting early damage to them. Indirect force determination is of special interest when the applied forces cannot be measured directly, while the responses can be measured easily.
For the inverse problem, three distinct experiments are performed on (16). First, the unknown parameter is learned from the Timoshenko double-beam system. We consider the unknown parameter to be \(\rho A_{1}\) from (16). For the value of \(\rho A_{1}=1\), the data for transverse displacement and cross-sectional rotation are provided at some points in the computational domain. Second, the unknown applied function on the first beam is learned by providing noise-free simulated displacement and cross-sectional rotation data. For this case, all other parameters, initial and boundary conditions are considered to be known, and only the function \(f_{1}(x,t)\) is unknown. Third, the same force function is predicted by providing noisy displacement and cross-sectional rotation data. The data generated for learning the function in the second case are corrupted with noise to be used in the third case. The exact solution for the function to be learned in the second and third cases is \(\cos(t)(1-\sin(x))\).
The inverse problem in engineering refers to the process of estimating unknown parameters or functions from a set of measured data. In PINNs, the inverse problem is usually solved by training a neural network to fit the measured data and the known physical laws. However, the measured data can be affected by various sources of noise, which can make estimation of the quantity of interest more challenging. The noise can make the measured data unreliable, and the neural network may not be able to accurately estimate the unknown
Fig. 14: Data to learn material properties for the Timoshenko double beam: **Blue dots** Collocation points. **Red dots** Additional data points of displacement and rotation for the double beam at one location. **Black dots** Initial and boundary points.
Fig. 13: Timoshenko double-beam inverse problem: absolute error in the prediction of force when the additional data of rotation and deflections provided at five locations has **left:** no noise **right:** 20 percent Gaussian noise.
Fig. 15: Data to learn force for the Timoshenko double beam: **Blue dots** Collocation points. **Red dots** Additional data points of displacement and rotation for the double beam at six different locations. **Black dots** Initial and boundary points.
parameters or functions. In such a scenario, the optimizer of the neural network may fail to converge, or may converge to poor local minima.
The same neural network architecture is used as in the forward double-beam Timoshenko problem, with residual parameter \(1\) to regularize the physical equation in the loss function. Here, \(2500\) epochs are performed using the L-BFGS optimizer to train the neural network. For learning the parameter, \(5000\) data points are provided at \(x=1.8\), as shown in Fig. 14. The exact value of the unknown parameter is \(\rho A_{1}=1\) in (16), and the predicted value of the parameter using the PINN framework is 1.0208, which is close to the desired value. Even for a system of four PDEs, by only providing data at one particular beam location, the unknown parameter is learned successfully using PINNs. This shows that PINNs can handle large complex systems of PDEs efficiently.
The function \(f_{1}(x,t)\), the applied force on the first beam is predicted in the second experiment. As illustrated in Fig. 15, the data for transverse displacement and cross-sectional rotation are provided at \(6\) different locations with \(5000\) data points at each location.
For the third experiment, the data provided for learning the unknown function \(f_{1}(x,t)\) are corrupted with \(10\%\) and \(20\%\) Gaussian noise, and the corresponding performance in learning the function is shown in Table V. Even with \(10\%\) and \(20\%\) noise, the relative percent error between the analytic and predicted force remains low, as seen in Table V. Fig. 13 shows the force prediction along the beam when rotation and deflection observations are available at five points. The results demonstrate that the PINN is more precise in its predictions when the data are free from noise compared to when they are noisy. Despite the presence of noise in the data, the absolute error remains within the magnitude of \(10^{-2}\), which is comparable to the error observed when data are not noisy. To be more precise, Fig. 13 shows the absolute difference between the PINN-predicted and exact force at \(t=0.5\) with \(0\) percent and \(20\) percent noise. Even with \(20\) percent noise, the unknown force is learned with less than \(1\%\) error over the entire space-time domain, demonstrating that the PINN is a very accurate and robust approach.
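The function-learning experiments above can be sketched as follows: a second network represents the unknown force \(f_{1}(x,t)\) and is trained jointly with the solution network under the PDE constraint and the (possibly noisy) sensor data. For brevity, only the residual of the unit-coefficient first-beam displacement equation (consistent with the nondimensional form used above) is shown; the remaining residuals of (16) would be added analogously, and all names and sizes are illustrative assumptions.

```python
import torch

sol_net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 4))  # (theta1, w1, theta2, w2)
f1_net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))   # unknown force f1(x, t)

def grad(y, x):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

def loss(x, t, xt_s, theta1_obs, w1_obs):
    out = sol_net(torch.cat([x, t], dim=1))
    theta1, w1, theta2, w2 = out.split(1, dim=1)
    # unit-coefficient first-beam displacement equation with f1 unknown:
    # w1_tt + (theta1 - w1_x)_x + (w1 - w2) = f1
    r = grad(grad(w1, t), t) + grad(theta1 - grad(w1, x), x) \
        + (w1 - w2) - f1_net(torch.cat([x, t], dim=1))
    s = sol_net(xt_s)                        # predictions at sensor points
    data = ((s[:, 0:1] - theta1_obs) ** 2).mean() \
         + ((s[:, 1:2] - w1_obs) ** 2).mean()
    return (r ** 2).mean() + data            # residuals of the remaining
                                             # equations would be added too
```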
The minimum number of data points required to estimate the model parameters depends on several factors, such as the complexity of the physics, the number of physical parameters in the model, and the quality of the data. More data points and more complex physics require more neural network capacity, resulting in a larger neural network with more hyperparameters. In practice, too few data points relative to this capacity can lead to overfitting. The minimum training data points required for a PINN framework are determined empirically by gradually increasing the number of training points until the model's performance is satisfactory.
Finally, a sensitivity analysis is carried out to examine the influence of the input variables, specifically the displacement and rotation, on the output variable, the force. The analysis involves adding \(20\%\) Gaussian noise to the displacement data while no noise is added to the rotation data. The resulting mean error in the predicted force is 0.14313413. In contrast, when \(20\%\) noise is introduced to the rotation data with the displacement data remaining unaltered, the mean error in the predicted force is \(0.204627\). The results of this analysis show that the force is more sensitive to the rotation data than to the displacement data.
## V Conclusions
The design and maintenance of complex structural systems are challenging due to the multiscale interaction of their components. It is desirable to predict the behavior of these complex systems by solving the governing model of interest. Recently, PINNs have emerged as a viable method for simulating PDEs. In this work, we propose using the PINN algorithm with the nondimensionalization step aiding in the learning procedure for complex beam systems. The PINN framework successfully solves the forward and inverse problems for nondimensional single and double-beam systems. Based on the numerical experiments, the following conclusions are drawn.
First, the relative percent error in computing the beam displacement does not increase with increasing model complexity when solving the forward problem. In fact, for both Euler-Bernoulli and Timoshenko theory, the error decreases by an order of magnitude for double-beam systems compared to single-beam systems. In addition, the error in computing the bending rotation is comparable for single and double Timoshenko beam systems. This nonincrease in error as the model complexity increases suggests that the PINN framework is appropriate for simulating large-scale systems with multiple connected components.
Second, it is demonstrated that PINNs precisely discover the unknown force function and model parameters through their inverse problem-solving capability. The proposed algorithm successfully learns the model parameter with less than \(3\%\) error for the single Timoshenko beam. In addition, for the double beam Timoshenko system, the unknown function is approximated on the whole space-time domain with less than \(0.05\%\) error, demonstrating the algorithm's effectiveness for solving inverse problems.
Third, physical quantities such as velocity, acceleration, and bending moment characterize the system's behavior. Even though the derived quantities are not directly trained in the neural network, they are approximated with less than \(2e-2\%\) error for the Euler-Bernoulli double-beam system.
Fourth, the algorithm's ability to use fewer training points in forward problems and to accommodate noisy data in inverse problems is demonstrated. The obtained results show that even with \(1600\) training points, the double Timoshenko beam displacement is predicted on the entire space-time domain with less than \(5e-3\%\) error. In the case of the inverse problem, the force function is discovered with less than \(0.2\%\) error even when the data used in the learning procedure contain \(20\%\)
Gaussian noise. These findings imply that the algorithm is accurate and robust under the tested noise levels.
To summarize, PINNs enable the simulation of complex structural systems with multiple interacting components efficiently, accurately, and robustly. In the future, this approach could be extended to estimate displacements for various input forces and mechanical vibration modes and incorporate robust methods to account for stochasticities.
## Acknowledgment
The authors would like to express their appreciation to the anonymous reviewers and editors for their valuable comments and feedback, which have significantly improved the quality of this work. The authors extend their appreciation to Prof. Siddhartha Mishra for his insightful suggestion to compare our proposed methodology with numerical methods.
|
2306.04095 | PANE-GNN: Unifying Positive and Negative Edges in Graph Neural Networks
for Recommendation | Recommender systems play a crucial role in addressing the issue of
information overload by delivering personalized recommendations to users. In
recent years, there has been a growing interest in leveraging graph neural
networks (GNNs) for recommender systems, capitalizing on advancements in graph
representation learning. These GNN-based models primarily focus on analyzing
users' positive feedback while overlooking the valuable insights provided by
their negative feedback. In this paper, we propose PANE-GNN, an innovative
recommendation model that unifies Positive And Negative Edges in Graph Neural
Networks for recommendation. By incorporating user preferences and
dispreferences, our approach enhances the capability of recommender systems to
offer personalized suggestions. PANE-GNN first partitions the raw rating graph
into two distinct bipartite graphs based on positive and negative feedback.
Subsequently, we employ two separate embeddings, the interest embedding and the
disinterest embedding, to capture users' likes and dislikes, respectively. To
facilitate effective information propagation, we design distinct
message-passing mechanisms for positive and negative feedback. Furthermore, we
introduce a distortion to the negative graph, which exclusively consists of
negative feedback edges, for contrastive training. This distortion plays a
crucial role in effectively denoising the negative feedback. The experimental
results provide compelling evidence that PANE-GNN surpasses the existing
state-of-the-art benchmark methods across four real-world datasets. These
datasets include three commonly used recommender system datasets and one
open-source short video recommendation dataset. | Ziyang Liu, Chaokun Wang, Jingcao Xu, Cheng Wu, Kai Zheng, Yang Song, Na Mou, Kun Gai | 2023-06-07T01:31:12Z | http://arxiv.org/abs/2306.04095v2 | # (Technical Report)
###### Abstract.
Recommender systems play a crucial role in addressing the issue of information overload by delivering personalized recommendations to users. In recent years, there has been a growing interest in leveraging graph neural networks (GNNs) for recommender systems, capitalizing on advancements in graph representation learning. These GNN-based models primarily focus on analyzing users' positive feedback while overlooking the valuable insights provided by their negative feedback. In this paper, we propose PANE-GNN, an innovative recommendation model that unifies **P**ositive **A**nd **N**egative **E**dges in **G**raph **N**eural **N**etworks for recommendation. By incorporating user preferences and dispreferences, our approach enhances the capability of recommender systems to offer personalized suggestions. PANE-GNN first partitions the raw rating graph into two distinct bipartite graphs based on positive and negative feedback. Subsequently, we employ two separate embeddings, the interest embedding and the disinterest embedding, to capture users' likes and dislikes, respectively. To facilitate effective information propagation, we design distinct message-passing mechanisms for positive and negative feedback. Furthermore, we introduce a distortion to the negative graph, which exclusively consists of negative feedback edges, for contrastive training. This distortion plays a crucial role in effectively denoising the negative feedback. The experimental results provide compelling evidence that PANE-GNN surpasses the existing state-of-the-art benchmark methods across four real-world datasets. These datasets include three commonly used recommender system datasets and one open-source short video recommendation dataset.
Recommender system; Negative feedback; Graph neural networks
in Figure 2) reveal a decrease in performance compared to NGCF that does not utilize negative feedback. It suggests that directly incorporating negative feedback may not always yield benefits.
**Challenges**. The aforementioned observations underscore the challenge of developing effective algorithms that can effectively incorporate negative feedback into recommender systems. The under-utilization of negative feedback in current approaches motivates us to explore the usage of negative feedback through GNNs in order to enhance the quality of recommendations. However, learning high-order structural information from a signed bipartite graph faces difficulties due to the limitations of the _network homophily assumption_ and the _balance theory assumption_. The network homophily assumption posits that similar nodes are more likely to connect to each other than dissimilar nodes. Many GNN models (Beng et al., 2015; Wang et al., 2016; Wang et al., 2017) adopt a message-passing mechanism that aggregates information from local neighbors to update the embedding of the anchor node based on this assumption. However, homophily is not applicable in signed graphs where dissimilar nodes are connected by negative edges. The balance theory assumption implies that "the friend of my friend is my friend", "the enemy of my friend is my enemy", and "the enemy of my enemy is my friend". Existing methods for signed unipartite graphs (Beng et al., 2015; Wang et al., 2016; Wang et al., 2017) leverage this assumption to aggregate and propagate information across layers. However, the balance theory assumption does not match with the signed bipartite graph in recommender systems (Han et al., 2016; Wang et al., 2017; Wang et al., 2017). In real-world recommendation scenarios, users typically possess diverse interests rather than unique interests. Consequently, the fundamental idea of "the enemy of my enemy is my friend" (i.e., "two items disliked by the same user are similar") in the balance theory assumption does not accurately capture the complexity of real-world situations. These limitations necessitate the development of novel approaches to effectively leverage negative feedback in recommender systems, accounting for the unique characteristics of signed bipartite graphs and the diverse interests of users in real-world settings.
**Our idea**. The key idea revolves around utilizing high-order structural information from both the positive graph (i.e., user-item interaction graph containing only positive feedback edges) and the negative graph (i.e., user-item interaction graph containing only negative feedback edges) simultaneously. To enhance recommendations by incorporating negative feedback, this paper presents a novel recommendation model called PANE-GNN (unifying Positive And Negative Edges in Graph Neural Networks for recommendation). In this model, each user or item is assigned two embeddings, i.e., interest embedding and disinterest embedding, to capture the user's interests and disinterests, respectively. Taking into account the network homophily assumption, we devise two message-passing mechanisms for the positive graph and the negative graph. On the positive graph, interest embeddings are propagated and updated, capturing the user's interests. On the other hand, on the negative graph, disinterest embeddings are propagated and updated, capturing the user's disinterests or items they explicitly dislike. Furthermore, to generate robust embeddings that remain invariant to graph perturbations, we utilize graph contrastive learning on the negative graph and its perturbed version. This approach enhances the model's ability to capture relevant patterns in the presence of graph noise.
The main three contributions of this work are as follows:
* We propose a novel GNN-based recommendation model called PANE-GNN. The model performs message passing on both the positive graph and the negative graph to effectively incorporate positive and negative feedback (Section 3.2.1).
* We design contrastive learning on the negative graph (Section 3.2.2), a new ranking method with a disinterest-score filter (Section 3.2.3), and a dual feedback-aware Bayesian personalized ranking loss (Section 3.3), all of which improve recommendation accuracy through the integration of positive and negative feedback signals.
* The proposed PANE-GNN is extensively evaluated on four real-world datasets (Section 4). The experimental results demonstrate that PANE-GNN outperforms state-of-the-art GNN-based recommendation methods.
## 2. Related Work
We provide a review of existing work about 1) recommender systems based on GNNs, and 2) graph neural networks on signed graphs.
### Recommender Systems based on GNNs
Recently, GNNs have become the new state-of-the-art approach in many recommendation problems (Han et al., 2016; Wang et al., 2017). The main advantage of using GNNs for recommender systems is that it can capture higher-order structural information in the observed data. Based on the message-passing architecture of GNNs, NGCF (Wang et al., 2017) adopts the
Figure 1. An example of video recommendation from YouTube. The integration of positive and negative feedback plays a pivotal role in achieving accurate recommendation outcomes. In this example, the user prefers team sports while showing no interest in single-player sports.
Figure 2. Comparison of single-relational (NGCF) and multi-relational (GHCF) recommendation models on the ML-1M dataset.
Hadamard product between user embedding and item embedding to promote passing more messages from similar items to users. Considering that nonlinear activation contributes little to the recommendation performance, LR-GCCF (Beng et al., 2015) removes non-linearities from the original graph convolutional network (GCN) model (Zhou et al., 2017) and adds a residual network structure on it to alleviate the over-smoothing problem in the graph convolution aggregation. Likewise, LightGCN (Liu et al., 2017) removes both feature transformation and nonlinear activation and only retains neighborhood aggregation for collaborative filtering. The simplified model has higher computational efficiency and is much easier to implement and train.
Our proposed method differs from the above methods in that we consider the negative feedback information in the observed data and devise a novel message-passing process that takes into account both positive and negative feedback.
### Graph Neural Networks on Signed Graphs
Most of the previous work focuses on building GNNs for unsigned graphs where there are only positive edges. Currently, signed graphs, where each edge has a positive or negative sign, have become increasingly ubiquitous in the real world. For example, the users in a social network may hold common or opposite political views. Since the network homophily assumption is the theoretical basis of the message-passing mechanism in GNNs, those unsigned GNNs cannot be applied to signed graphs directly. As a pioneering work of signed GNNs, SGCN (Garshan et al., 2016) assigns a balanced embedding and an unbalanced embedding to each node and propagates the two embeddings in the signed graph based on balance theory. Further, SNEA (Shi et al., 2017) optimizes the message-passing process in SGCN by assigning different importance coefficients to each node pair connected with different edges. Inspired by adversarial learning, ASiNe (Shi et al., 2017) plays a minimax game in the positive graph and negative graph by leveraging a generator and a discriminator for positive edges and negative edges in a signed graph, respectively. SiReN (Shi et al., 2018) generates positive embeddings and negative embeddings for each node in a signed graph via a GNN model and a multilayer perceptron (MLP) model, respectively. Then SiReN adopts an attention layer to integrate the two embeddings into the final embeddings.
Unlike the existing methods based on the balance theory assumption, which may not be directly applicable to the signed bipartite graph in recommender systems, the proposed method in this work takes a different approach. It splits the raw rating graph into two distinct graphs and emphasizes the propagation of information within each graph based on the type of edges.
## 3. Method
In this section, we introduce the notations used in the paper, present the architecture of PANE-GNN, and describe its optimization objective.
### Notations
In the given raw rating graph \(\mathcal{G}=(\mathcal{U},\mathcal{I},\mathcal{E})\), where \(\mathcal{U}\) represents the set of users, \(\mathcal{I}\) represents the set of items, and \(\mathcal{E}\) represents the set of edges, we split the graph into two edge-disjoint graphs: the positive graph \(\mathcal{G}_{p}=(\mathcal{U},\mathcal{I},\mathcal{E}_{p})\) and the negative graph \(\mathcal{G}_{n}=(\mathcal{U},\mathcal{I},\mathcal{E}_{n})\). Here, \(\mathcal{E}_{p}\) represents the edges corresponding to positive ratings, and \(\mathcal{E}_{n}\) represents the edges corresponding to negative ratings. The union of \(\mathcal{E}_{p}\) and \(\mathcal{E}_{n}\) gives the set of all edges \(\mathcal{E}\). In the positive graph \(\mathcal{G}_{p}\), we aim to learn the interest embeddings for users and items, denoted as \(\mathbf{z}_{u}\) and \(\mathbf{z}_{i}\), respectively. These embeddings capture the relationship between liking and being liked. In contrast, in the negative graph \(\mathcal{G}_{n}\), we focus on learning the disinterest embeddings for users and items, represented as \(\mathbf{v}_{u}\) and \(\mathbf{v}_{i}\), respectively. These embeddings capture the relationship between disliking and being disliked. For a comprehensive overview of the notations used in this paper, please refer to Table 1.
### Model architecture
The architecture of the PANE-GNN model is depicted in Figure 3. It consists of three key technical designs: message passing on the positive graph \(\mathcal{G}_{p}\) and the negative graph \(\mathcal{G}_{n}\), contrastive learning on the negative graph \(\mathcal{G}_{n}\), and ranking with a disinterest-score filter. In the message passing stage, information propagation takes place on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\). This allows the model to leverage the structural information present in both graphs to enhance the representation learning process. The contrastive learning stage focuses on the negative graph \(\mathcal{G}_{n}\). By employing contrastive learning, the model denoises the negative feedback and generates robust embeddings that remain invariant to graph perturbations. Finally, the ranking method with a disinterest-score filter is applied to generate the final recommendations. This method incorporates the learned embeddings from both the positive and negative graphs to rank the items and filter out items that do not align with the user's interests.
\begin{table}
\begin{tabular}{c c} \hline \hline
**Notation** & **Description** \\ \hline
\(\mathcal{U}\) & Set of users. \\
\(\mathcal{I}\) & Set of items. \\
\(\mathcal{E}_{p}\) & Set of positive edges. \\
\(\mathcal{E}_{n}\) & Set of negative edges. \\
\(\mathcal{E}=\mathcal{E}_{p}\cup\mathcal{E}_{n}\) & Set of all edges. \\
\(\mathcal{G}=(\mathcal{U},\mathcal{I},\mathcal{E})\) & Raw rating graph. \\
\(\mathcal{G}_{p}=(\mathcal{U},\mathcal{I},\mathcal{E}_{p})\) & Positive graph. \\
\(\mathcal{G}_{n}=(\mathcal{U},\mathcal{I},\mathcal{E}_{n})\) & Negative graph. \\
\(\mathcal{G}_{d}=(\mathcal{U},\mathcal{I},\mathcal{E}_{d})\) & Distorted graph from \(\mathcal{G}_{n}\). \\
\(N=|\mathcal{U}\cup\mathcal{I}|\) & Number of all nodes in \(\mathcal{G}\). \\
\(\mathbf{A}_{p},\mathbf{A}_{n},\mathbf{A}_{d}\in\mathbb{R}^{N\times N}\) & Adjacency matrices of \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), and \(\mathcal{G}_{d}\). \\
\(\mathcal{N}_{p}(u)\), \(\mathcal{N}_{n}(u)\), \(\mathcal{N}_{d}(u)\) & Neighbor sets of user \(u\) in \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), and \(\mathcal{G}_{d}\). \\
\(\mathcal{N}_{p}(i)\), \(\mathcal{N}_{n}(i)\), \(\mathcal{N}_{d}(i)\) & Neighbor sets of item \(i\) in \(\mathcal{G}_{p}\), \(\mathcal{G}_{n}\), and \(\mathcal{G}_{d}\). \\
\(\mathbf{Z}\in\mathbb{R}^{N\times H}\) & Interest embedding matrix. \\
\(\mathbf{V}\in\mathbb{R}^{N\times H}\) & Disinterest embedding matrix. \\
\(\mathbf{z}_{u},\mathbf{z}_{i}\in\mathbb{R}^{H}\) & Interest embeddings on \(\mathcal{G}_{p}\). \\
\(\mathbf{v}_{u},\mathbf{v}_{i}\in\mathbb{R}^{H}\) & Disinterest embeddings on \(\mathcal{G}_{n}\). \\
\(\tilde{\mathbf{v}}_{u},\tilde{\mathbf{v}}_{i}\in\mathbb{R}^{H}\) & Disinterest embeddings on \(\mathcal{G}_{d}\). \\ \hline
\(H\) & Embedding size. \\
\(K\) & Layer number of graph neural networks. \\
\(p\) & Probability of edge removing. \\
\(b\) & Feedback-aware coefficient. \\
\(\delta\) & Filtering threshold. \\
\(\lambda_{1}\) & Contrastive learning coefficient. \\
\(\lambda_{2}\) & L2 regularization coefficient. \\
\(\tau\) & Temperature coefficient. \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Frequently used notations in this paper.
#### 3.2.1. Message passing on \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\)
In contrast to prior work that primarily focuses on message passing on the positive graph \(\mathcal{G}_{p}\), PANE-GNN takes into account the high-order structural information in the negative graph \(\mathcal{G}_{n}\) as well. In PANE-GNN, we introduce two types of embeddings: interest embeddings and disinterest embeddings. These embeddings capture the relationships between liking and being liked, as well as disliking and being disliked, respectively, for each user or item. To effectively aggregate and propagate these embeddings, PANE-GNN utilizes a technique called light graph convolution (LGC) (Gardner et al., 2017), which allows the embeddings to be updated and combined within the respective graph structures. In the message passing process on the positive graph \(\mathcal{G}_{p}\), the interest embeddings \(\mathbf{z}_{u}^{(k+1)}\) and \(\mathbf{z}_{i}^{(k+1)}\) at the (\(k\)+1)-th layer are updated by summing the normalized interest embeddings at the \(k\)-th layer:
\[\mathbf{z}_{u}^{(k+1)}=\sum_{i\in\mathcal{N}_{p}(u)}\frac{1}{\sqrt{|\mathcal{N}_{p}(u)|}\sqrt{|\mathcal{N}_{p}(i)|}}\mathbf{z}_{i}^{(k)},\] \[\mathbf{z}_{i}^{(k+1)}=\sum_{u\in\mathcal{N}_{p}(i)}\frac{1}{\sqrt{|\mathcal{N}_{p}(i)|}\sqrt{|\mathcal{N}_{p}(u)|}}\mathbf{z}_{u}^{(k)}. \tag{1}\]
The final interest embeddings \(\mathbf{z}_{u}\) and \(\mathbf{z}_{i}\) can be obtained by averaging the interest embeddings from all layers:
\[\mathbf{z}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{z}_{u}^{(k)},\quad\mathbf{ z}_{i}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{z}_{i}^{(k)}, \tag{2}\]
where \(K\) is the total number of layers. In Eq. (2), \(\mathbf{z}_{u}^{(0)}\) and \(\mathbf{z}_{i}^{(0)}\) are trainable parameters that represent the initial embeddings for user \(u\) and item \(i\), respectively. These embeddings are randomly initialized before the model training process begins. For the message passing process on the negative graph \(\mathcal{G}_{n}\), the disinterest embeddings \(\mathbf{v}_{u}^{(k+1)}\) and \(\mathbf{v}_{i}^{(k+1)}\) at the (\(k\)+1)-th layer are updated according to the following equations:
\[\mathbf{v}_{u}^{(k+1)}=\sum_{i\in\mathcal{N}_{n}(u)}\frac{1}{\sqrt{|\mathcal{N}_{n}(u)|}\sqrt{|\mathcal{N}_{n}(i)|}}\mathbf{v}_{i}^{(k)},\] \[\mathbf{v}_{i}^{(k+1)}=\sum_{u\in\mathcal{N}_{n}(i)}\frac{1}{\sqrt{|\mathcal{N}_{n}(i)|}\sqrt{|\mathcal{N}_{n}(u)|}}\mathbf{v}_{u}^{(k)}. \tag{3}\]
The final disinterest embeddings \(\mathbf{v}_{u}\) and \(\mathbf{v}_{i}\) are calculated by averaging the disinterest embeddings of all layers:
\[\mathbf{v}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{v}_{u}^{(k)},\quad\mathbf{v} _{i}=\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{v}_{i}^{(k)}, \tag{4}\]
where \(\mathbf{v}_{u}^{(0)}\) and \(\mathbf{v}_{i}^{(0)}\) are trainable parameters that are randomly initialized, similar to the initialization of interest embeddings \(\mathbf{z}_{u}^{(0)}\) and \(\mathbf{z}_{i}^{(0)}\). Correspondingly, the matrix forms of the above message-passing processes are as follows:
\[\mathbf{Z}^{\prime}{=}\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{Z}^{(k)},\quad\mathbf{Z}^{(k+1)}{=}(\mathbf{D}_{p}^{-\frac{1}{2}}\mathbf{A}_{p}\mathbf{D}_{p}^{-\frac{1}{2}})\mathbf{Z}^{(k)}, \tag{5}\]
Figure 3. The architecture of PANE-GNN. In model training, PANE-GNN performs message passing on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) and contrastive learning on \(\mathcal{G}_{n}\) to generate interest embedding \(Z\) and disinterest embedding \(\mathbf{V}\). In model prediction, PANE-GNN recommends a sequence of items to each user based on a ranking method with a disinterest-score filter.
\[\mathbf{V}{=}\frac{1}{K+1}\sum_{k=0}^{K}\mathbf{V}^{(k)},\quad\mathbf{V}^{(k+1)}{=}(\mathbf{D}_{n}^{-\frac{1}{2}}\mathbf{A}_{n}\mathbf{D}_{n}^{-\frac{1}{2}})\mathbf{V}^{(k)}, \tag{6}\]
where \(\mathbf{D}_{p}{=}\text{diag}(\mathbf{A}_{p}\mathbf{1}_{N\times N})\) and \(\mathbf{D}_{n}{=}\text{diag}(\mathbf{A}_{n}\mathbf{1}_{N\times N})\) are the degree matrices of \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\), respectively. Here \(N{=}|\mathcal{U}\cup\mathcal{I}|\) is the number of all nodes in \(\mathcal{G}\) and \(\mathbf{1}_{N\times N}{\in}\mathbb{R}^{N\times N}\) is a square matrix of ones.
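A minimal sketch of the layer-wise propagation in Eqs. (5)-(6) is given below; it uses a dense adjacency for clarity (a sparse matrix would be used in practice) and applies to \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) alike.

```python
import torch

def propagate(A, E0, K=3):
    """Eqs. (5)-(6): A is the (N, N) adjacency of G_p or G_n,
    E0 the (N, H) layer-0 embeddings; returns the layer average."""
    deg = A.sum(dim=1).clamp(min=1.0)          # avoid division by zero
    d = deg.pow(-0.5)
    A_hat = d[:, None] * A * d[None, :]        # D^{-1/2} A D^{-1/2}
    layers, E = [E0], E0
    for _ in range(K):
        E = A_hat @ E                          # one LGC layer
        layers.append(E)
    return torch.stack(layers).mean(dim=0)     # average over K + 1 layers
```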
To incorporate dense non-graph information into the model, we use a two-layer MLP model to transform the initial interest embeddings \(\mathbf{Z}^{(0)}\) into a more expressive embedding \(\mathbf{Z}^{\prime\prime}\):
\[\mathbf{Z}^{\prime\prime}{=}\text{ReLU}(\text{ReLU}(\mathbf{Z}^{(0)}\mathbf{W}_{\text{MLP}}^{(1)})\mathbf{W}_{\text{MLP}}^{(2)}), \tag{7}\]
where \(\mathbf{W}_{\text{MLP}}^{(1)},\mathbf{W}_{\text{MLP}}^{(2)}{\in}\mathbb{R}^{H\times H}\) are two trainable weight matrices that perform feature transformation. Next, to determine the importance of the \(\mathbf{Z}^{\prime}\) and \(\mathbf{Z}^{\prime\prime}\) embeddings in generating the final interest embedding, we employ an attention mechanism. We introduce an attention layer that learns two importance scores \(\alpha_{1},\alpha_{2}{\in}\mathbb{R}^{+}\) and yields the final interest embedding \(\mathbf{Z}\):
\[\begin{split}\mathbf{Z}{=}(\alpha_{1}\mathbf{1}_{N\times H}) \odot\mathbf{Z}^{\prime}+(\alpha_{2}\mathbf{1}_{N\times H})\odot\mathbf{Z}^{ \prime\prime},\\ (\alpha_{1},\alpha_{2}){=}\text{Softmax}(\text{Tanh}(\mathbf{Z}^{ \prime}\mathbf{W}_{\text{Att}}^{(1)})\mathbf{W}_{\text{Att}}^{(2)},\text{Tanh} (\mathbf{Z}^{\prime\prime}\mathbf{W}_{\text{Att}}^{(1)})\mathbf{W}_{\text{ Att}}^{(2)}),\end{split} \tag{8}\]
where \(\mathbf{W}_{\text{Att}}^{(1)}{\in}\mathbb{R}^{H\times H},\mathbf{W}_{\text{ Att}}^{(2)}{\in}\mathbb{R}^{H\times 1}\) are two trainable weight matrices and \(\odot\) denotes the Hadamard product.
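A hedged sketch of this attention layer follows. Eq. (8) defines \(\alpha_{1},\alpha_{2}\) as positive scalars, but the reduction of the \((N,1)\) score vectors to scalars is not spelled out above, so the mean-pooling used here is an assumption.

```python
import torch

def fuse(Z_prime, Z_pprime, W1, W2):
    """Eq. (8): combine graph-side Z' and MLP-side Z'' embeddings."""
    s1 = torch.tanh(Z_prime @ W1) @ W2       # (N, 1) score for Z'
    s2 = torch.tanh(Z_pprime @ W1) @ W2      # (N, 1) score for Z''
    # mean-pooling to scalars is an assumption (see lead-in)
    a = torch.softmax(torch.stack([s1.mean(), s2.mean()]), dim=0)
    return a[0] * Z_prime + a[1] * Z_pprime  # final interest embedding Z
```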
#### 3.2.2. Contrastive learning on \(\mathcal{G}_{n}\)
Positive feedback serves as a reliable indicator of users' interests, while negative feedback is more susceptible to timeliness and contains more noise compared to positive feedback (Kumar et al., 2017). To address this issue, we propose a denoising approach in PANE-GNN that distorts the raw negative graph \(\mathcal{G}_{n}\) into a new graph \(\mathcal{G}_{d}\) and applies contrastive learning between the two graphs. This is accomplished by applying edge removing, a widely used data augmentation strategy in graph contrastive learning, to the adjacency matrix \(\mathbf{A}_{n}\) of the negative graph \(\mathcal{G}_{n}\), resulting in the modified adjacency matrix \(\mathbf{A}_{d}\):
\[\mathbf{A}_{d}=\mathbf{A}_{n}\odot\mathbf{P},\quad\mathbf{P}\sim\mathcal{B}(1- p), \tag{9}\]
where \(\mathbf{P}\) is a random masking matrix drawn from a Bernoulli distribution with parameter \(p\). Then for the message passing process on \(\mathcal{G}_{d}\), the disinterest embeddings \(\tilde{\mathbf{v}}_{u}^{(k+1)}\) and \(\tilde{\mathbf{v}}_{i}^{(k+1)}\) at the (\(k{+}1\))-th layer are updated using the following equations:
\[\begin{split}\tilde{\mathbf{v}}_{u}^{(k+1)}&=\sum_{ i\in\mathcal{N}_{d}(u)}\frac{1}{\sqrt{|\mathcal{N}_{d}(i)|}\sqrt{|\mathcal{N}_{d}(i)|}} \tilde{\mathbf{v}}_{i}^{(k)},\\ \tilde{\mathbf{v}}_{i}^{(k+1)}&=\sum_{u\in\mathcal{ N}_{d}(i)}\frac{1}{\sqrt{|\mathcal{N}_{d}(i)|}\sqrt{|\mathcal{N}_{d}(u)|}} \tilde{\mathbf{v}}_{u}^{(k)},\end{split} \tag{10}\]
where \(\mathcal{N}_{d}(u){\subset}\mathcal{N}_{n}(u)\) and \(\mathcal{N}_{d}(i){\subset}\mathcal{N}_{n}(i)\) are the neighbor sets of user \(u\) and item \(i\) in \(\mathcal{G}_{d}\), respectively. The final disinterest embeddings \(\tilde{\mathbf{v}}_{u}\) and \(\tilde{\mathbf{v}}_{i}\) in \(\mathcal{G}_{d}\) are calculated by averaging the disinterest embeddings of all layers:
\[\tilde{\mathbf{v}}_{u}=\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{v}}_{u}^{(k)},\quad\tilde{\mathbf{v}}_{i}=\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{v}}_{i} ^{(k)}, \tag{11}\]
where \(\tilde{\mathbf{v}}_{u}^{(0)}{=}\mathbf{v}_{u}^{(0)}\) and \(\tilde{\mathbf{v}}_{i}^{(0)}{=}\mathbf{v}_{i}^{(0)}\). Correspondingly, the matrix form of the message-passing process on \(\mathcal{G}_{d}\) is as follows:
\[\tilde{\mathbf{V}}{=}\frac{1}{K+1}\sum_{k=0}^{K}\tilde{\mathbf{V}}^{(k)},\quad\tilde{\mathbf{V}}^{(k+1)}{=}(\mathbf{D}_{d}^{-\frac{1}{2}}\mathbf{A}_{d}\mathbf{D}_{d}^{-\frac{1}{2}})\tilde{\mathbf{V}}^{(k)}, \tag{12}\]
where \(\mathbf{D}_{d}{=}\text{diag}(\mathbf{A}_{d}\mathbf{1}_{N\times N})\) is the degree matrix of \(\mathcal{G}_{d}\).
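The distortion step of Eq. (9) amounts to independent Bernoulli edge removing; a minimal sketch is given below. The symmetrization of the mask, which keeps the undirected adjacency consistent, is an added assumption not stated in Eq. (9).

```python
import torch

def distort(A_n: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Edge removing, Eq. (9): keep each negative edge with prob. 1 - p."""
    P = torch.bernoulli(torch.full_like(A_n, 1.0 - p))  # P ~ B(1 - p)
    P = torch.triu(P) + torch.triu(P, 1).T              # symmetrize the mask
    return A_n * P                                      # A_d = A_n (elementwise) P
```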
```
0: Positive graph \(\mathcal{G}_{p}\), negative graph \(\mathcal{G}_{n}\), trainable parameters \(\Theta_{\text{Emb}}=\{\mathbf{Z}^{(0)},\mathbf{V}^{(0)}\}\) and \(\Theta_{\text{NN}}=\{\mathbf{W}_{\text{MLP}}^{(1)},\mathbf{W}_{\text{MLP}}^{(2)},\mathbf{W}_{\text{Att}}^{(1)},\mathbf{W}_{\text{Att}}^{(2)}\}\), embedding size \(H\), GNN layer number \(K\), hyperparameters \(p,b,\delta,\lambda_{1},\lambda_{2},\tau\).
0: Interest embedding matrix \(\mathbf{Z}\), disinterest embedding matrix \(\mathbf{V}\).
1: Initialize \(\Theta_{\text{Emb}}\) and \(\Theta_{\text{NN}}\) via the Glorot method;
2: Initialize embedding matrices: \(\mathbf{Z}\leftarrow\mathbf{Z}^{(0)}\), \(\mathbf{V}\leftarrow\mathbf{V}^{(0)}\), \(\tilde{\mathbf{V}}\leftarrow\mathbf{V}^{(0)}\);
3: Distort \(\mathcal{G}_{n}\) into \(\mathcal{G}_{d}\) according to Eq. (9);
4: while not converged do
5:   Generate training set \(\mathcal{D}_{p}\) from \(\mathcal{G}_{p}\) based on Eq. (14);
6:   Generate training set \(\mathcal{D}_{n}\) from \(\mathcal{G}_{n}\) based on Eq. (15);
7:   for each mini-batch \(\mathcal{B}_{p}\subset\mathcal{D}_{p}\) do
8:     Calculate \(\mathbf{Z}^{\prime}\) according to Eq. (5);
9:     Calculate \(\mathbf{Z}^{\prime\prime}\) according to Eq. (7);
10:    Update \(\mathbf{Z}\) according to Eq. (8);
11:  end for
12:  for each mini-batch \(\mathcal{B}_{n}\subset\mathcal{D}_{n}\) do
13:    Update \(\mathbf{V}\) according to Eq. (6);
14:    Update \(\tilde{\mathbf{V}}\) according to Eq. (12);
15:  end for
16:  Calculate \(\mathcal{L}_{\text{DB}}\) according to Eq. (17);
17:  Calculate \(\mathcal{L}_{\text{CL}}\) according to Eq. (18);
18:  \(\mathcal{L}_{\text{Reg}}\leftarrow\|\Theta_{\text{Emb}}\|^{2}\);
19:  \(\mathcal{L}\leftarrow\mathcal{L}_{\text{DB}}+\lambda_{1}\cdot\mathcal{L}_{\text{CL}}+\lambda_{2}\cdot\mathcal{L}_{\text{Reg}}\);
20:  Update \(\Theta_{\text{Emb}}\) and \(\Theta_{\text{NN}}\) by taking one step of gradient descent on \(\mathcal{L}\);
21: end while
22: return \(\mathbf{Z},\mathbf{V}\).
```
**Algorithm 1** PANE-GNN
#### 3.2.3. Ranking with a disinterest-score filter
To calculate interest scores, we take the inner product between the user embedding \(\mathbf{z}_{u}\) and the item embedding \(\mathbf{z}_{i}\), denoted as \(S_{\text{int}}(u,i)=\mathbf{z}_{u}\mathbf{z}_{i}^{\text{T}}\). This score represents the affinity between user \(u\) and item \(i\) based on their respective interest embeddings. Similarly, the disinterest score is calculated as \(S_{\text{dis}}(u,i)=\mathbf{v}_{u}\mathbf{v}_{i}^{\text{T}}\). This score captures the disinterest or negative affinity between user \(u\) and item \(i\) based on their respective disinterest embeddings.
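The excerpt above does not reproduce the exact filtering rule (the model's Eq. (13) is not shown), so the following sketch is only one plausible reading consistent with Table 1: items whose disinterest score exceeds the filtering threshold \(\delta\) are removed before top-\(K\) ranking by interest score.

```python
import torch

def recommend(z_u, Z_items, v_u, V_items, delta, topk=10):
    """z_u, v_u: (H,) user embeddings; Z_items, V_items: (|I|, H)."""
    s_int = Z_items @ z_u                       # interest scores z_u z_i^T
    s_dis = V_items @ v_u                       # disinterest scores v_u v_i^T
    s_int = s_int.masked_fill(s_dis > delta, float("-inf"))  # filter (assumed rule)
    return torch.topk(s_int, topk).indices      # top-K recommendation list
```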
Furthermore, we leverage mini-batch learning to train PANE-GNN; the mini-batches on \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\) are denoted as \(\mathcal{B}_{p}{\subset}\mathcal{D}_{p}\) and \(\mathcal{B}_{n}{\subset}\mathcal{D}_{n}\), respectively.
The trainable parameter group of PANE-GNN consists of two parts: the embeddings \(\Theta_{\text{Emb}}{=}\left\{\mathbf{Z}^{(0)},\mathbf{V}^{(0)}\right\}\) of the 0-th layer, and the neural network parameters \(\Theta_{\text{NN}}{=}\left\{\mathbf{W}_{\text{MLP}}^{(1)},\mathbf{W}_{\text{MLP}}^{(2)}, \mathbf{W}_{\text{Att}}^{(1)},\mathbf{W}_{\text{Att}}^{(2)}\right\}\), which include the weight matrices for the MLP layers and attention layers. The overall loss function \(\mathcal{L}\) is defined as follows:
\[\mathcal{L}=\mathcal{L}_{\text{DB}}+\lambda_{1}{\cdot}\mathcal{L}_{\text{CL} }+\lambda_{2}{\cdot}\mathcal{L}_{\text{Reg}}, \tag{16}\]
where \(\mathcal{L}_{\text{Reg}}{=}\left\|\Theta_{\text{Emb}}\right\|^{2}\) denotes the L2 regularization term of the 0-th layer embeddings. \(\lambda_{1}\) and \(\lambda_{2}\) are two hyperparameters that control the strength of contrastive learning and L2 regularization, respectively. In order to incorporate the feedback information from both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\), we propose a dual feedback-aware BPR loss \(\mathcal{L}_{\text{DB}}\) inspired by the Bayesian personalized ranking (BPR) loss (Srivastava et al., 2015):
\[\mathcal{L}_{\text{DB}}=-\sum_{(u,i,j)\in\mathcal{B}_{p}}\ln\sigma(\hat{y}_{u,i}{-}\hat{y}_{u,j})-\sum_{(u,i,j)\in\mathcal{B}_{n}}\ln\sigma(\hat{y}_{u,j}{ -}\hat{y}_{u,i}), \tag{17}\]
where \(\sigma(x){=}\frac{1}{1+\exp(-x)}\) is the sigmoid function and \(b{>}1\) is a feedback-aware coefficient. The presence of \(b\) ensures the following priority order: positive feedback \(>\) negative feedback \(>\) no feedback. This priority implies that positive feedback is given higher importance than negative feedback, and both positive and negative feedback are considered more valuable than no feedback. In addition, we design the contrastive objective \(\mathcal{L}_{\text{CL}}\) on \(\mathcal{G}_{n}\) via the InfoNCE loss (Kumar et al., 2017):
\[\mathcal{L}_{\text{CL}}{=}-\sum_{u\in\mathcal{U}}\ln\frac{\exp(\frac{\mathbf{v}_{u}\tilde{\mathbf{v}}_{u}^{\text{T}}}{\tau})}{\sum_{u^{\prime}\in\mathcal{U}}\exp(\frac{\mathbf{v}_{u}\tilde{\mathbf{v}}_{u^{\prime}}^{\text{T}}}{\tau})}-\sum_{i\in\mathcal{I}}\ln\frac{\exp(\frac{\mathbf{v}_{i}\tilde{\mathbf{v}}_{i}^{\text{T}}}{\tau})}{\sum_{i^{\prime}\in\mathcal{I}}\exp(\frac{\mathbf{v}_{i}\tilde{\mathbf{v}}_{i^{\prime}}^{\text{T}}}{\tau})}, \tag{18}\]
where \(\tau\) is a temperature coefficient. This objective allows us to leverage the contrastive learning framework to enhance the robustness and discriminative power of disinterest embeddings in the recommendation process. The complete procedure of PANE-GNN is summarized in Algorithm 1.
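For concreteness, the sketch below implements the two training objectives as printed: the dual feedback-aware BPR loss of Eq. (17) (where exactly the coefficient \(b\) enters is not recoverable from the text above, so it is omitted here) and the user-side InfoNCE term of Eq. (18); the item-side term is analogous.

```python
import torch
import torch.nn.functional as F

def dual_bpr(y_pos_i, y_pos_j, y_neg_i, y_neg_j):
    """Eq. (17) as printed, over (u, i, j) triplets from B_p and B_n.
    y_pos_*: scores for positive-graph triplets; y_neg_*: negative-graph."""
    l_pos = -F.logsigmoid(y_pos_i - y_pos_j).sum()
    l_neg = -F.logsigmoid(y_neg_j - y_neg_i).sum()
    return l_pos + l_neg

def info_nce(V, V_tilde, tau=0.2):
    """User-side term of Eq. (18): V, V_tilde are (|U|, H) disinterest
    embeddings on G_n and G_d; positive pairs sit on the diagonal."""
    logits = (V @ V_tilde.T) / tau
    targets = torch.arange(V.size(0))
    return F.cross_entropy(logits, targets, reduction="sum")
```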
## 4. Experiment
In this section, we provide descriptions of the four real-world datasets (Section 4.1) and five baselines (Section 4.2) used in our experiments. We also introduce the metrics (Section 4.3) and hyperparameter setups (Section 4.4). Furthermore, we compare the performance of different methods and conduct a comprehensive evaluation of the performance of PANE-GNN (Section 4.5).
### Datasets
We evaluate our approach on four real-world datasets: MovieLens-1M (ML-1M), Amazon-Book, Yelp, and KuaiRec.
* **ML-1M** ([http://q6e9.cn/VMQw](http://q6e9.cn/VMQw)): This widely-used movie review dataset consists of approximately 6,000 users and 4,000 movies. Users rate movies on a 5-star scale, and each user has provided at least 20 ratings.
* **Amazon-Book** ([https://61a.life/K7oer](https://61a.life/K7oer)): We selected the Amazon-Book dataset from a large crawl of product reviews on Amazon. The dataset comprises around 35,000 users, 38,000 items, and 1.9 million 5-star ratings. Similar to previous work (Kumar et al., 2017; Wang et al., 2018), we removed users or items with fewer than 20 interactions.
* **Yelp** ([https://x064.cn/jak1U](https://x064.cn/jak1U)): This dataset consists of reviews for local businesses. It includes approximately 41,000 users, 30,000 businesses, and 2.1 million 5-star ratings. Like the Amazon-Book dataset, we excluded users or businesses with fewer than 20 interactions.
* **KuaiRec** ([https://54z.life/DuQDC](https://54z.life/DuQDC)): This real-world dataset was collected from the recommendation logs of Kuaishou, a video-sharing mobile app. It contains around 7,100 users, 10,000 short videos (each with multiple tags), and a user-video interaction matrix.
For ML-1M, Amazon-Book, and Yelp, we use a threshold of 3.5 to binarize the original ratings into positive and negative signals. For KuaiRec, as suggested by the authors in (Kumar et al., 2017), we use the rule of "whether the video watch ratio is higher than 2.0" to obtain binary signals. The detailed statistics of the above four datasets are shown in Table 2. In the training set of KuaiRec, the number of negative ratings is far higher than that of positive ratings, which provides a more realistic and biased training environment compared to the other three datasets.
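The binarization rules above can be summarized by a small helper; the column names (`rating`, `watch_ratio`) and the \(\pm 1\) encoding are illustrative assumptions:

```python
import pandas as pd

def binarize(df: pd.DataFrame, dataset: str) -> pd.DataFrame:
    """Turn raw interactions into +1 (positive) / -1 (negative) signals."""
    if dataset == "kuairec":
        pos = df["watch_ratio"] > 2.0   # watch-ratio rule for KuaiRec
    else:                               # ml-1m, amazon-book, yelp
        pos = df["rating"] > 3.5        # 3.5-star threshold
    df["signal"] = pos.map({True: 1, False: -1})
    return df
```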
### Baselines
We compare PANE-GNN with five state-of-the-art GNN-based recommendation models.
* **NGCF**(Xu et al., 2017): NGCF is a GNN-based recommendation framework that explicitly incorporates high-order collaborative signals from the user-item bipartite graph through embedding propagation.
* **LR-GCCF**(Xu et al., 2017): LR-GCCF incorporates the GCN model into the recommender system. Instead of employing non-linear transformations in the GCN, LR-GCCF utilizes linear embedding propagations. Additionally, it introduces a residual network structure to address the over-smoothing issue that can arise from applying multiple layers of graph convolutions.
* **LightGCN**(Hu et al., 2017): LightGCN redesigns a light graph convolution structure specific to recommendations by abandoning the use of feature transformation and nonlinear activation. This approach aims to simplify the model while maintaining competitive performance.
* **SGCN**(Hu et al., 2017): SGCN leverages balance theory to aggregate and propagate information in a signed graph. By considering balanced and unbalanced embeddings, SGCN effectively captures the information from both positive and negative feedback signals.
* **SiReN**(Wang et al., 2018): SiReN is designed for signed bipartite graphs. It utilizes a GNN model and an MLP model to generate two sets of embeddings for the partitioned graph. Additionally, SiReN designs a sign-aware BPR loss to differentiate the effects of high-rating and low-rating items.
### Metrics
We evaluate the effectiveness of PANE-GNN using three performance metrics: \(Precision@K\), \(Recall@K\), and \(nDCG@K\) (normalized discounted cumulative gain\(@K\)). These metrics provide insights into the accuracy, completeness, and ranking quality of the
recommendation results. _Precision@K_ measures the proportion of relevant items among the top-\(K\) recommended results for a user:
\[Precision@K=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|GT_{u}\cap R_{u}(K)|}{|R_{u}(K)|}, \tag{19}\]
where \(GT_{u}\) denotes the ground truth item set liked by user \(u\) in the test set and \(R_{u}(K)\) denotes the recommended top-\(K\) items for user \(u\). _Recall@K_ quantifies the proportion of relevant items among all correct results for a user:
\[Recall@K=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{|GT_{u}\cap R_{u}(K)|}{|GT_{u}|}. \tag{20}\]
_nDCG@K_ is a ranking quality measurement that assigns higher values to relevant items appearing at higher ranks:
\[nDCG@K=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{DCG_{u}@K}{IDCG_{u}@K},\] \[DCG_{u}@K=\sum_{i=1}^{K}\frac{G_{u}(i)}{\log_{2}(i+1)},\;\;\;IDCG_{u}@K=\sum_{i=1}^{K}\frac{1}{\log_{2}(i+1)}, \tag{21}\]
where \(G_{u}(i)\) equals 1 if the item at rank \(i\) in the recommended list is in the ground truth item set \(GT_{u}\), and 0 otherwise.
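The three metrics translate directly into a per-user routine (Eqs. (19)-(21)); averaging over \(\mathcal{U}\) is omitted, and the item IDs in the toy call are made up:

```python
import numpy as np

def ranking_metrics(recommended, ground_truth, k):
    """Per-user Precision@K, Recall@K and nDCG@K."""
    hits = [1 if item in ground_truth else 0 for item in recommended[:k]]
    precision = sum(hits) / k                    # |GT ∩ R(K)| / |R(K)|
    recall = sum(hits) / len(ground_truth)       # |GT ∩ R(K)| / |GT|
    dcg = sum(h / np.log2(rank + 2) for rank, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(k))
    return precision, recall, dcg / idcg

# toy example: 2 of the top-3 recommendations are relevant
print(ranking_metrics([10, 4, 7], {4, 7, 9, 13}, k=3))
```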
### Hyperparameter Setups
In the experiments, we set the embedding size of PANE-GNN to 64, similar to LightGCN and SiReN. The embedding parameters of PANE-GNN are initialized using the Glorot method (Kang et al., 2017). We use the Adam optimizer (King and Ba, 2015) with a default learning rate of 5e-3 to optimize PANE-GNN. The training process of PANE-GNN employs mini-batch learning, where the default batch size is set to 1,024. We train PANE-GNN for a total of 1,000 epochs for all datasets. PANE-GNN incorporates L2 regularization with a coefficient of 0.01 on KuaiRec and 0.05 on the other three datasets. Negative sampling is employed during training, and the number of negative samples is set to 1 on KuaiRec and 40 on the other three datasets. The architecture of PANE-GNN consists of 4 layers of GNNs and 2 layers of MLP in total. The temperature value used in the contrastive loss is set to 0.8. Additionally, the dropout rate for the MLP layer or attention layer is set to 0.5. The filter in PANE-GNN utilizes a disinterest score threshold of 0.5 by default. The implementation of PANE-GNN is done using PyTorch. The source code is available at [https://reurl.cc/0ELqO6](https://reurl.cc/0ELqO6).
For ML-1M, Amazon-Book, and Yelp datasets, we perform 5-fold cross-validation by splitting each dataset into training and test sets. The training set contains 80% of the ratings, while the remaining 20% constitutes the test set. As for KuaiRec, following the suggestion in the original paper (Kang et al., 2017), we use the user-item interactions from the fully-observed small matrix as the test set, and the remaining interactions are used for training.
### Experimental Results
We conduct experiments to answer the following four key research questions:
* **RQ1:** Does PANE-GNN improve overall recommendation performance compared to other GNN-based methods (Section 4.5.1)?
* **RQ2:** How do different components in PANE-GNN affect its performance (Section 4.5.2)?
* **RQ3:** How robust is PANE-GNN in terms of different hyperparameters (Section 4.5.3)?
* **RQ4:** What are the final recommendation results of PANE-GNN from a qualitative perspective (Section 4.5.4)?
#### 4.5.1. Comparison of overall performance (RQ1)
Table 3 presents a comprehensive performance comparison between PANE-GNN and state-of-the-art GNN-based methods using the evaluation metrics _Precision@K_, _Recall@K_, and _nDCG@K_ with varying values of \(K\). Across all four datasets (ML-1M, Amazon-Book, Yelp, and KuaiRec), PANE-GNN consistently outperforms the five baseline methods, demonstrating the success and effectiveness of the designed message-passing approach on both the positive and negative graphs. Notably, the performance improvement of PANE-GNN on KuaiRec is particularly significant compared to the other datasets. For instance, PANE-GNN outperforms the runner-up LightGCN by 0.85% in terms of _Recall@5_ and 2.87% in terms of _Recall@10_. This outcome highlights the advantage of PANE-GNN when dealing with biased datasets where the number of positive ratings is considerably lower than that of negative ratings. In comparison to SiReN, which utilizes an attention model to integrate embeddings from the positive and negative graphs, PANE-GNN surpasses it in empirical evaluation. This is because PANE-GNN generates the disinterest embedding \(\mathbf{V}\) from the negative graph, which provides a more comprehensive user profile and enables the filtering of irrelevant items. Interestingly, SGCN, which relies on the balance theory assumption, performs poorly compared to the other methods. This finding suggests that the balance theory assumption, designed for signed unipartite graphs, is not suitable for real-world recommendation scenarios where users typically have diverse interests.
#### 4.5.2. Ablation studies (RQ2)
The ablation studies on PANE-GNN are conducted to investigate the functions of different components. Four variants of PANE-GNN are designed and evaluated:
* **Variant-A**: Using message passing on the negative graph \(\mathcal{G}_{n}\).
* **Variant-B**: Using message passing on the positive graph \(\mathcal{G}_{p}\).
* **Variant-C**: Using message passing on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\).
* **Variant-D**: Introducing graph contrastive learning on Variant-C.
The results of the ablation studies on the ML-1M and KuaiRec datasets are presented in Table 4. The observations from the ablation studies are as follows.
**Variant-A**: Variant-A, which only uses message passing on the negative graph \(\mathcal{G}_{n}\), exhibits poor performance in all metrics on both datasets. It indicates that positive feedback is crucial for recognizing users' interests, and negative feedback alone cannot replace it, although it helps recognize users' dislikes.
**Variant-B** vs. **Variant-C**: Comparing Variant-B (message passing only on \(\mathcal{G}_{p}\)) and Variant-C (message passing on both \(\mathcal{G}_{p}\) and \(\mathcal{G}_{n}\)), it is observed that Variant-C, which integrates the structural information from the negative graph, performs better. It suggests
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Dataset** & **\#User** & **\#Item** & **\#Rating** & **Density (\%)** & **Ratio** \\ \hline ML-1M & 6,040 & 3,952 & 1,000,209 & 4.19 & 1:0.73 \\ Amazon-Book & 35,736 & 38,121 & 1,960,674 & 0.14 & 1:0.24 \\ Yelp & 41,772 & 30,037 & 2,116,215 & 0.16 & 1:0.47 \\ KuaiRec & 7,176 & 10,728 & 761,425 & 0.98 & 1:13.30 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Statistics of four real-world datasets. “Ratio” denotes the number ratio between positive and negative ratings in the training set.
that incorporating the negative graph enhances the model's performance.
**Variant-C** vs. **Variant-D**: Introducing the contrastive learning loss on \(\mathcal{G}_{n}\) in Variant-D further improves the model's performance. For instance, Variant-D achieves a 3.14% higher _Recall_@10 than Variant-C on the KuaiRec dataset. It demonstrates the effectiveness of contrastive learning for learning accurate disinterest embeddings from the negative graph.
**Variant-D** vs. **PANE-GNN**: Comparing Variant-D and the full PANE-GNN, it is observed that leveraging the disinterest-score filter in ranking consistently improves the performance of Variant-D. It confirms the accuracy of disinterest scores and the effectiveness of the disinterest-score filter.
#### 4.5.3. Hyperparameter sensitivity analysis (RQ3)
To evaluate the sensitivity of PANE-GNN to different hyperparameters, we conduct a comprehensive hyperparameter sensitivity analysis on ML-1M and KuaiRec. We systematically vary the values of key hyperparameters and measure their impact on the model performance in terms
\begin{table}
\begin{tabular}{c c c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Variant**} & \multirow{2}{*}{**Description**} & \multicolumn{3}{c|}{\(K=5\)} & \multicolumn{3}{c|}{\(K=10\)} & \multicolumn{3}{c}{\(K=15\)} \\ & & & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & _Precision@K_ & _Recall@K_ & _nDCG@K_ \\ \hline \multirow{6}{*}{**ML-1M**} & A & MP on \(\mathcal{G}_{n}\) & 0.64\(\pm\)0.02 & 0.13\(\pm\)0.04 & 0.68\(\pm\)0.03 & 0.62\(\pm\)0.01 & 0.26\(\pm\)0.01 & 0.67\(\pm\)0.02 & 0.61\(\pm\)0.02 & 0.43\(\pm\)0.01 & 0.70\(\pm\)0.03 \\ & B & MP on \(\mathcal{G}_{p}\) & 31.51\(\pm\)0.15 & 12.11\(\pm\)0.10 & 34.49\(\pm\)0.20 & 26.35\(\pm\)0.22 & 19.23\(\pm\)0.11 & 32.59\(\pm\)0.17 & 23.32\(\pm\)0.20 & 24.47\(\pm\)0.19 & 32.34\(\pm\)0.15 \\ & C & MP on \(\mathcal{G}_{p}\) \& \(\mathcal{G}_{n}\) & 23.65\(\pm\)0.08 & 12.87\(\pm\)0.18 & 35.92\(\pm\)0.15 & 27.49\(\pm\)0.23 & 20.35\(\pm\)0.09 & 34.13\(\pm\)0.11 & 24.17\(\pm\)0.14 & 25.67\(\pm\)0.13 & 33.79\(\pm\)0.13 \\ & D & Variant-C + GCL & 33.46\(\pm\)0.11 & 13.05\(\pm\)0.15 & 36.56\(\pm\)0.21 & 27.77\(\pm\)0.20 & 20.40\(\pm\)0.11 & 34.45\(\pm\)0.09 & 24.36\(\pm\)0.12 & 25.70\(\pm\)0.17 & 34.05\(\pm\)0.04 \\ & PANE-GNN & Variant-D + Filter & 33.66\(\pm\)0.14 & 13.26\(\pm\)0.17 & 36.90\(\pm\)0.25 & 27.97\(\pm\)0.14 & 20.50\(\pm\)0.18 & 34.70\(\pm\)0.13 & 24.66\(\pm\)0.22 & 25.95\(\pm\)0.09 & 34.37\(\pm\)0.14 \\ \hline \multirow{6}{*}{**KuaiRec**} & A & MP on \(\mathcal{G}_{n}\) & 5.54\(\pm\)0.00 & 5.13\(\pm\)0.01 & 6.54\(\pm\)0.01 & 5.40\(\pm\)0.01 & 10.29\(\pm\)0.00 & 8.09\(\pm\)0.02 & 5.61\(\pm\)0.01 & 15.78\(\pm\)0.01 & 10.08\(\pm\)0.00 \\ & B & MP on \(\mathcal{G}_{p}\) & 24.22\(\pm\)0.10 & 33.25\(\pm\)0.13 & 40.05\(\pm\)0.13 & 17.61\(\pm\)0.08 & 42.19\(\pm\)0.23 & 41.60\(\pm\)0.16 & 14.64\(\pm\)0.15 & 49.36\(\pm\)0.17 & 43.83\(\pm\)0.12 \\ \cline{1-1} & C & MP on \(\mathcal{G}_{p}\) \& \(\mathcal{G}_{n}\) & 24.70\(\pm\)0.14 & 32.90\(\pm\)0.09 & 40.52\(\pm\)0.20 & 17.70\(\pm\)0.22 & 42.67\(\pm\)0.27 & 41.59\(\pm\)0.09 & 14.92\(\pm\)0.14 & 50.12\(\pm\)0.18 & 44.03\(\pm\)0.13 \\ \cline{1-1} & D & Variant-C + GCL & 24.94\(\pm\)0.18 & 34.04\(\pm\)0.22 & 41.21\(\pm\)0.06 & 19.37\(\pm\)0.14 & 45.81\(\pm\)0.04 & 44.10\(\pm\)0.28 & 15.88\(\pm\)0.19 & 53.36\(\pm\)0.06 & 46.28\(\pm\)0.18 \\ \cline{1-1} & PANE-GNN & Variant-D + Filter & 25.85\(\pm\)0.10 & 34.61\(\pm\)0.11 & 41.91\(\pm\)0.20 & 19.39\(\pm\)0.17 & 46.03\(\pm\)0.07 & 44.13\(\pm\)0.05 & 16.16\(\pm\)0.16 & 53.47\(\pm\)0.15 & 46.55\(\pm\)0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Results (%) of ablation studies on ML-1M and KuaiRec. Here “MP”, “GCL”, and “Filter” denote message passing, graph contrastive learning, and the disinterest-score filter, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c|c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{\(K=5\)} & \multicolumn{3}{c|}{\(K=10\)} & \multicolumn{3}{c}{\(K=15\)} \\ & & & \(Precision@K\) & _Recall@K_ & _nDCG@K_ & \(Precision@K\)_ & _Recall@K_ & _nDCG@K_ & \(Precision@K\)_ & _Recall@K_ & _nDCG@K_ \\ \hline \multirow{6}{*}{**Variant-D**} & NGCF\({}^{\dagger}\) & 29.73\(\pm\)0.43 & 10.99\(\pm\)0.26 & 32.38\(\pm\)0.45 & 24.77\(\pm\)0.23 & 17.48\(\pm\)0.25 & 30.31\(\pm\)0.33 & 21.74\(\pm\)0.22 & 22.29\(\pm\)0.27 & 29.85\(\pm\)0.29 \\ & LR-GCCF\({}^{\dagger}\) & 30.52\(\pm\)0.33 & 11.40\(\pm\)0.23 & 33.30\(\pm\)0.44 & 25.39\(\pm\)0.27 & 18.02\(\pm\)0.31 & 31.17\(\pm\)0.39 & 22.20\(\pm\)0.25 & 22.92\(\pm\)0.46 & 30.66\(\pm\)0.42 \\ & LightGCN\({}^{\dagger}\) & 32.18\(\pm\)0.22 & 12.06\(\pm\)0.11 & 35.19\(\pm\)0.23 & 26.79\(\pm\)0.13 & 19.09\(\pm\)0.16 & 32.97\(\pm\)0.18 & 23.49\(\pm\)0.16 & 24.32\(\pm\)0.29 & 32.49\(\pm\)0.22 \\ & SGCN\({}^{\dagger}\) & 24.84\(\pm\)0.03 & 9.10\(\pm\)0.17 & 26.83\(\pm\)0.35 & 18.73\(\pm\)0.20 & 14.92\(\pm\)0.26 & 25.47\(\pm\)0.24 & 18.73\(\pm\)0.20 & 19.32\(\pm\)0.37 & 25.30\(\pm\)0.26 \\ & SiReN\({}^{\dagger}\) & 33.28\(\pm\)0.54 & 12.79\(\pm\)0.27 & 36.37\(\pm\)0.55 & 27.74\(\pm\)0.37 & 20.16\(\pm\)0.33 & 34.23\(\pm\)0.47 & 2
of \(Recall@10\). The results are shown in Figure 4 and the following findings were observed:
* GNN layer number \(K\): As shown in Figure 4 (b), we observed that the \(Recall@10\) metric initially increases with an increasing number of GNN layers on the ML-1M dataset. However, beyond a certain point, the \(Recall@10\) value starts to decrease. This observation aligns with the phenomenon of over-smoothing, where an excessive number of GNN layers causes the aggregated node embeddings to become too similar, resulting in the loss of discriminative information. Additionally, as the number of GNN layers increases, the computational efficiency of the model may be negatively impacted. Considering both the risk of over-smoothing and computational efficiency, we recommend setting \(K\) to 3 or 4 to ensure good recommendation outcomes while maintaining computational efficiency.
* Feedback-aware coefficient \(b\): From the analysis of Figure 4 (d), we observed that \(b=1\) resulted in inferior performance compared to other values of \(b\) on ML-1M. It indicates that discriminating between positive and negative feedback during the optimization process is crucial for achieving better results on ML-1M. The sub-optimal performance of \(b=1\) suggests that the model might not adequately capture the discriminative signals between positive and negative feedback when they are given equal weight. On the KuaiRec dataset, the stability of PANE-GNN's performance and its insensitivity to different values of \(b\) suggest that the dataset's inherent characteristics might diminish the significance of distinguishing between positive and negative feedback. Based on these observations, we recommend setting \(b\) as 2 or 3.
* Regularization coefficient \(\lambda_{2}\): As shown in Figure 4 (g), \(\lambda_{2}=0.1\) performs worst compared with others on ML-1M and KuaiRec. Although the L2 regularization term in Eq. (16) can prevent over-fitting, high \(\lambda_{2}\) excessively penalizes the model's parameters, resulting in underfitting. Hence, we suggest selecting \(\lambda_{2}\) from the range of [0.01, 0.05] for PANE-GNN.
* Others: We found that PANE-GNN demonstrates robustness to various other hyperparameters, including the edge removing ratio \(p\) and contrastive learning coefficient \(\lambda_{1}\).
#### 4.5.4. Case study (RQ4)
In this subsection, we evaluate the recommendation quality of PANE-GNN by analyzing the tag information of videos in KuaiRec. In Figure 5 (a), we observe that the user has a preference for outdoor sports-related videos based on the tags of liked videos in the training set. Conversely, Figure 5 (b) displays the tags of disliked videos, indicating disinterest in videos related to dressing or clothing. Figure 5 (c) and Figure 5 (d) depict the tags of the recommended videos generated by PANE-GNN before and after the filtering process, respectively. Our observations reveal the following insights: In Figure 5 (c), the recommended videos generated by PANE-GNN generally align with the user's interests depicted in Figure 5 (a), except for a few specific words such as "Wearing" and "Beauty". With the disinterest-score filter (Figure 5 (d)), PANE-GNN successfully filters out less relevant recommendations, while suggesting more relevant videos with tags like "Walking", "Outdoors", and "Countryside". These findings emphasize two key points: 1) PANE-GNN effectively captures both user interests and disinterests from the training data, and 2) the implementation of the disinterest-score filter proves to be an effective approach for generating more relevant recommendation outcomes.
Figure 4. Results of sensitivity analysis on ML-1M and KuaiRec.
Figure 5. Tag clouds of a specific user on the KuaiRec dataset. Each figure presents the tags of the top-10 videos.
## 5. Conclusion and Future Work
In this work, we address the problem of leveraging negative feedback to improve recommender systems. Existing approaches in the literature focused on GNN-based recommendation models that only consider message passing on the positive graph. To overcome this limitation and capture high-order structural information from both positive and negative graphs, we propose a novel GNN-based recommendation model called PANE-GNN. By aggregating and updating messages on these two graphs, we enable the model to effectively incorporate positive and negative feedback. Additionally, we employ contrastive learning on the negative graph to reduce noise and filter out items with high disinterest scores, ensuring the relevance of the recommended results. Experimental evaluations conducted on four real-world datasets demonstrate that PANE-GNN consistently outperforms state-of-the-art GNN-based recommendation methods. We also conduct an in-depth analysis of PANE-GNN to validate its effectiveness across different components and its robustness to hyperparameters. In the future, we plan to investigate the exposure bias issue in GNN-based recommendation models.
|
2308.03256 | Learning a Graph Neural Network with Cross Modality Interaction for
Image Fusion | Infrared and visible image fusion has gradually proved to be a vital fork in
the field of multi-modality imaging technologies. In recent developments,
researchers not only focus on the quality of fused images but also evaluate
their performance in downstream tasks. Nevertheless, the majority of methods
seldom put their eyes on the mutual learning from different modalities,
resulting in fused images lacking significant details and textures. To overcome
this issue, we propose an interactive graph neural network (GNN)-based
architecture between cross modality for fusion, called IGNet. Specifically, we
first apply a multi-scale extractor to achieve shallow features, which are
employed as the necessary input to build graph structures. Then, the graph
interaction module can construct the extracted intermediate features of the
infrared/visible branch into graph structures. Meanwhile, the graph structures
of two branches interact for cross-modality and semantic learning, so that
fused images can maintain the important feature expressions and enhance the
performance of downstream tasks. Besides, the proposed leader nodes can improve
information propagation in the same modality. Finally, we merge all graph
features to get the fusion result. Extensive experiments on different datasets
(TNO, MFNet and M3FD) demonstrate that our IGNet can generate visually
appealing fused images while scoring averagely 2.59% [email protected] and 7.77% mIoU
higher in detection and segmentation than the compared state-of-the-art
methods. The source code of the proposed IGNet can be available at
https://github.com/lok-18/IGNet. | Jiawei Li, Jiansheng Chen, Jinyuan Liu, Huimin Ma | 2023-08-07T02:25:06Z | http://arxiv.org/abs/2308.03256v1 | # Learning a Graph Neural Network with Cross Modality Interaction for Image Fusion
###### Abstract.
Infrared and visible image fusion has gradually proved to be a vital fork in the field of multi-modality imaging technologies. In recent developments, researchers not only focus on the quality of fused images but also evaluate their performance in downstream tasks. Nevertheless, the majority of methods seldom put their eyes on mutual learning from different modalities, resulting in fused images lacking significant details and textures. To overcome this issue, we propose an interactive graph neural network (GNN)-based architecture between cross modality for fusion, called IGNet. Specifically, we first apply a multi-scale extractor to achieve shallow features, which are employed as the necessary input to build graph structures. Then, the graph interaction module can construct the extracted intermediate features of the infrared/visible branch into graph structures. Meanwhile, the graph structures of two branches interact for cross-modality and semantic learning, so that fused images can maintain the important feature expressions and enhance the performance of downstream tasks. Besides, the proposed leader nodes can improve information propagation in the same modality. Finally, we merge all graph features to get the fusion result. Extensive experiments on different datasets (_i.e._, TNO, MFNet, and M\({}^{3}\)FD) demonstrate that our IGNet can generate visually appealing fused images while scoring averagely 2.59% [email protected] and 7.77% mIoU higher in detection and segmentation than the compared state-of-the-art methods. The source code of the proposed IGNet is available at [https://github.com/lok-18/IGNet](https://github.com/lok-18/IGNet).
Image fusion, graph neural network, cross-modality interaction, leader node
Footnote †: Corresponding author: Jiansheng Chen.
## 1. Introduction
Due to the inadequacy of single-modality imaging, the resulting images are commonly defective in complex scenes [22, 21]. As a representative, visible images are more in line with the human visual system (HVS), but are susceptible to environmental factors. In this case, researchers attempt to fuse visible images with those of another modality to counteract the disadvantages of single-modality imaging. Complementarily, infrared images can capture salient targets with thermal radiation sensors, but their texture details and resolution are often unsatisfactory. Therefore, infrared and visible image fusion (IVIF) emerges as the times require, as it can possess information from different modalities simultaneously. Acting as an indispensable part of multi-modality imaging technology, IVIF has drawn extensive attention in computer vision tasks, _e.g._, vehicle detection [32], video surveillance [27] and image stitching [8].
For the past decade, deep learning networks have been introduced to explore the IVIF task [43], mainly comprising convolutional neural network (CNN)-based [19] and transformer-based methods [25]. These methods focus on accurate feature extraction for the inputs while significantly improving fusion efficiency. Compared with previous traditional approaches, deep learning-based methods can exploit more powerful feature extraction capabilities to obtain fusion results with higher efficiency. With further development, researchers have also paid attention to the performance of down-stream tasks after fusion (Song et al., 2018). That is to say, the results of down-stream tasks are closely related to the quality of the fused images.
Existing mainstream IVIF methods have reached a certain height; nevertheless, there are still several drawbacks: (i) the uneven distribution of infrared and visible information extracted by networks causes fusion results to be biased towards one modality (Han et al., 2017), so that the prominent regions of the source images cannot be represented well. (ii) since feature learning often acts separately on each single branch, the information contained in networks may lack cross-modality communication (Song et al., 2018). (iii) the internal design of message delivery is not well considered in several networks (Song et al., 2018), so some significant details of the source images cannot be displayed in the fused results.
To alleviate the drawbacks mentioned above, in this paper, we propose an interactive GNN-based architecture between cross modalities for the IVIF task, termed IGNet. Concretely, multi-scale shallow features are first extracted by convolutions and the proposed structure salience module (SSM). Then, we construct a graph interaction module (GIM) to obtain graph structures of different branches for feature learning. Note that the interaction of cross-modality graph features enables the proposed IGNet to capture more semantic information, which can improve the performance of downstream tasks, _e.g._, object detection and image segmentation. In addition, the establishment of leader nodes guides message propagation effectively to avoid image quality degradation caused by feature loss. Fig. 1 shows that our proposed IGNet maintains a superior position in both subjective visual results and objective scores compared with state-of-the-art methods.
In brief, the contributions can be divided into the following aspects:
* For optimizing the internal relationship of fusion and downstream (_i.e._, object detection and image segmentation) tasks, to the best of our knowledge, we are the first to apply GNN into the IVIF method. To this end, the fused results can contain faithful visual representation and feature comprehension abilities.
* We propose a graph interaction module (GIM) for getting graph structures. It can proceed cross-modality communication through graph features, which highlight the desired details of fusion results. Furthermore, the semantic-wise information can also be extracted by GIM for improving down-stream results.
* Unlike the common GNN, the leader nodes are employed for information delivery after achieving graphs. Accompanied by a leader node as a pioneer, fusion images can maintain abundant textures from source inputs.
* We conduct image fusion, detection, and segmentation experiments on TNO, M\({}^{3}\)FD, and MFNet datasets. Compared with the other seven state-of-the-art approaches, our proposed IGNet performs foremost in all tasks.
## 2. Related Works
### Infrared and Visible Image Fusion
Deep learning has promoted rapid development in the field of image fusion (Liu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019). In the early stages, researchers were dedicated to improving the performance of fused images via CNN-based methods, which are mainly divided into three classes, _i.e._, End-to-End models (Liu et al., 2019; Liu et al., 2019), Encoder-Decoder models (Liu et al., 2019), and generative adversarial network (GAN)-based models (Wang et al., 2019).
More specifically, End-to-End models preset parameters before unsupervised training (Han et al., 2017). Liu _et al._(Liu et al., 2019) proposed a coarse-to-fine deep network in an end-to-end manner to learn multi-scale features from infrared and visible images, where the structure details are refined by the proposed edge-guided attention mechanism. The Encoder-Decoder models need to design a fusion rule to integrate features extracted from the encoder, and then output the fusion results from the decoder (Zhao et al., 2019). Zhao _et al._(Zhao et al., 2019) conducted a novel encoder to decompose source images into background and detail feature maps, which can highlight targets, especially in the dark. The GAN-based models require a generator and a discriminator for adversarial learning. Li _et al._(Li et al., 2019) effectively combined the attention mechanism with GAN, namely AttentionFGAN. Moreover, extensive transformer-based models have also received much attention in the IVIF task (Wang et al., 2019). Tang _et al._(Tang et al., 2019) utilized Swin Transformer and cross-domain long-range learning in the IVIF task, which connects local features with global representation.
To further explore the performance of fusion images, researchers have introduced down-stream tasks, _e.g._, object detection and image segmentation, into the IVIF task. As a representative, Liu _et al._(Liu et al., 2019) proposed a unified architecture and built a multi-modality dataset for image fusion and detection. Sun _et al._(Sun et al., 2019) employed the information back-propagated by the detection loss in the proposed network to obtain fused images with excellent detection results. For getting more semantic features, Tang _et al._(Tang et al., 2019) proposed a cascaded structure called SeAFusion, which connects the fusion network with a pretrained segmentation module. Zhao _et al._(Zhao et al., 2019) conducted a novel two-stage training mode for fusion. The detection and segmentation results also performed well in this benchmark.
### Graph Neural Network
In recent years, GNN-based approaches have become increasingly popular in computer vision. Different from traditional CNN-based methods, the unique structure of GNN enables the extraction and transfer of more effective features. Therefore, GNN is commonly implemented in feature-wise tasks. As a representative, Xie _et al._(Xie et al., 2018) proposed a scale-aware network with GNN to conduct few-shot semantic segmentation. In the medical field, Huang _et al._(Huang et al., 2018) employed a semi-supervised network for medical image segmentation, which could help doctors diagnose diseases better. Recently, GNN has become popular in the field of saliency detection, as it can effectively highlight the salient mask of measured targets. Specifically, Luo _et al._(Luo et al., 2019) tried to cascade graph structures for salient object detection (SOD) with RGB-D images. Song _et al._(Song et al., 2018) devised a multiple graph module to realize the RGB-T SOD task. GNN can also be applied in Co-Saliency Detection (CSD) and Instance Co-Segmentation (ICS). Li _et al._(Li et al., 2019) presented a general adaptive GNN-based module to deal with CSD and ICS. In addition, some low-level tasks can also perform well using GNN as their backbone. Li _et al_. (Li et al., 2018) proposed a novel GNN-based method for image denoising. In summary, GNN maintains sensitivity to semantic information while handling pixel-level tasks well. Hence, our proposed IGNet can exploit the advantages of GNNs for a deeper exploration of IVIF tasks, which can simultaneously improve the quality of fused images and the performance of corresponding downstream tasks.
## 3. Method
### Motivation
In the IVIF task, networks often extract features in the infrared and visible branches separately, while ignoring the interaction between modalities. This may cause textures of the source images to be incompletely displayed in the fusion results. With the information delivery during training, the occurrence of feature forgetting is also inevitable. Besides, the fused images directly affect the down-stream results. There is no doubt that applying an effective architecture to achieve visually appealing images can improve the accuracy of detection and segmentation. Significantly, how to obtain fused images with prominent targets, fine textures, and rich semantic information is the key to handling the above issues. Hence, our motivation is to realize a general IVIF framework, which can obtain fusion and semantic information in the pixel and feature domains concurrently.
### Overall Workflow
The proposed IGNet adopts a dual-branch framework in the feature learning stage. Subsequently, we aggregate the infrared and visible branches to achieve fusion images. The overall pipeline is illustrated in Fig. 2. To be specific, two different-scale features (_i.e_., \(\ell_{1}^{*}\) and \(\ell_{2}^{*}\)) can be generated by the first two convolutional layers, where \(*\) denotes the infrared/visible branch. Then, we modify \(\ell_{2}^{*}\) through the SSM to obtain the salient-structure feature \(\ell_{3}^{*}\). It is formulated as follows:
\[\ell_{3}^{*}=\mathcal{S}(\ell_{2}^{*}), \tag{1}\]
where \(\mathcal{S}\) means the SSM. For constructing connections between source images, \(\ell_{i}^{*}\) is fed into the GIM to build a learnable graph structure with three loops. We define this process as follows:
\[\mathrm{g}_{*}=\sum_{i=1}^{3}\mathcal{G}(\ell_{i}^{*}), \tag{2}\]
where \(i\in\{1,2,3\}\), \(\mathrm{g}_{*}\) denotes the graph features and \(\mathcal{G}\) is the GIM. At last, we combine the decorated features to achieve the final fusion results:
\[\mathrm{I}_{f}=Conv\big{(}Concat\big{(}\mathrm{g}_{ir},\mathrm{g}_{vis}\big{)} \big{)}, \tag{3}\]
where \(\mathrm{I}_{f}\) means the fused images. \(Conv(\cdot)\) and \(Concat(\cdot)\) represent the convolution and concatenation operations, respectively. Moreover, the employed loss function can effectively transfer information through back-propagation, which is explicated in Section 3.5.
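A structural sketch of Eqs. (1)-(3) is given below; the SSM and GIM are passed in as placeholder modules, the channel width is arbitrary, and the cross-modality coupling inside the GIM (Section 3.4) is abstracted away here:

```python
import torch
import torch.nn as nn

class IGNetSkeleton(nn.Module):
    """Two conv stages, an SSM, a per-scale GIM pass, and a conv fusion head."""
    def __init__(self, ssm: nn.Module, gim: nn.Module, ch: int = 16):
        super().__init__()
        self.conv1 = nn.Conv2d(1, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.ssm, self.gim = ssm, gim
        self.fuse = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def branch(self, x):
        f1 = torch.relu(self.conv1(x))
        f2 = torch.relu(self.conv2(f1))
        f3 = self.ssm(f2)                                # Eq. (1)
        return sum(self.gim(f) for f in (f1, f2, f3))    # Eq. (2)

    def forward(self, ir, vis):
        g = torch.cat([self.branch(ir), self.branch(vis)], dim=1)
        return self.fuse(g)                              # Eq. (3)

# smoke test with identity placeholders for the SSM and GIM
net = IGNetSkeleton(nn.Identity(), nn.Identity())
print(net(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)).shape)
```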
### Structure Salience Module
As shown in Fig. 2, we use the SSM to optimize \(\ell_{2}^{*}\), deepening the expression of deep structure features. After passing through a convolutional layer, the SSM conducts Maxpooling and Avgpooling to coordinate detailed patches and global information simultaneously. We use Element-wise Multiplication to combine the two pooling outputs, which can excavate more salient content from infrared images. Since more detailed textures are contained in the visible branch, Element-wise Addition is exploited there to enrich the overall perception instead.
Inspired by SENet (Shen et al., 2017), we also introduce attention into the SSM. Firstly, the aforementioned feature is flattened by Global Average Pooling (GAP). Secondly, we apply two Fully Connected Layers and a Sigmoid to generate the corresponding channel weight. This can not only increase the salience of the feature representation but also highlight parts that conform to the HVS in fused images. Finally, we multiply the feature with the channel weight to achieve the salient-structure feature \(\ell_{3}^{*}\).
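A possible PyTorch rendering of the SSM is sketched below; the shape-preserving 3×3 pooling windows and the reduction ratio of the two fully connected layers are our assumptions, since the text does not fix them:

```python
import torch
import torch.nn as nn

class SSM(nn.Module):
    """Structure salience module: max/avg pooling combined per modality,
    followed by SENet-style channel attention."""
    def __init__(self, ch: int, branch: str = "ir", reduction: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.maxp = nn.MaxPool2d(3, stride=1, padding=1)
        self.avgp = nn.AvgPool2d(3, stride=1, padding=1)
        self.branch = branch
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, f):
        x = self.conv(f)
        if self.branch == "ir":            # multiplication excavates salience
            x = self.maxp(x) * self.avgp(x)
        else:                              # addition enriches overall perception
            x = self.maxp(x) + self.avgp(x)
        w = self.fc(x.mean(dim=(2, 3)))    # GAP -> FC -> FC -> Sigmoid
        return x * w[:, :, None, None]     # channel-weighted feature
```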
Figure 2. Pipeline of the proposed IGNet. Specifically, we feed multi-scale features into the graph interaction module (GIM) for generating graph structures in different modalities. The cross-modality interaction between graphs is depicted in detail. The leader nodes guide the information delivery from one graph to the latter. Note that we construct graphs in the infrared/visible branch with three loops, respectively. The bottom row represents the legend of the component.
### Graph Interaction Module
We design a graph structure for information learning and interaction between different modalities in GIM, which can improve the quality of fusion results. Furthermore, it enables images to contain more high-level information, so that the down-stream tasks (_i.e._, object detection, image segmentation) also perform well. The middle of Fig. 2 shows the specific workflow of the GIM.
Taking the infrared branch as an example, we feed the former features \(\mathbf{f}_{i}^{ir}\) of different scales, acting as pioneer factors, into the GIM for graph generation. Note that the GIM implements three loops of graph structures with three nodes each in every branch, to balance the performance of fusion results and operational efficiency. Detailed ablation experiments are conducted in Section 4.6. In the process of creating graphs, we connect nodes of different scales from the same modality and nodes of the same scale from different modalities concurrently. This interactive way restricts information imbalance while enhancing the representation of each input in the fused images. After obtaining a graph, its nodes constitute a corresponding leader node \(\mathbf{g}_{i}^{ir}\) to guide information delivery for the latter graph. Owing to the assistance of leader nodes, the GIM can resist information loss, improving the capability of feature learning. The leader nodes \(\mathbf{g}_{1}^{ir}\), \(\mathbf{g}_{2}^{ir}\) and \(\mathbf{g}_{3}^{ir}\) are finally mixed together to achieve the graph feature \(\mathbf{g}_{ir}\).
#### 3.4.1. Node and Edge Generation
Aimed at ensuring the diversity of features, we divide them into nodes of different scales through the pyramid pooling module (PPM) (Wang et al., 2017). Fig. 3 (a) describes the detailed process of node generation. We employ pyramid pooling, convolution, and upsample operations to split \(\mathbf{f}_{i}^{*}\) into multiple scales to obtain the nodes in the graph. Note that the nodes and \(\mathbf{f}_{i}^{*}\) keep consistent shapes except for the number of channels. This process can be expressed as follows:
\[(\mathbf{g}_{i}^{*})_{j}=Up\Big{(}Conv\big{(}\mathcal{P}(\mathbf{f}_{i}^{*})\big{)}\Big{)}, \tag{4}\]
where \((\mathbf{g}_{i}^{*})_{j}\) represents the \(j\)-th (\(j\in\{1,2,3\}\)) node of the \(i\)-th (\(i\in\{1,2,3\}\)) graph in the \(*\) (infrared/visible) modality. \(Up\) and \(\mathcal{P}\) denote the upsample and pyramid pooling operations.
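Eq. (4) can be sketched as follows; the pyramid bin sizes and the 1×1 convolutions are illustrative choices rather than values taken from the paper:

```python
import torch.nn as nn
import torch.nn.functional as F

class NodeGenerator(nn.Module):
    """Split one feature map into three same-resolution nodes, Eq. (4)."""
    def __init__(self, ch: int, node_ch: int, bins=(1, 2, 4)):
        super().__init__()
        self.bins = bins
        self.convs = nn.ModuleList(nn.Conv2d(ch, node_ch, 1) for _ in bins)

    def forward(self, f):
        h, w = f.shape[-2:]
        nodes = []
        for bin_size, conv in zip(self.bins, self.convs):
            p = F.adaptive_avg_pool2d(f, bin_size)        # pyramid pooling P
            nodes.append(F.interpolate(conv(p), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return nodes  # three nodes (g_i)_1..(g_i)_3, each (B, node_ch, H, W)
```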
The production of edges in Fig. 3 (b) also plays an essential role in graph generation, as it carries the information transmission between nodes. We build edges between different-scale nodes from the same modality. Distinctively, nodes of the same scale from different modalities are linked to learn more semantic-level relations. The edge generation between \(\mathbf{g}_{j}\) and \(\mathbf{g}_{k}\) is bidirectional and defined as:
\[\mathbf{e}_{j,k}=Conv(\mathbf{g}_{j}-\mathbf{g}_{k}), \tag{5}\]
\[\mathbf{e}_{k,j}=Conv\big{(}\mathcal{N}(\mathbf{g}_{j}-\mathbf{g}_{k})\big{)}, \tag{6}\]
where \(\mathcal{N}\) means the negation operation. \(\mathbf{e}_{j,k}\) (\(\mathbf{e}_{k,j}\)) is the edge embedded from \(\mathbf{g}_{j}\) (\(\mathbf{g}_{k}\)) to \(\mathbf{g}_{k}\) (\(\mathbf{g}_{j}\)). In addition, the message passing \(\mathbf{m}_{j,k}\) can be formulated as:
\[\mathbf{m}_{j,k}=Sigmoid(\mathbf{e}_{j,k})\cdot\mathbf{g}_{j}, \tag{7}\]
where _Sigmoid_ denotes the Sigmoid operation.
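Eqs. (5)-(7) translate into a small module; the 3×3 convolutions and the symmetric reverse message are our reading of Fig. 3 (b):

```python
import torch
import torch.nn as nn

class EdgeMessage(nn.Module):
    """Bidirectional edges from node differences with gated messages."""
    def __init__(self, ch: int):
        super().__init__()
        self.conv_jk = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_kj = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, g_j, g_k):
        e_jk = self.conv_jk(g_j - g_k)       # Eq. (5)
        e_kj = self.conv_kj(-(g_j - g_k))    # Eq. (6): negation N(.)
        m_jk = torch.sigmoid(e_jk) * g_j     # Eq. (7): message from g_j to g_k
        m_kj = torch.sigmoid(e_kj) * g_k     # symmetric message back to g_j
        return m_jk, m_kj
```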
#### 3.4.2. Leader Node and Information Delivery
In Fig. 4 (a), the introduction of leader nodes makes the delivery of semantic information between nodes in the graph more effective, which can be represented as follows:
\[\mathbf{g}_{i}^{*}=Conv\Big{(}Concat\big{(}(\mathbf{g}_{i}^{*})_{1},(\mathbf{g }_{i}^{*})_{2},(\mathbf{g}_{i}^{*})_{3}\big{)}\Big{)} \tag{8}\]
In the process of information delivery, as shown in Fig. 4 (b), the leader node generates the corresponding leader weight through the GAP and Sigmoid operations. After the three former nodes pass through convolutions, we multiply them with the leader weight in the channel domain. Finally, the extracted multi-level features are propagated into the latter nodes, so that both details and targets can be embodied clearly in the fused images.
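Eq. (8) and the delivery step of Fig. 4 can be combined into one sketch; the 1×1 fusion convolution and the per-node 3×3 convolutions are assumptions:

```python
import torch
import torch.nn as nn

class LeaderNode(nn.Module):
    """Fuse three nodes into a leader node, then gate the next graph's nodes."""
    def __init__(self, node_ch: int):
        super().__init__()
        self.fuse = nn.Conv2d(3 * node_ch, node_ch, 1)
        self.node_convs = nn.ModuleList(
            nn.Conv2d(node_ch, node_ch, 3, padding=1) for _ in range(3))

    def forward(self, nodes):
        leader = self.fuse(torch.cat(nodes, dim=1))     # Eq. (8)
        w = torch.sigmoid(leader.mean(dim=(2, 3)))      # GAP + Sigmoid weight
        delivered = [conv(n) * w[:, :, None, None]      # channel-wise gating
                     for conv, n in zip(self.node_convs, nodes)]
        return leader, delivered
```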
### Loss Function
To guarantee that more meaningful information can be learned during the training phase, we introduce three varieties of loss functions, _i.e._, the pixel loss \(\mathcal{L}_{\text{MSE}}\), the edge loss \(\mathcal{L}_{\text{edge}}\) and the structure loss \(\mathcal{L}_{\text{SSIM}}\). The combined \(\mathcal{L}_{\text{total}}\) can be written as follows:
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{MSE}}+\alpha\mathcal{L}_{\text{ edge}}+\beta\mathcal{L}_{\text{SSIM}}, \tag{9}\]
where \(\alpha\) and \(\beta\) are preset hyperparameters with the values of 10 and 0.5, respectively. Specifically, the mean squared error (MSE) measures the pixel-intensity difference between the source images and the fusion result. Note that we average the source images before the calculation. It can be defined as:
\[\mathcal{L}_{\text{MSE}}=\text{MSE}\big{(}(\mathbf{I}_{ir}+\mathbf{I}_{vis})/2, \mathbf{I}_{f}\big{)}, \tag{10}\]
where \(\mathbf{I}_{ir}\) and \(\mathbf{I}_{vis}\) denote the infrared and visible images, respectively. To highlight edge details, \(\mathcal{L}_{\text{edge}}\) takes the larger gradient value of the infrared/visible images as the edge-gradient target:
\[\mathcal{L}_{\text{edge}}=\parallel\triangledown\mathbf{I}_{f}-max(\triangledown \mathbf{I}_{ir},\triangledown\mathbf{I}_{vis})\parallel_{1}^{2}, \tag{11}\]
where \(\triangledown\) is the gradient operator and \(\parallel\cdot\parallel_{1}\) is the \(l_{1}\)-norm. Besides, the structural similarity index measure (SSIM) (Srivastava et al., 2017) can calculate the similarity between the source images and the fusion image, which is expressed as follows:
\[\mathcal{L}_{\text{SSIM}}=\big{(}1-\text{SSIM}(\mathbf{I}_{f},\mathbf{I}_{ir}) \big{)}+\big{(}1-\text{SSIM}(\mathbf{I}_{f},\mathbf{I}_{vis})\big{)}. \tag{12}\]
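Putting Eqs. (9)-(12) together gives the sketch below; the Sobel filters standing in for \(\triangledown\), the mean reduction of the edge term, and the `pytorch_msssim` package as the SSIM backend are all our assumptions:

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed external dependency

def grad(x):
    """Sobel gradient magnitude for single-channel (B, 1, H, W) images."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    kx = kx.view(1, 1, 3, 3).to(x)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, kx.transpose(2, 3), padding=1)
    return gx.abs() + gy.abs()

def total_loss(i_f, i_ir, i_vis, alpha=10.0, beta=0.5):
    l_mse = F.mse_loss((i_ir + i_vis) / 2, i_f)                  # Eq. (10)
    l_edge = (grad(i_f) - torch.max(grad(i_ir),
                                    grad(i_vis))).abs().mean()   # Eq. (11)
    l_ssim = (1 - ssim(i_f, i_ir, data_range=1.0)) \
           + (1 - ssim(i_f, i_vis, data_range=1.0))              # Eq. (12)
    return l_mse + alpha * l_edge + beta * l_ssim                # Eq. (9)
```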
Figure 4. Specific illustration of (a) leader node generation and (b) information delivery.
Figure 3. Specific illustration of (a) node generation and (b) edge generation.
With the help of the above loss function, the pixel and structural level information can be fully retained, which makes the fusion and down-stream results perform well.
## 4. Experiments
In this section, we first introduce the experimental setup, comparison approaches, and dataset selection. Then, we analyze the fusion, detection, and segmentation results separately to verify the superiority of our proposed method. Furthermore, ablation experiments are conducted to demonstrate the effectiveness of the proposed modules.
### Experimental Implementation
In the training phase, we choose the Adam optimizer to adjust the training parameters, where the stride and batch size are set to 8 and 2. The initial learning rate of the network is \(1\mathrm{e}^{-3}\) with a decay rate of \(2\mathrm{e}^{-4}\). The total number of epochs is 100. In the loss function, the hyperparameters \(\alpha\) and \(\beta\) are set to 10 and 0.5, respectively. The selection of training datasets is presented in Section 4.2. All experiments are implemented on an NVIDIA GeForce 3070Ti GPU with the PyTorch framework.
### Dataset Selection and Comparison Approaches
The TNO (Wang et al., 2017), M\({}^{3}\)FD (Kipf and Welling, 2017) and MFNet (Chen et al., 2017) datasets contain plenty of infrared and visible image pairs. Moreover, the M\({}^{3}\)FD and MFNet datasets also have image pairs that have been labeled for detection and segmentation. Before training, we combine 15 TNO pairs, 150 M\({}^{3}\)FD pairs and 1083 MFNet pairs as the training set of our IGNet. The testing set consists of 10 TNO pairs, 150 M\({}^{3}\)FD pairs, and 361 MFNet pairs. Note that the division of the TNO and M\({}^{3}\)FD datasets is random, while the MFNet split follows (Wang et al., 2017).
We select seven state-of-the-art methods, including DIDFuse (Wang et al., 2017), U2Fusion (Wang et al., 2017), SDNet (Wang et al., 2017), TarDAL (Kipf and Welling, 2017), UMFusion (Wang et al., 2017), DeFusion (Wang et al., 2017) and ReCoNet (Chen et al., 2017), to compare with our proposed IGNet in qualitative and quantitative results. For the fusion task, we apply six evaluation metrics, _i.e_., entropy (EN), visual information fidelity (VIF) (Chen et al., 2017), average gradient (AG), correlation coefficient (CC) (Li et al., 2018), the sum of the correlations of differences (SCD) (Chen et al., 2017) and the edge-based similarity measure (\(Q_{\text{ab}/\text{f}}\)), for objective estimation. Larger values of the above-mentioned metrics indicate better image quality.
In the detection task, 4200 pairs of labeled images are employed as training, validation, and testing sets in a ratio of 8:1:1. The labels are marked into six categories, _i.e_., people, bus, car, motorcycle, truck, and lamp. A mainstream detector, YOLOv5 (Liu et al., 2018), is conducted for detection. We set the optimizer, learning rate, epoch, and batch size as the SGD optimizer, \(1\mathrm{e}^{-2}\), 400, and 8, respectively. The [email protected] is measured for quantitative comparison. Moreover, we utilize DeepLabV3+ (Chen et al., 2017) to segment the fusion results, with the MFNet dataset as the training and testing sets. There are nine labels in the sets, including background, car, person, bike, curve, car stop, guardrail, color cone, and bump. The training epoch and batch size are set as 300 and 8, while other parameters keep the same as in the original experiment. The mIoU is selected for objective evaluation. In summary, we feed the fusion images of each comparison approach to the down-stream tasks and then analyze their corresponding performance.
### Analysis for Fusion Results
#### 4.3.1. Qualitative Analysis
We depict qualitative results on the TNO, MFNet, and M\({}^{3}\)FD datasets in Fig. 5. Obviously, our results outperform those of other state-of-the-art methods. For instance, targets and surrounding scenes obscured by smoke can be clearly displayed on the TNO dataset. In the second illustration, TarDAL and ReCoNet produce over-exposed regions, while U2Fusion, SDNet and UMFusion show low-contrast performance. Although DIDFuse can highlight luminance information (_e.g_., car lights), its background discards many texture details, which is unfriendly to the HVS. In addition, benefiting from the cooperation of the GNN, blur artifacts can be effectively mitigated, as shown in the green enlarged patch of the third row.
#### 4.3.2. Quantitative Analysis
In Table 1, we enumerate the mean scores of the six metrics on the three testing sets. From an overall perspective, the quantitative results of our method take the leading position. Specifically, CC and SCD achieve the highest scores, which indicates that the mutual connection between our fusion images and the source inputs is the tightest. The highest value of \(Q_{\text{ab}/\text{f}}\) reflects that the edge contours of targets can be well represented. Moreover, the higher performance of EN and AG demonstrates that a large amount of information is preserved in our fusion results. Since our approach puts greater emphasis on information delivery, the VIF value also stays at a higher level.
### Analysis for Detection Results
#### 4.4.1. Qualitative Analysis
As shown in Fig. 6, the disturbance of environmental factors causes the detection results of single-modal images to be generally weaker than those of fusion results. However, the sensitivity of different fusion results to detection is also varied. In the first row of examples, SDNet and DeFusion present significantly low confidence and error detection regions, which may mislead observers. Moreover, "Truck" is detected as "Car" in the second set, while missing detection of cars in the corner also occurs. As a representative, our fusion results contain rich advanced features, so that the corresponding detection results can avoid the above phenomena. We can also notice that our detection results achieve high-confidence scores on all labeled categories.
#### 4.4.2. Quantitative Analysis
Table 2 exhibits the [email protected] of each label and the corresponding total [email protected] measured on the detection results of the fused images, which obtain higher indicators than single infrared or visible images. Under the comparison of fusion methods, our proposed IGNet performs 2.59% higher than the others in detection. It is worth noting that IGNet can not only achieve excellent detection results but also take into account the quality of fusion images.
### Analysis for Segmentation Results
#### 4.5.1. Qualitative Analysis
Visual results of the segmentation on the MFNet dataset are presented in Fig. 7. Similarly, we also employ the fusion results of each method as the input to obtain segmentation results. Due to the limited semantic information contained in their images, DIDFuse, U2Fusion, SDNet, and ReCoNet exhibit some missing segmentation areas in the first sample. Except for IGNet, the methods cannot accurately segment the "Color cone" in the second example. It is worth mentioning that our proposed method can exploit cross-modality interaction features to efficiently segment the contours of labeled objects.
#### 4.5.2. Quantitative Analysis
Table 3 depicts the quantitative segmentation metric IoU for different categories, which shows that IGNet outperforms the other fusion methods in the segmentation task. Compared with the second-ranked method, our method improves mIoU by 4.87%. For some infrared-sensitive labels, _e.g._, person, higher scores indicate that our method can more easily highlight thermal targets. Due to the high fidelity of the fused images, the IoU of some visually distinct labels, _e.g._, car and bike, also keeps a high level. Note that the proposed IGNet can generate vivid fusion images while achieving accurate segmentation results.
### Ablation Experiments
#### 4.6.1. Study on Modules
The proposed SSM and GIM play a key role in improving the fusion effect. It is obvious that fusion results perform poorly in luminance without the SSM, as shown in Fig. 8. Also, the cross-modality features of the infrared and visible branches can not
\begin{table}
\begin{tabular}{c|c c c c c c|c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{8}{c|}{**Dataset:TNO**} & \multicolumn{8}{c|}{**Dataset:MFNet**} & \multicolumn{8}{c}{**Dataset:M\({}^{3}\)FD**} \\ & EN & VIF & AG & CC & SCD & \(\text{Q}_{\text{ab}/\text{f}}\) & EN & VIF & AG & CC & SCD & \(\text{Q}_{\text{ab}/\text{f}}\) & EN & VIF & AG & CC & SCD & \(\text{Q}_{\text{ab}/\text{f}}\) \\ \hline DIDFuse & 7.066 & 0.738 & **5.150** & 0.503 & 1.726 & 0.413 & 2.695 & 0.277 & 2.005 & 0.526 & 1.007 & 0.176 & 7.108 & **0.879** & 5.663 & 0.558 & 1.666 & 0.482 \\ U2Fusion & 6.844 & 0.663 & 5.062 & 0.242 & **1.739** & 0.444 & 4.612 & 0.503 & 2.899 & 0.627 & 1.262 & 0.364 & 7.090 & 0.831 & 5.546 & **0.569** & **1.753** & 0.524 \\ SDNet & 6.682 & 0.661 & 5.059 & 0.501 & 1.562 & **0.450** & 5.428 & 0.474 & 3.054 & **0.642** & 1.111 & 0.410 & 7.013 & 0.729 & 5.514 & 0.500 & 1.544 & **0.525** \\ TarDAL & **7.163** & **0.800** & 4.789 & 0.484 & 1.670 & 0.412 & **6.478** & 0.699 & **3.140** & 0.628 & **1.526** & 0.420 & **7.126** & 0.812 & 4.140 & 0.510 & 1.450 & 0.407 \\ UMFusion & 6.699 & 0.673 & 3.710 & **0.516** & 1.677 & 0.409 & 5.761 & 0.488 & 2.442 & 0.597 & 1.077 & 0.299 & 6.881 & 0.771 & 3.420 & 0.546 & 1.618 & 0.470 \\ DeFusion & 6.724 & 0.712 & 2.996 & 0.493 & 1.592 & 0.325 & 5.950 & **0.759** & 2.855 & 0.589 & 1.339 & **0.471** & 6.634 & 0.740 & 3.027 & 0.513 & 1.366 & 0.412 \\ ReCoNet & 6.682 & 0.728 & 3.674 & 0.481 & 1.732 & 0.340 & 3.894 & 0.544 & 3.105 & 0.544 & 1.190 & 0.365 & 6.740 & 0.867 & 4.557 & 0.515 & 1.495 & 0.499 \\ IGNet & **7.099** & **0.764** & **5.247** & **0.521** & **1.756** & **0.459** & **6.124** & **0.762** & **3.290** & **0.655** & **1.562** & **0.485** & **7.140** & **0.882** & **5.615** & **0.575** & **1.762** & **0.539** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Quantitative comparisons of our IGNet with seven state-of-the-art methods on TNO, MFNet and M\({}^{3}\)FD datasets. Optimal and suboptimal results are bolded in red and blue, respectively.
Figure 5. Visual comparisons of different approaches on TNO, MFNet and M\({}^{3}\)FD datasets. Our proposed IGNet can achieve notable targets and fine background details. The enlarged red and green circles are detailed patches of fusion results.
interact with each other without the decoration of the GIM, which causes low contrast and halo artifacts in the images. Furthermore, Fig. 9 reports the results of down-stream tasks. Due to the abundant semantic information extracted by the proposed modules, the full model can simultaneously obtain high-confidence detection and accurate segmentation results. The quantitative comparisons are depicted in Table 4. It is not difficult to see that the utilization of our proposed modules can bridge fusion and downstream tasks in a mutually beneficial way.
#### 4.6.2. Study on Leader Node
In order to avoid intermediate feature loss, we use leader nodes to guide the information delivery. Without the help of leader nodes, fused images often appear distorted in color. Meanwhile, some erroneous regions may emerge in the detection and segmentation results. In contrast, IGNet makes full use of the feature maps delivered by the leader nodes inside graphs, enabling semantic information to be revealed in the fused images. Fig. 10 shows the superiority of our proposed method on two different datasets.
#### 4.6.3. Study on Parameters of Graph
We select one, three, and five nodes to construct each graph structure, aiming at verifying how the number of nodes N influences the results. Except for the number of nodes, other parameters remain unchanged. It can be seen from Table 5 that when there is only a single node in a graph, the quantitative indicators perform undesirably. As the number increases to five, its performance is almost indistinguishable from our results (N = 3). However, the operating efficiency of the network decreases as N rises. Considering this issue, we employ three nodes in each graph, which can balance the quality of images and inference speed. Similarly, the number of loops L is preset to three for a trade-off.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{6}{c|}{**[email protected]**} & \multirow{2}{*}{**[email protected]**} \\ & \multicolumn{1}{c}{People} & \multicolumn{1}{c}{Bus} & \multicolumn{1}{c}{Car} & \multicolumn{1}{c}{Motor} & \multicolumn{1}{c}{Truck} & \multicolumn{1}{c|}{Lamp} & \\ \hline Infrared & 0.807 & 0.782 & 0.888 & 0.640 & 0.652 & 0.703 & 0.745 \\ Visible & 0.708 & 0.780 & 0.911 & 0.702 & 0.697 & 0.865 & 0.777 \\ DIDFuse & 0.800 & 0.798 & 0.924 & 0.681 & 0.692 & 0.843 & 0.790 \\ U2Fusion & 0.793 & 0.785 & 0.916 & 0.663 & 0.710 & 0.872 & 0.789 \\ SDNet & 0.790 & 0.811 & 0.920 & 0.670 & 0.689 & 0.838 & 0.786 \\ TarDAL & **0.817** & 0.815 & **0.948** & 0.696 & 0.687 & 0.873 & **0.806** \\ UMFusion & 0.790 & 0.783 & 0.920 & **0.728** & 0.691 & 0.847 & 0.793 \\ DeFusion & 0.805 & **0.827** & 0.921 & 0.689 & **0.714** & **0.876** & 0.805 \\ ReCoNet & 0.792 & 0.784 & 0.915 & 0.693 & 0.698 & **0.873** & 0.792 \\ IGNet & **0.816** & **0.824** & **0.928** & **0.730** & **0.721** & 0.869 & **0.815** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Detection quantitative comparisons of our IGNet with seven state-of-the-art methods on M\({}^{3}\)FD dataset. Optimal and suboptimal results are bolded in red and blue, respectively.
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{9}{c|}{**IoU**} & \multirow{2}{*}{**mIoU**} \\ & \multicolumn{1}{c}{Bac} & \multicolumn{1}{c}{Car} & \multicolumn{1}{c}{Per} & \multicolumn{1}{c}{Bik} & \multicolumn{1}{c}{Cur} & \multicolumn{1}{c}{C.S} & \multicolumn{1}{c}{Gua} & \multicolumn{1}{c}{C.C} & \multicolumn{1}{c|}{Bum} & \\ \hline Infrared & 0.821 & 0.663 & 0.592 & 0.513 & 0.347 & 0.398 & 0.422 & 0.414 & 0.479 & 0.516 \\ Visible & 0.899 & 0.774 & 0.482 & 0.586 & 0.372 & 0.517 & 0.451 & 0.432 & 0.506 & 0.558 \\ DIDFuse & 0.971 & 0.790 & 0.582 & 0.599 & 0.358 & **0.526** & **0.619** & 0.442 & 0.557 & 0.604 \\ U2Fusion & 0.974 & 0.817 & **0.631** & 0.625 & 0.408 & 0.520 & 0.448 & **0.593** & **0.615** \\ SDNet & 0.973 & 0.782 & 0.614 & 0.618 & 0.361 & 0.500 & 0.527 & 0.425 & 0.527 & 0.591 \\ TarDAL & 0.970 & 0.795 & 0.563 & 0.591 & 0.342 & 0.497 & 0.553 & 0.425 & 0.538 & 0.586 \\ UMFusion & 0.972 & 0.787 & 0.607 & 0.616 & 0.364 & 0.493 & 0.479 & 0.447 & 0.485 & 0.583 \\ DeFusion & 0.975 & **0.820** & 0.609 & 0.623 & 0.401 & 0.488 & 0.482 & 0.471 & 0.548 & 0.601 \\ ReCoNet & 0.973 & 0.813 & 0.598 & **0.610** & **0.413** & 0.519 & 0.544 & 0.476 & 0.552 & 0.610 \\ IGNet & **0.976** & **0.838** & **0.639** & **0.667** & **0.435** & **0.532** & **0.626** & **0.511** & **0.586** & **0.645** \\ \hline \hline \end{tabular}
\end{table}
Table 3Segmentation quantitative comparisons of our IGNet with seven state-of-the-art methods on MFNet dataset. Optimal and suboptimal results are bolded in red and blue, respectively.
Figure 8. Visual ablation comparisons of the SSM (\(\mathcal{S}\)) and GIM (\(\mathcal{G}\)) for fusion. The enlarged red and green circles show detailed patches of the fusion results.
Figure 7. Segmentation visual comparisons of different fusion images on the MFNet dataset. Our proposed IGNet yields the most accurate segmentation results compared to the ground truth. The red and green regions represent erroneous and missing segmentation, respectively.
## 5. Conclusion
In this paper, an interactive cross-modality framework based on graph neural networks was proposed for infrared and visible image fusion. We presented a graph interaction module to learn mutual features from different branches, which can emphasize salient textures in the source images. To prevent information loss, leader nodes were proposed to guide the feature propagation between adjacent graphs. In addition, abundant semantic information was extracted by our method, allowing us to achieve well-performing detection and segmentation results. Extensive experiments prove that our method is effective for IVIF and downstream tasks.
In the future, we intend to bridge multi-modality fusion, target detection, and image segmentation in a unified framework. In other words, it is worth further exploring how to generate a fused image that also performs well in detection and segmentation tasks.
###### Acknowledgements.
This work was supported by the National Key R&D Program of China (2022ZD0117902) and by the National Natural Science Foundation of China (No. U20B2062).
|
2304.12501 | The cross-sectional stock return predictions via quantum neural network
and tensor network | In this paper, we investigate the application of quantum and quantum-inspired
machine learning algorithms to stock return predictions. Specifically, we
evaluate the performance of quantum neural network, an algorithm suited for
noisy intermediate-scale quantum computers, and tensor network, a
quantum-inspired machine learning algorithm, against classical models such as
linear regression and neural networks. To evaluate their abilities, we
construct portfolios based on their predictions and measure investment
performances. The empirical study on the Japanese stock market shows the tensor
network model achieves superior performance compared to classical benchmark
models, including linear and neural network models. Though the quantum neural
network model attains a lowered risk-adjusted excess return than the classical
neural network models over the whole period (the quantum neural
and tensor network models have superior performances in the latest market
environment), which suggests the models' capability of capturing
non-linearity between input features. | Nozomu Kobayashi, Yoshiyuki Suimon, Koichi Miyamoto, Kosuke Mitarai | 2023-04-25T00:05:13Z | http://arxiv.org/abs/2304.12501v2 | # The cross-sectional stock return predictions via quantum neural network and tensor network
###### Abstract
In this paper we investigate the application of quantum and quantum-inspired machine learning algorithms to stock return predictions. Specifically, we evaluate the performance of a quantum neural network, an algorithm suited for noisy intermediate-scale quantum computers, and a tensor network, a quantum-inspired machine learning algorithm, against classical models such as linear regression and neural networks. To evaluate their abilities, we construct portfolios based on their predictions and measure investment performances. The empirical study on the Japanese stock market shows that the tensor network model achieves superior performance compared to classical benchmark models, including linear and neural network models. Though the quantum neural network model attains a lower risk-adjusted excess return than the classical neural network models over the whole period, both the quantum neural network and tensor network models have superior performances in the latest market environment, which suggests the models' capability of capturing non-linearity between input features.
## 1 Introduction
The arrival of real quantum computers and experiments demonstrating quantum supremacy [1, 2] make it more plausible that a new computational paradigm will emerge by virtue of quantum computing. It is true that we are currently in the era of NISQ (noisy intermediate-scale quantum computers) [3] and must implement quantum error correction to realize the full picture of such a paradigm, but the rapid progress of quantum technologies has already opened a new window for research in various fields, such as quantum chemistry, optimization, machine learning, and finance. It is therefore worth looking for practical applications of quantum computers even in the NISQ era. The framework of variational quantum algorithms (VQAs) [4] is thought to be an effective approach towards this goal. It has been applied, for instance, to solve machine learning problems [5, 6].
Machine learning techniques developed within the framework of VQAs are essentially equivalent to those using tensor networks [7, 8, 9], which were originally invented as a tool to simulate quantum physics on classical computers [10, 11]. The ability to factorize an exponentially large tensor into a series of smaller tensors has also allowed the machine learning community to successfully solve various machine learning problems [12, 8, 9]. Tensor networks can consequently be considered as quantum-inspired machine learning algorithms.
Given the growing interest in quantum and quantum-inspired machine learning algorithms, it is important to study their applicability to real-world problems, which is, however, little known so far, partly due to the current limitation of computational resources for quantum computers and their simulators. In this work, to address this issue, we consider a real-world financial problem, namely the prediction of stock returns, employing quantum and quantum-inspired machine learning algorithms.
Stock return prediction has been a principal problem in finance. Ever since the work by Fama and French [13], who provided empirical evidence that the notion of so-called factors is effective in explaining returns, significant efforts have been made to find unseen factors that have predictive power for stock returns. Among practical investors, multi-factor models, which regress stock returns linearly on a set of factors, are commonly used thanks to their simplicity and interpretability, though they lack expressibility due to the absence of interaction terms between factors. Machine learning has therefore been becoming an alternative to them. Various studies, [14, 15, 16, 17, 18] to name but a few, have been conducted on stock return predictions with machine learning methods, which can capture non-linearity in contrast to multi-factor models.
Our interest here is to test whether quantum or quantum-inspired techniques can be applied to predict stock returns and whether they have a competitive advantage over classical machine learning algorithms in that task. To this end, using a set of stocks in the Japanese stock market, we conduct portfolio backtesting over 10 years based on stock return predictions by a quantum neural network, a tensor network, standard linear regression, and a neural network, and compare their performances. As a result, we find that the tensor network model outperforms the other models, while the quantum neural network model is inferior to the neural network models over the whole backtesting period. We also observe that in the latest market environment the quantum neural network model performs better than the neural network models, which might be related to the overfitting problem. This experiment suggests that quantum neural networks and tensor networks may be able to learn non-linear and interaction effects among features, and that they have potential for use in return predictions beyond the conventional models.
This paper is organized as follows. In Section 2, we explain the definition of our problem, the stock return predictions, then describe both classical and quantum machine learning algorithms we use in our analysis. Section 3 presents the methodology to conduct our backtesting experiment and then shows its results, using quantitative metrics that are often used to evaluate an investment performance. Finally, in Section 4 we conclude our analysis and discuss some future directions for further research.
## 2 Methodology
This section collects all the ingredients we use in our analysis. First we define our objective as stock return prediction by cross-sectional analysis, which is based on comparing each stock to others at a point in time, and describe the general methodology to tackle the problem. Then we explain the classical models for return predictions, namely the linear regression and the neural network models. Both models are used as benchmarks against the quantum ones in our experiment, for the following reasons. The linear regression model is one of the most traditional models and is widely employed both by academicians and practitioners in finance. The neural network model serves as a classical counterpart of the quantum models, not to mention that it shows superior investment performance over the linear model thanks to its flexibility and expressibility. After that we introduce quantum circuit learning, which is one realization of a quantum neural network, and tensor network algorithms in our framework. Finally, we describe the optimization procedure for these machine learning models.
### Problem Definition
The objective of this work is to predict stock returns over the cross-section. Before formulating our problem, let us clarify some notation.
Suppose there are trading dates indexed as \(0\leq t\leq T\) and at each trading date \(t\) we have \(N_{t}\) stocks available to invest in. We call this whole set of stocks the stock universe and denote it by \(U_{t}\). Note that the frequency of trading periods depends on our purpose and data availability; it may be set to monthly, weekly, daily, and so on. We describe the most generic situation, in which the stock universe varies over time. Stocks are indexed as \(i=1,\cdots,N_{t}\), and the price of the \(i\)-th stock at time \(t\) is denoted as \(p_{i,t}\). The return of the \(i\)-th stock from \(t\) to \(t+1\) is then calculated as
\[r_{i,t+1}=\frac{p_{i,t+1}-p_{i,t}}{p_{i,t}}\,, \tag{1}\]
which is what we wish to predict. For financial practitioners, it is essential to predict stock returns, since they usually build trading strategies based on predicted future returns. In the academic literature, it has been a central problem to investigate the predictability of stock returns and to construct prediction models that satisfy empirical characteristics, with the hope of revealing market structures.
There are mainly two distinct approaches to predicting stock returns: one is to focus on a specific stock and use time series analysis to predict its return; the other is to predict the cross-sectional relative performance of the whole stock universe at each time, employing each firm's features.
In this work we adopt the latter cross-sectional approach, in which we leverage information about firms. Suppose we have \(n\) features for the \(i\)-th stock at time \(t\). These features are compiled into an \(n\)-dimensional vector \(X_{i,t}\), by means of which we describe the general formula of our prediction model as follows:
\[r_{i,t+1}=F(X_{i,t})+\epsilon_{i,t}\,, \tag{2}\]
where the form of \(F\) is not specified here and will be determined by our choice of models. \(\epsilon_{i,t}\) represents an error term.
As for the choice of features, what should explain stock returns is a long-standing subject of study in the financial literature and has been industriously investigated. The celebrated work by Fama and French [13] proposes and empirically shows that the returns of individual firms can be explained by the following three factors: market (how the whole market moves), size (how large the market capitalization of a stock is), and value (how much the stock price is overvalued or undervalued). Ever since their publication, successive studies have followed in order to find other unknown factors that explain returns, with the result that the number of proposed factors has surpassed a hundred. Other than the famous three factors mentioned above, typical factors considered so far include momentum (how big the past return of a stock is) and quality (how stable a stock's earnings are).
Regarding the prediction model \(F\), linear regression has traditionally been used both by academicians and practitioners because of its simplicity and interpretability. In this case, (2) becomes
\[r_{i,t+1}=\sum_{k=1}^{n}X_{i,t,k}\cdot\theta_{k}+\epsilon_{i,t}\,, \tag{3}\]
where \(\theta\) is an \(n\)-dimensional vector of model parameters. Note that the index \(k\) represents the \(k\)-th element of an \(n\)-dimensional vector. In our analysis, we employ this linear regression model as a benchmark. The parameters \(\theta\) are determined by the usual ordinary least squares regression.
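As an illustration, a minimal Python sketch of this benchmark (our own illustration, not the authors' code) is given below; it fits Eq. (3) by least squares on the cross-section, with no intercept, consistent with the ranked features introduced in Section 3.1.

```python
import numpy as np

def fit_ols(X, r):
    # X: (N_t, n) feature matrix at time t; r: (N_t,) next-period returns, Eq. (3)
    theta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return theta

# prediction for new cross-sectional data: r_hat = X_new @ theta
```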
Traditional linear regression models neglect interaction terms between features as well as nonlinear terms. Machine learning models shed light on these issues, as is widely reported in e.g. [14, 15, 16, 17, 18]. In our analysis, we use neural network models as the classical machine learning ones. As quantum and quantum-inspired machine learning models, we propose to employ quantum circuit learning and tensor networks in return predictions. The following subsections are devoted to describing these methods and how they can be applied to stock return predictions.
### Neural network
We consider a feed-forward neural network, which consists of \(L\) layers of affine maps and activation functions. It is formally written as
\[F^{\text{NN}}=\mathsf{W_{L}}\circ\sigma_{L-1}\circ\cdots\circ\sigma_{1}\circ \mathsf{W_{1}}\,. \tag{4}\]
The affine map \(\mathsf{W_{l}}\) acts on an \(n_{l}\)-dimensional input vector \(Z_{l}\) as follows:
\[\mathsf{W_{l}}(Z_{l})=W_{l}Z_{l}+b_{l}\,, \tag{5}\]
where \(W_{l}\in\mathbb{R}^{n_{l+1}\times n_{l}}\) denotes a weight matrix and \(b_{l}\in\mathbb{R}^{n_{l+1}}\) a bias vector. The activation function \(\sigma_{l}\) is key to generating non-linear effects in the model. Though there are plenty of possible activation functions, in our analysis we use the same \(\mathsf{ReLU}\) function for all \(l=1,\cdots,L-1\), defined as:
\[\sigma_{l}(x)=\mathsf{ReLU}(x)\equiv\max\{x,0\}\,. \tag{6}\]
With these in hand, we construct the return prediction as
\[r_{i,t+1}=F^{\text{NN}}(X_{i,t})+\epsilon_{i,t}\,. \tag{7}\]
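A minimal Keras sketch of the NN1 architecture of Section 3.3 (layers (10, 7, 1) with ReLU) is shown below; the training call is indicative only, and any hyperparameters not stated in the paper (e.g. batch size) are left at their defaults.

```python
import tensorflow as tf

# NN1: input dimension 10, one hidden layer of 7 units, scalar output, Eq. (4)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")  # Adam and the MSE of Eq. (19)
# model.fit(X_train, r_train, epochs=20)     # 20 epochs as in Section 3.3
```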
### Quantum circuit learning
Among the various quantum machine learning algorithms that have been developed recently [4], we employ the framework called quantum circuit learning [5] in this work. It is one of the variational quantum algorithms, aiming at application to supervised machine learning problems. Quantum circuit learning can be regarded as a quantum counterpart of the neural network, since both algorithms variationally optimize parameters so that an objective function is minimized. For this reason, quantum circuit learning and similar approaches are also sometimes referred to as quantum neural networks [4].
Quantum circuit learning consists of the following procedures. Suppose we have a dataset consisting of input data \(\{x_{i}\}_{i=1}^{N}\) and corresponding teacher data \(\{y_{i}\}_{i=1}^{N}\). First we
construct a quantum circuit \(V(x)\) from \(x\). We apply it to some initial state \(\left|\psi_{0}\right\rangle\) in order to encode the information of input variables into the quantum state: \(\left|\psi_{in}\right\rangle=V(x)\left|\psi_{0}\right\rangle\). Then we prepare a parameterized quantum circuit \(U(\theta)\) and apply it to the above state: \(\left|\psi_{out}\right\rangle=U(\theta)\left|\psi_{in}\right\rangle\). Finally, we measure the expectation value \(\left\langle\psi_{out}\right|O\left|\psi_{out}\right\rangle\) of some observable \(O\). In this work, we take Pauli \(Z\) operator acting on the first qubit, \(Z_{1}\), as the observable \(O\). It is taken as an output of the algorithm \(F^{\mathrm{QCL}}(x,\theta)\). The objective function built from \(y_{i}\) and \(F^{\mathrm{QCL}}(x,\theta)=\left\langle\psi_{out}\right|Z_{1}\left|\psi_{out}\right\rangle\) is minimized by varying the parameter \(\theta\). With the optimized parameter \(\theta=\theta^{*}\), the trained model is given as \(F^{\mathrm{QCL}}(x,\theta^{*})\). Figure 1 shows the general circuit of the quantum circuit learning algorithm.
We next explain the construction of the quantum circuits for our analysis, following [5]. The initial state \(\left|\psi_{0}\right\rangle\) is prepared as \(\left|0\right\rangle^{\otimes n}\), where we assume the dimension of the input vectors \(x_{i}\) is \(n\). The encoding circuit \(V(x_{i})\) is given by
\[V(x_{i})=\prod_{j=1}^{n}R_{j}^{Z}(\cos^{-1}x_{i,j}^{2})R_{j}^{Y}(\sin^{-1}x_{i,j})\,, \tag{8}\]
where \(R_{j}^{Z}\), \(R_{j}^{Y}\) represent rotation gates acting on \(j\)-th qubit:
\[R_{j}^{Z}(\phi)=e^{i\phi Z_{j}/2}\,,\quad R_{j}^{Y}(\phi)=e^{i\phi Y_{j}/2}\,. \tag{9}\]
Note that the input vector \(x_{i}\) must be normalized to lie in the range \([-1,1]\).
Then our parameterized quantum circuit is constructed as follows
\[U(\theta)=\prod_{i=1}^{d}\left(\prod_{j=1}^{n}U(\theta_{j}^{(i)})U_{\mathrm{ rand}}\right)\,, \tag{10}\]
which is illustrated in Figure 2. Here, \(U_{\mathrm{rand}}\) denotes a time evolution gate for the following Hamiltonian,
\[U_{\mathrm{rand}}=e^{-iH\tau},\quad H=\sum_{j=1}^{n}a_{j}X_{j}+\sum_{j=1}^{n} \sum_{k=1}^{j-1}J_{jk}Z_{j}Z_{k}\,, \tag{11}\]
where \(a_{j}\), \(J_{j,k}\) are randomly taken from a uniform distribution on \([-1,1]\) and \(\tau\) represents a time length of the evolution. Both of these parameters are fixed during the algorithm. \(U(\theta_{j}^{(i)})\) denotes a sequence of rotation gates on \(j\)-th qubit:
\[U(\theta_{j}^{(i)})=R_{j}^{X}\left(\theta_{j1}^{(i)}\right)R_{j}^{Z}\left( \theta_{j2}^{(i)}\right)R_{j}^{X}\left(\theta_{j3}^{(i)}\right)\,. \tag{12}\]
Figure 1: The general structure of quantum circuit learning, where we have two quantum circuit architectures: Data encoding circuit \(V(x)\) and parameterized quantum circuit \(U(\theta)\)
where \(R^{X}_{j}(\phi)=e^{i\phi X_{j}/2}\). \(U_{\rm rand}\) and \(U(\theta^{(i)}_{j})\) are repeatedly applied to the state \(d\) times, resulting in the whole gate \(U(\theta)\) in Eq. (10).
Equipped with these gates, quantum circuit learning can be used in return prediction such that
\[r_{i,t+1}=F^{\rm QCL}(X_{i,t},\theta)+\epsilon_{i,t}\,. \tag{13}\]
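To make the pipeline concrete, the following is a minimal Qulacs sketch of the forward pass \(F^{\rm QCL}(x,\theta)\). Note that, for brevity, the random-Hamiltonian evolution \(U_{\rm rand}\) of Eq. (11) is replaced here by a fixed CZ entangling ladder, so this is a simplified stand-in for the ansatz of Eq. (10), not the exact circuit used in the paper; Qulacs' rotation gates follow the \(e^{i\phi P/2}\) convention of Eq. (9).

```python
import numpy as np
from qulacs import QuantumState, QuantumCircuit, Observable

n, d = 10, 3  # qubits (= features) and depth, as in Section 3.3

def qcl_predict(x, theta):
    # x in [-1, 1]^n; theta has shape (d, n, 3)
    circ = QuantumCircuit(n)
    for j in range(n):                      # data encoding V(x), Eq. (8)
        circ.add_RY_gate(j, np.arcsin(x[j]))
        circ.add_RZ_gate(j, np.arccos(x[j] ** 2))
    for i in range(d):
        for j in range(n - 1):              # CZ ladder standing in for U_rand
            circ.add_CZ_gate(j, j + 1)
        for j in range(n):                  # U(theta_j^(i)), Eq. (12)
            circ.add_RX_gate(j, theta[i, j, 0])
            circ.add_RZ_gate(j, theta[i, j, 1])
            circ.add_RX_gate(j, theta[i, j, 2])
    state = QuantumState(n)                 # starts in |0...0>
    circ.update_quantum_state(state)
    obs = Observable(n)
    obs.add_operator(1.0, "Z 0")            # output <Z_1>
    return obs.get_expectation_value(state)
```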
### Tensor Network
Tensor networks enable us to obtain effective representations of quantum wavefunctions that live in an exponentially large Hilbert space. This is beneficial not only for quantum physics but also for machine learning problems, since tensor networks allow us to manipulate a high-dimensional feature space.
The matrix product state (MPS), one of the best studied and understood tensor networks, is employed in our analysis. The MPS is defined as follows. Suppose we have an \(n\)-th order tensor \(T\), whose components are given by \(T_{i_{1}\cdots i_{n}}\). The MPS is a representation of such a tensor \(T\) by a product of smaller tensors:
\[T_{i_{1}\cdots i_{n}}=\sum_{\alpha_{1}\cdots\alpha_{n-1}}A^{(1)}_{i_{1}\alpha_{1}}A^{(2)}_{i_{2}\alpha_{1}\alpha_{2}}\cdots A^{(n)}_{i_{n}\alpha_{n-1}}\,, \tag{14}\]
where the range of the indices \(\alpha_{i}\) is called the bond dimension \(m\).
We follow the approach taken in [12, 19, 9] to apply the MPS for our purposes. Consider an input vector \(x\) and a feature map \(\Phi(x)\) which maps \(x\) to an \(n\)-th order tensor defined as
\[\Phi_{i_{1}\cdots i_{n}}(x)=\phi_{i_{1}}(x_{1})\phi_{i_{2}}(x_{2})\cdots\phi_ {i_{n}}(x_{n})\,, \tag{15}\]
where
\[\phi(x_{j})=\left(\begin{array}{c}1\\ x_{j}\end{array}\right)\,. \tag{16}\]
We construct a model regression function with an MPS \(W^{\rm MPS}\), whose entries act as the variational parameters to be trained:
\[y=F^{\rm MPS}(x,W)=\sum_{i_{1}\cdots i_{n}}W^{\rm MPS}_{i_{1}\cdots i_{n}}\Phi _{i_{1}\cdots i_{n}}(x)\,. \tag{17}\]
We use this function \(F^{\rm MPS}(x,W)\) in return prediction:
\[r_{i,t+1}=F^{\rm MPS}(X_{i,t},W)+\epsilon_{i,t}\,. \tag{18}\]
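A minimal NumPy sketch of evaluating Eq. (17), contracting the MPS cores sequentially with the local feature maps of Eq. (16), is given below; the core shapes follow one common open-boundary convention and are our assumption, not a prescription from the paper.

```python
import numpy as np

def mps_predict(x, cores):
    # cores[0]: (2, m); cores[k]: (2, m, m) for 0 < k < n-1; cores[-1]: (2, m)
    phi = lambda xj: np.array([1.0, xj])   # local feature map, Eq. (16)
    v = phi(x[0]) @ cores[0]               # shape (m,)
    for k in range(1, len(x) - 1):
        v = v @ np.tensordot(phi(x[k]), cores[k], axes=(0, 0))
    return float(v @ (phi(x[-1]) @ cores[-1]))

# example with n = 10 features and bond dimension m = 2, as in Section 3.3
rng = np.random.default_rng(0)
cores = ([rng.normal(size=(2, 2))]
         + [rng.normal(size=(2, 2, 2)) for _ in range(8)]
         + [rng.normal(size=(2, 2))])
print(mps_predict(rng.uniform(size=10), cores))
```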
Figure 2: Our choice of a parameterized quantum circuit in the quantum circuit learning algorithm
### Optimization procedure
Now that we have introduced both the classical and quantum machine learning models we test in our analysis, let us briefly describe how the training of the models is performed. In this subsection, we denote all the prediction models as \(F(X_{i,t},\theta)\), where \(\theta\) represents the parameters of the corresponding model, unless otherwise noted. Given true return data \(r_{i,t}\) and predicted values \(\tilde{r}_{i,t}=F(X_{i,t})\), our objective function \(E\) to be minimized is the mean squared error,
\[E=\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}(\tilde{r}_{i,t}-r_{i,t})^{2}\,. \tag{19}\]
To achieve the minimum, we utilize the stochastic gradient descent technique for all models, which is a common prescription in the training of neural networks. In this framework, parameters are successively updated as
\[\theta\leftarrow\theta-\eta\nabla_{\theta}E\,, \tag{20}\]
where \(\eta\) represents the learning rate; the explicit formula for updating the parameters depends on the type of optimizer. As for the quantum circuit learning model \(F=F^{\text{QCL}}\), the gradient is calculated by the so-called parameter-shift rule [5, 6].
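For gates generated by Pauli operators, as in Eq. (12), the parameter-shift rule gives exact gradients from two shifted circuit evaluations per parameter. A generic sketch (our illustration; the gradient of the loss in Eq. (19) then follows by the chain rule) is:

```python
import numpy as np

def parameter_shift_grad(expectation, theta, shift=np.pi / 2):
    # expectation: callable theta -> <Z_1>; exact for Pauli-generated rotations,
    # up to the overall sign fixed by the gate convention
    grad = np.zeros_like(theta)
    for k in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus.flat[k] += shift
        minus.flat[k] -= shift
        grad.flat[k] = (expectation(plus) - expectation(minus)) / 2.0
    return grad
```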
It is worth mentioning that, for tensor networks, gradient descent is not the standard way to optimize parameters, since a more physics-oriented optimization algorithm called the density matrix renormalization group (DMRG) [11] prevails in many physics applications and is also used in machine learning ones [8]. We, however, work with gradient descent in our analysis, as it is simple to implement with high-level APIs such as TensorFlow [20] and allows us to compare with the other models on an equal footing. Note that the DMRG approach is thought to be more sophisticated in updating parameters than gradient descent; it would therefore be interesting to investigate the difference in performance when optimizing tensor network models. See [19] for more details.
## 3 Experiment
In this section, we present an empirical study to evaluate how our proposed models perform in return prediction. Our criterion for the evaluation is how profitable the models are, which can be measured by applying them in investment strategies. For this purpose, we adopt an investment strategy based on the models' predictions and conduct a backtesting experiment on past historical data. In the following we explain our dataset and the methodology of the investment strategy, then discuss the results of the backtesting.
### Dataset
Our dataset, or investment universe \(U_{t}\), is the set of Japanese stocks that are constituents of the TOPIX500 index. TOPIX500 is a Japanese stock market index consisting of the 500 most liquid stocks with the largest market capitalizations among those listed on the Tokyo Stock Exchange.
The input features we use are summarized in Table 1. We consider 10 features, which is a rather small number compared to general machine learning models for stock return predictions, where one typically employs as many as tens to hundreds of features to gain expressibility and accuracy. This is due to the fact that our quantum circuit learning architecture requires one qubit for each feature; the more qubits we use, the more computationally intensive the simulation of quantum circuits becomes. We therefore limit the number of features to \(n=10\) so that our backtesting experiment can be conducted within a reasonable computational time.
As preprocessing, all features and returns are cross-sectionally ranked at each time step [17, 21]: the \(i\)-th feature of the \(l\)-th stock at time \(t\), \(x_{i,t,l}\), is converted to \(\rho_{i,t,l}/(N_{t}-1)\), where \(\rho_{i,t,l}\) is the rank of \(x_{i,t,l}\) among \(\{x_{i,t,l}\}_{l=1,\ldots,N_{t}}\) in ascending order.
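A pandas sketch of this rank transform (assuming, for illustration, a DataFrame with a (date, stock) MultiIndex and one column per feature) could read:

```python
import pandas as pd

def rank_preprocess(df):
    # maps each x_{i,t,l} to rho_{i,t,l} / (N_t - 1) in [0, 1], per date
    g = df.groupby(level="date")
    return (g.rank(method="first") - 1) / (g.transform("count") - 1)
```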
### Investment strategy
The investment strategy that we take in this work is as follows. Our backtesting period goes from June 2008 to May 2021, during which we make investment decisions on a monthly basis, and we let \(t\) denote the end of each month. Our prediction target is thus the one-month future return.
At the beginning of the backtesting, we take three-year samples (June 2008 - May 2011) as the training dataset to train the model, and the following one-year samples (June 2011 - May 2012) as the test dataset to predict returns. We then roll this procedure forward until the end of the backtesting period. See Figure 3 for its design. In short, we repeatedly make predictions for the forthcoming year from the most recent three-year samples, but only re-estimate the models once a year, not every month, in order to avoid computationally intensive estimation, which is a severe problem for quantum circuit learning running on a simulator.
At each time step \(t\) we sort the stocks in descending order based on the predicted returns \(\tilde{r}_{i,t+1}\) and define the set of stocks belonging to the top quintile as \(H_{t}\). Assuming our models correctly predict stock returns, \(H_{t}\) should represent the most profitable stocks in the whole universe \(U_{t}\). On that account, we go long, i.e. buy, these stocks with equal weights. The portfolio return between \(t\) and \(t+1\) is then given by
\[r_{\text{port},t+1}=\frac{1}{|H_{t}|}\sum_{i\in H_{t}}r_{i,t+1}\,, \tag{21}\]
where \(|H_{t}|\) denotes the number of stocks in \(H_{t}\). We repeat this process and measure the portfolio performance over the backtesting period.
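As a concrete illustration, one step of this strategy can be sketched as follows (our own illustration, with NumPy arrays of predicted and realized returns):

```python
import numpy as np

def top_quintile_return(predicted, realized):
    # equal-weighted return of the predicted top quintile H_t, Eq. (21)
    k = max(1, len(predicted) // 5)
    top = np.argsort(predicted)[::-1][:k]  # indices of the stocks in H_t
    return realized[top].mean()
```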
To test the performance of our investment strategy, the common approach is to set up a benchmark portfolio and evaluate the excess return of our portfolio over the benchmark. In this work, we use the TOPIX500 index as the benchmark, so the excess return is defined as
\[\alpha_{t}=r_{\text{port},t}-r_{\text{TOPIX500},t}\,, \tag{22}\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline Factor & Feature & Description \\ \hline \hline \multirow{4}{*}{Value} & Book-Value to Price Ratio & Net Asset / Market Value \\ \cline{2-3} & Earning to Price Ratio & Net Profit / Market Value \\ \cline{2-3} & Sales to Price Ratio & Sales / Market Value \\ \hline Quality & Return on Equity & Net Profit / Net Asset \\ \hline \multirow{4}{*}{Momentum} & Momentum (1-month) & Stock Returns in the last month \\ \cline{2-3} & Momentum (3-month) & Stock Returns in the past 3 months \\ \cline{2-3} & Momentum (6-month) & Stock Returns in the past 6 months \\ \cline{2-3} & Momentum (12-month) & Stock Returns in the past 12 months \\ \hline Size & Market Capitalization & log(Market Value) \\ \hline \multirow{2}{*}{Market} & \multirow{2}{*}{Beta} & Regression coefficient of stock returns \\ & & and market return (TOPIX) over 60 months \\ \hline \end{tabular}
\end{table}
Table 1: The list of features and their descriptions
where \(r_{\text{TOPIX500},t}\) denotes the return of the TOPIX500 index at time \(t\). The metrics of portfolio performance we employ are the following three quantities, all of which are constructed from the time series of \(\alpha_{t}\):
\[\text{ER} =\left[\prod_{t=1}^{T}(1+\alpha_{t})\right]^{12/T}-1\,, \tag{23}\] \[\text{TE} =\sqrt{\frac{12}{T-1}\sum_{t=1}^{T}(\alpha_{t}-\bar{\alpha})^{2}}\,, \tag{24}\] \[\text{IR} =\text{ER/TE}\,, \tag{25}\]
with \(\bar{\alpha}=\frac{1}{T}\sum_{t=1}^{T}\alpha_{t}\). Here ER represents the annualized excess return, TE (tracking error) denotes the corresponding standard deviation, and IR is the so-called information ratio, which expresses the risk-adjusted excess return of the portfolio.
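For reference, these three metrics can be computed from the monthly series of \(\alpha_{t}\) as in the following sketch:

```python
import numpy as np

def performance_metrics(alpha):
    # alpha: array of monthly excess returns over the benchmark, Eq. (22)
    alpha = np.asarray(alpha)
    T = len(alpha)
    er = np.prod(1.0 + alpha) ** (12.0 / T) - 1.0                       # Eq. (23)
    te = np.sqrt(12.0 / (T - 1) * np.sum((alpha - alpha.mean()) ** 2))  # Eq. (24)
    return er, te, er / te                                              # Eq. (25)
```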
### Model architectures
We now summarize the detailed settings of our models. As a traditional model, we use linear regression, which we denote Linear. In all models we consider except for the linear regression, the number of parameters is set to be of the same order for a fair comparison. We use the Adam optimizer in the training, and the number of epochs is fixed to 20 in all machine learning models.
**Neural Network.** We prepare two distinct neural network models, which differ in the number of hidden layers.
* NN1 denotes the neural network model with \(L=3\) layers whose nodes are given by \((10,7,1)\). This model has 92 parameters to be trained.
* NN2 denotes the neural network model with \(L=4\) layers whose nodes are given by \((10,5,4,1)\). This model has 93 parameters to be trained.
Figure 3: The concept of our backtesting experiment, showing that we take three years as a training period and subsequent one year as a test period, rolling this process until the end of the backtesting period
As mentioned earlier, we use the ReLU function as the activation function throughout. TensorFlow [20] is used to implement the models.
**Quantum Circuit Learning.** We denote our quantum circuit learning model by \(\mathsf{QCL}\). The number of qubits is 10, the same as the number of input features. The depth of the parameterized gates is set to \(d=3\). The number of parameters is consequently 90. We use Qulacs [22] for the implementation.
**Tensor Network.** We denote our tensor network model by \(\mathsf{TN}\). We set the bond dimension to \(m=2\), so the number of parameters is 76 in this setting. We use TensorNetwork [23] as well as TensorFlow for its implementation.
### Backtesting result
Table 2 summarizes the results of our empirical backtesting. See also Figure 4 for the cumulative returns of the portfolios and Figure 5 for the cumulative excess returns. We observe that the tensor network model \(\mathsf{TN}\) has the best performance with regard to both the excess return and the information ratio. On the other hand, the quantum circuit learning model \(\mathsf{QCL}\) is competitive with the neural network models with respect to the excess return; however, it has a larger TE, which in turn results in an inferior risk-adjusted return IR.
From Figure 5, before 2016 \(\mathsf{QCL}\) has approximately the same performance as \(\mathsf{Linear}\). This implies that \(\mathsf{QCL}\) at least learns the linear relationships between input features, as expected. After 2016, on the other hand, \(\mathsf{QCL}\) continues to outperform \(\mathsf{Linear}\), which might be because \(\mathsf{QCL}\) is able to learn non-linear relationships as well. What is more, in these recent market environments, \(\mathsf{QCL}\) successfully predicts stock returns and gains excess returns, beating the classical models. See Appendix A for numerics and graphs. We also find that during the last three years of the backtesting period the neural network models perform poorly. This suggests that the neural networks used in this analysis tend to overfit to the previous market environment and fail to adapt to the latest one.
The tensor network model \(\mathsf{TN}\) has the best performance among all models despite having the fewest parameters. This suggests that \(\mathsf{TN}\) may possess an architecture effective for learning financial data, not to mention the possibility of capturing non-linearity among features. It should be further investigated in the future whether this superiority holds when we increase the number of features and parameters in the models.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline & Linear & NN1 & NN2 & \(\mathsf{QCL}\) & \(\mathsf{TN}\) \\ \hline ER (\%) & -0.28 & 1.27 & 1.76 & 1.35 & **3.71** \\ \hline TE (\%) & 6.64 & **3.79** & 4.28 & 6.18 & 5.41 \\ \hline IR & -0.04 & 0.34 & 0.41 & 0.22 & **0.69** \\ \hline \end{tabular}
\end{table}
Table 2: The empirical result of backtesting in TOPIX500 universe (Bold characters show the best numbers in each metrics)
## 4 Conclusion and discussion
In this paper we propose to use quantum and quantum-inspired algorithms to predict stock returns. We specifically test quantum circuit learning and tensor network models against classical models, namely linear regression and neural networks. In order to evaluate their capabilities, we consider an investment strategy based on the returns predicted by the classical and quantum models. We then conduct backtesting over 10 years in the Japanese stock market.
Our finding is that the tensor network model outperforms the classical models, while the quantum circuit learning model achieves comparable performance to the neural network models but with higher risk. As expected, both proposed models seem to learn non-linear relationships between input features, as implied by their superior performance against linear
Figure 4: The cumulative returns of portfolios constructed by various methods and that of TOPIX500
Figure 5: The cumulative excess returns of portfolio constructed by various methods over TOPIX500
regression. Although the performance of the neural network models deteriorates in the latest years, our proposed models successfully continue to gain excess returns. These differences in performance may be related to the overfitting problem in machine learning and to market instability in these periods. We therefore speculate that quantum techniques may offer good control of the overfitting problem, as originally suggested in [5]. It is, however, unclear whether this hypothesis is true, and further examination of this issue should be conducted.
Lastly, we comment on several open problems for future exploration.
* In this work we evaluate the models' capabilities in the Japanese stock market. It should be examined whether quantum models work in other countries, e.g. the United States, or in the global market. [21] studies the transfer learning of neural networks in investment problems between various markets. Whether transfer learning is also effective in the quantum models is another interesting research direction.
* While we study the predictability of stocks, it would be interesting to examine whether quantum machine learning is applicable to other assets, such as bonds or currencies. See [24, 25] for machine learning approaches to these assets.
* As explained in Section 2, there are two approaches to return prediction, one of which is the cross-sectional prediction we employ. The other, namely the time-series approach, can also be pursued with quantum machine learning. For classical neural networks, recurrent neural networks and their variants have been developed and widely investigated in the financial literature [26, 27, 28, 29]. It would be interesting to apply the quantum counterparts of such recurrent networks in financial analyses. See [30, 31] for the existing literature on quantum recurrent neural networks.
## Acknowledgement
This work was partially supported by KAKENHI Grant Number JP22K11924 from JSPS, MEXT Q-LEAP Grant No. JPMXS0120319794, and JST COI-NEXT No. JPMJPF2014.
|
2308.03915 | Predicting and explaining nonlinear material response using deep
Physically Guided Neural Networks with Internal Variables | Nonlinear materials are often difficult to model with classical state model
theory because they have a complex and sometimes inaccurate physical and
mathematical description or we simply do not know how to describe such
materials in terms of relations between external and internal variables. In
many disciplines, Neural Network methods have arisen as powerful tools to
identify very complex and non-linear correlations. In this work, we use the
very recently developed concept of Physically Guided Neural Networks with
Internal Variables (PGNNIV) to discover constitutive laws using a model-free
approach and training solely with measured force-displacement data. PGNNIVs
make a particular use of the physics of the problem to enforce constraints on
specific hidden layers and are able to make predictions without internal
variable data. We demonstrate that PGNNIVs are capable of predicting both
internal and external variables under unseen load scenarios, regardless of the
nature of the material considered (linear, with hardening or softening behavior
and hyperelastic), unravelling the constitutive law of the material hence
explaining its nature altogether, placing the method in what is known as
eXplainable Artificial Intelligence (XAI). | Javier Orera-Echeverria, Jacobo Ayensa-Jiménez, Manuel Doblare | 2023-08-07T21:20:24Z | http://arxiv.org/abs/2308.03915v1 | Predicting and explaining nonlinear material response using deep Physically Guided Neural Networks with Internal Variables
###### Abstract
Nonlinear materials are often difficult to model with classical state model theory because they have a complex and sometimes inaccurate physical and mathematical description or we simply do not know how to describe such materials in terms of relations between external and internal variables. In many disciplines, Neural Network methods have arisen as powerful tools to identify very complex and non-linear correlations. In this work, we use the very recently developed concept of Physically Guided Neural Networks with Internal Variables (PGNNIV) to discover constitutive laws using a model-free approach and training solely with measured force-displacement data. PGNNIVs make a particular use of the physics of the problem to enforce constraints on specific hidden layers and are able to make predictions without internal variable data. We demonstrate that PGNNIVs are capable of predicting both internal and external variables under unseen load scenarios, regardless of the nature of the material considered (linear, with hardening or softening behavior and hyperelastic), unravelling the constitutive law of the material hence explaining its nature altogether, placing the method in what is known as eXplainable Artificial Intelligence (XAI).
_Keywords:_ Nonlinear computational solid mechanics · Deep Neural Network · Internal Variables · Explainable Artificial Intelligence · Physics-Informed Machine Learning · Physically Guided Neural Networks
## 1 Introduction
It is common knowledge that our everyday life is being dramatically reshaped by Big Data and Artificial Intelligence (AI). According to the International Data Corporation, the Global Datasphere (the summation of all data, whether created, captured, or replicated) will grow from 33 ZB in 2018 to 175 ZB by 2025 [1]. This is mainly due to the explosion of social networks, e-commerce and marketing, and the extension of the Internet of Things (IoT). Among the top eight companies in terms of market capitalization, five are based on leveraging the value of data [2]. This huge amount of available information justifies the prosperity of data science and data-based decision making in fields as diverse as sociology, economics, engineering and medicine.
As a response, Machine Learning (ML) methods have become one of the main tools in business, but also in science and technology. These methodologies enable the extraction of information from data that would be intractable by means of traditional methods [3]. They try to mimic the process of human knowledge acquisition and structuring, taking advantage of the aforementioned advances in data generation, management, and storage, as well as huge improvements in the performance of computers and algorithms [4]. In the special case of Scientific ML [5], the natural fit of many supervised as well as unsupervised ML algorithms to the vectorized representation that most physical problems exhibit makes the study of the convergence between the two fields especially important.
Data-driven methods are used in many different physical disciplines such as chemical and electrical processes [6], biology [7], spoken language recognition [8] and a long etcetera. However, the link between classical physical modelling and data-driven methods has not been entirely clear so far, since the physical description of most systems was built on the basis of empirical knowledge rather than large databases. Nonetheless, the new era of computation and Big Data has opened new perspectives in which data can be incorporated into this physical description in a consistent and comprehensive way. Promising developments can therefore be spotted, such as new forms of empiricism that declare "the end of theory" and the impending advancement of data-driven methods over knowledge-driven science [9].
One of the most prominent strategies, which has proven especially prolific in recent years, is the use of Artificial Neural Networks (ANNs). Since 1958, when Rosenblatt developed the _perceptron_ [10], many works have been devoted to ANNs, some of them demonstrating their character as universal approximators [11, 12, 13, 14, 15]. However, it was only in the last decade of the XX\({}^{\text{th}}\) century, thanks to the important progress of high performance computing capabilities and the combination of back-propagation [16] with stochastic gradient descent [17] algorithms, that ANNs became a booming technology. The progress has accelerated in the last decade with the advent of Convolutional Neural Networks (CNNs) [18] and Recurrent Neural Networks (RNNs) [19], in what is known nowadays as Deep Learning (DL) [20], culminating with the attention mechanism [21], transformer models and generative AI, whose impact has gained great popularity thanks to Large Language Models (LLMs) [22].
Leaving aside the progress of DL as a research field, AI approaches have changed the way we conceive science. On the one hand, AI has been used to discover the hidden physical structure of data and unravel the equations of a system [23, 24, 25]. On the other hand, the tremendous predictive power of AI has been blended with the scientific consistency of the explicit mathematical representation of physical systems through the concept of _data-driven_ models for simulation-based engineering and sciences (SBES) [26]. The latter in turn may be done by combining raw data and physical equations [27, 28], by enforcing a metriplectic structure on the model, related to the fulfillment of thermodynamic laws [29], or by defining the specific structure of the model [30, 31]. These novel _data-driven science_ approaches, coined as Scientific Machine Learning or Physics-Informed Machine Learning (PIML), arise therefore with the main purpose of turning apparently not physically meaningful data-driven models, where approaches such as ANNs have excelled, into physics-aware models.
However, the interplay between data and the physical sciences has not been exempt from setbacks of various natures. In fact, the use of complex DL models does not always fit well with the study of physical problems. Furthermore, many physical problems involve many variables, interacting in complex and non-stationary ways. This requires huge amounts of data to obtain accurate predictions using ANN techniques, sometimes in regions of the solution space that are difficult to access or uncommon and therefore difficult to sample. As a consequence, due to the bias-variance trade-off, poor extrapolation capacity is obtained outside the usual data range for models that aim to recreate complex physics [32].
In addition, a physically based model is useful not only for making predictions, but also for acquiring new knowledge through the interpretation of its structure, parameters, and mathematical properties. Physical interpretability is, in most cases, at least as important as predictive performance. However, interpretability is known to be one of the main weaknesses of ANNs, as the acquired knowledge is encoded in the strength of multiple connections, rather than stored at specific locations. That explains the huge efforts that are currently being made towards "whitening" the "black-box" character of ANNs [33, 34], in what has been coined eXplainable Artificial Intelligence (XAI) [35, 36]. In the context of data-driven simulation-based engineering and sciences (DDSBES) [37], two ways of proceeding can be distinguished: building specific ANN structures endowed with the problem equations [38, 39], also known as the _inductive bias_ approach, and/or regularizing the loss function using this same physical information [40, 41, 42].
The approach to be followed depends decisively on data availability and on the way this data is used. If we follow a supervised approach, there are two possibilities.
* The first one assumes that we know the whole physics of the problem. In that situation, supervised ML is used to reduce computational requirements. In that sense, ANNs act as Reduced Order Models (ROMs) that can be used as surrogates in problems involving optimization or control. Hybridizing DL with physical information is a way of improving standard DL methods in terms of data requirements, less expensive training, and noise filtering at the evaluation step, thanks to regularization [38, 40]. Here, we are mainly interested in the predictive capacity of the approach.
* The second one assumes that we know some of the physics of the problem. In that situation, we are interested in knowing the hidden physics that remains unknown, expressed in terms of some model parameters [42] or functional relations [43]. For that reason, we are rather interested in the explanatory character of the method.
In other situations, we follow unsupervised approaches. This may be indeed due to two different possibilities:
* If we know the whole physics of the problem, the use of ANNs is merely instrumental and is used as an alternative way of solving numerically some system of Partial Differential Equations (PDEs) that require an important computational effort [44].
* When some of the physics of the problem are unknown, the intrinsic variability of the data (for instance when measuring spatial or temporal fields) may be exploited for an unsupervised discovery of some hidden constitutive models [45, 46]. For that reason, we are rather interested in the explanatory character of the method.
However, in the context of the IoT, where data quantity generally dominates over data quality, data availability and variability are key, and it is difficult to guarantee whether, considering for instance the specific case of computational solid mechanics, "the combination of geometry and loading generates sufficiently diverse and heterogeneous strain states to train a generalizable constitutive model with just a single experiment" [47]. Therefore, there is no alternative other than introducing all the control or measurable variables into the workflow, while maintaining the desirable properties of the PIML approach, namely its ability to produce fast predictions in real time (for optimization and control issues) together with its explanatory capacity.
In this work, we demonstrate how, in the context of computational solid mechanics, Physically Guided Neural Networks with Internal Variables (PGNNIVs) comply with these requirements particularly well. PGNNIVs comprise ANNs that are able to incorporate some of the known physics of the problem, expressed in terms of some measurable variables (for instance forces and displacements) and some hidden ones (for instance stresses). Their predictive character, improving many of the features of conventional ANNs [38], allows for fast and accurate predictions. In addition, PGNNIVs are able to unravel the constitutive equations of different materials from unstructured data, that is, uncontrolled test data obtained from system monitoring.
The content of this paper is structured as follows. First, the methodology is described, including a brief overview of the state of the art, the use of PGNNIVs in computational mechanics, the computational treatment of the physical tensorial fields and operators, as well as the dataset generation and training process. Then, the main results are presented, covering both the predictive and the explanatory capacity of the method. Finally, the conclusions are summarized, together with the main limitations and a brief overview of future work.
## 2 Methods
### Brief overview of Physics Informed Machine Learning in computational solid mechanics
PDEs are the standard way to describe physical systems in the continuum setting thanks to their overarching capacity to model extremely different and complex phenomena. However, analytic solutions are most of the time difficult or even impossible to find. That is the reason why numerical methods have become the universal tool to obtain approximate but accurate solutions to PDEs. These methods consider a given discretization in space and time, which results in an algebraic (in general non-linear) system that is then solved by means of standard matrix manipulation.
In the last three decades, however, attempts to solve PDEs from a data-driven point of view have been numerous. The first attempts [48, 49] generalized earlier ideas for Ordinary Differential Equations (ODEs). Since then, many different approaches have been proposed, from collocation methods [50, 51, 52], variational/energy approaches [53, 54, 55], and loss regularization using physical or domain knowledge [56, 57, 41, 58], to the most recent approaches using automatic differentiation [42], nowadays known as Physics-Informed Neural Networks (PINNs). Other works have extensively addressed this challenge using stochastic representations of high-dimensional parabolic PDEs [59, 60, 61].
In order to provide data-driven models with a meaningful physical character, remarkable efforts have been made in recent years to embed physical information into data-driven descriptions. The potential of solving inverse problems with linear and non-linear behavior in solid mechanics, for example, has been explored using DL methods [62], where the forward problem is solved first to create a database, which is then used to train the ML algorithms and determine the boundary conditions from assumed measurements. Other approaches initially build a constitutive model into the framework by enforcing constitutive constraints, and aim at calibrating the constitutive parameters [63].
In this context, clearly differentiating between external and internal variables becomes an important factor when approaching complex physical problems, but this is usually disregarded from a data-driven viewpoint. External variables are the observable, measurable variables of the system, which can be obtained directly from physical sensors, such as position, temperature or forces; internal variables are non-observable (not directly measurable) variables that integrate locally other observable magnitudes and depend on the particular internal structure of the system [64]. This is very important to consider in the ML framework, since predictions associated with an internal state model explicitly require the definition of the cloud of experimental values that identifies the internal state model [65]. This implies "measuring", or rather, assuming values for non-observable variables [66, 67]. This is, for example, the case of stresses in continuum mechanics, which can be determined _a priori_ only after making strong assumptions, such as their uniform distribution in the central section of a sample under uniform tension.
An alternative is the use of PGNNIVs. This new methodological approach makes it possible to predict the values of the internal variables by mathematically constraining some hidden layers of a deep ANN (whose structural topology is appropriately predefined) by means of the fundamental laws of continuum mechanics, such as conservation of energy or momentum, which relate internal non-measurable variables with external observable ones. With this, it is possible to transform a pure ML-based model into a physically based one without giving up the powerful tools of DL, including the implicit correlation between observable data and the derived predictive capacity. This way, not only are the real internal variables of the problem predicted, but the amount of data needed to train the network also decreases, convergence is reached faster, data noise is better filtered and the extrapolation capacity out of the range of the training dataset is improved, as recently demonstrated in [38].
### Physically Guided Neural Networks with Internal Variables in computational mechanics
#### 2.2.1 Revisiting PGNNIVs
PGNNIVs are in essence a generalization of PINNs. In the latter, physical equations constrain the values of output variables to belong to a certain physical manifold that is built from the information provided by the data and the specific form of the PDE considered. One of the shortcomings of PINNs is that, in general, only simple PDEs in closed form containing a few free parameters can be considered. In contrast, PGNNIVs apply not only to scenarios where the PDEs involve many parameters and have complex forms, but also to those where a mathematical description is not available. In fact, this new paradigm embodies a unique architecture in which the values of the neuron variables in some intermediate layers acquire physical meaning in an unsupervised way, providing the network with an inherent explanatory capacity.
The main differentiating features that, to the authors' knowledge, constitute a relevant contribution to the advances of data-driven physical modelling and its particular application to nonlinear mechanics are twofold: first, physical constraints are applied in predefined internal layers (PILs), in contrast to previous works on PINNs. Secondly, and even more importantly, PGNNIVs are able to predict and explain the nature of the system all at once, i.e. predictability of the variables as well as explainability of the constitutive law are ensured and learned altogether. In all related works, modeling assumptions on the constitutive law of the corresponding materials are directly imposed, so that the material response complies with certain constraints. On the contrary, PGNNIVs only enforce universal laws of the system (e.g. balance equations) and no prior knowledge of the constitutive model is incorporated.
Classical Deep Neural Networks (DNNs) are often represented as _black boxes_ that can theoretically compute and learn any kind of function correlating the input and output data [69]. In particular, they perform very well in areas of science and technology where complex functions convey good approximations of the governing phenomena. Although there exist some heuristic rules [70], these _black boxes_ are usually trained via trial and error. Giving a physical meaning to the hidden layers and constraining them by adding an extra term to the cost function has already proven to have significant advantages, such as lower data requirements, higher accuracy and faster convergence in real physical problems, as well as model unravelling capacities [38]. The basic principles of PGNNIVs are briefly exposed in the following lines.
Let us consider a set of continuous Partial Differential Equations (PDEs) of the form
\[\mathcal{F}(u,v) =f,\text{ in }\Omega, \tag{1a}\] \[\mathcal{G}(u,v) =g,\text{ in }\partial\Omega,\] (1b) \[\mathcal{H}(u) =v,\text{ in }\Omega, \tag{1c}\]
where \(u\) and \(v\) are the unknown fields of the problem, \(\mathcal{F}\) and \(\mathcal{H}\) are functionals representing the known and unknown physical equations of the specific problem, \(\mathcal{G}\) is a functional that specifies the boundary conditions, and \(f\) and \(g\) are known fields.
The continuous problem has its analogous discretized representation in finite-dimensional spaces in terms of vectorial functions \(\mathbf{F}\), \(\mathbf{G}\) and \(\mathbf{H}\) and nodal values \(\mathbf{u}\), \(\mathbf{v}\), \(\mathbf{f}\) and \(\mathbf{g}\). Particularly, \(\mathbf{u}\) are the solution field nodal values and \(\mathbf{v}\) are
the unknown internal field variables at the different nodes. The discretization may be done using any discretization technique, such as the Finite Element Method (FEM). Hence, Eqs. (1) become
\[\mathbf{F}(\mathbf{u},\mathbf{v})=\mathbf{f},\text{ in }\Omega, \tag{2a}\] \[\mathbf{G}(\mathbf{u},\mathbf{v})=\mathbf{g},\text{ in }\partial\Omega,\] (2b) \[\mathbf{H}(\mathbf{u})=\mathbf{v},\text{ in }\Omega. \tag{2c}\]
A PGNNIV may be defined for a problem of type (2) in the following terms:
\[\mathbf{y} =\mathsf{Y}(\mathbf{x}),\] \[\mathbf{v} =\mathsf{H}(\mathbf{u}),\] \[\mathbf{x} =\mathbf{I}(\mathbf{u},\mathbf{f},\mathbf{g}),\] \[\mathbf{y} =\mathbf{O}(\mathbf{u},\mathbf{f},\mathbf{g}),\] \[\mathbf{R}(\mathbf{u},\mathbf{v},\mathbf{f},\mathbf{g})=0,\]
where:
1. \(\mathbf{u}\), \(\mathbf{f}\) and \(\mathbf{g}\) are the measurable variables of the problem.
2. \(\mathbf{x}\) and \(\mathbf{y}\) are the input and output variables respectively, defined depending on which relation \(\mathbf{x}\mapsto\mathbf{y}\) is to be predicted.
3. \(\mathbf{I}\) and \(\mathbf{O}\) are functions that compute the input \(\mathbf{x}\) and the output \(\mathbf{y}\) of the problem from the measurable variables. In other words, functions \(\mathbf{I}\) and \(\mathbf{O}\) define the data used as starting point to make predictions, \(\mathbf{x}\), and the data that we want to predict, that is, \(\mathbf{y}\).
4. \(\mathbf{R}\) are the physical constraints, related to the relations given by \(\mathbf{F}\) and \(\mathbf{G}\).
5. \(\mathsf{Y}\) and \(\mathsf{H}\) are DNN models: * \(\mathsf{Y}\) is the **predictive model**, whose aim is to infer accurate values for the output variables for a certain input set, that is, to surrogate the relation \(\mathbf{x}\mapsto\mathbf{y}\).
* \(\mathsf{H}\) is the **explanatory model**, whose objective is to unravel the hidden physics of the relation \(\mathbf{u}\mapsto\mathbf{v}\).
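As an illustration, the two networks and the constraint operator can be laid out as follows in PyTorch. The MLP form and layer widths are placeholder choices of ours (the architectures actually used in this work are reported in Appendix A), and the residual is problem-specific, so this is a sketch rather than a definitive implementation.

```python
# Minimal PyTorch sketch of the two PGNNIV components; sizes are illustrative.
import torch
import torch.nn as nn

class PredictiveNet(nn.Module):
    """Y: x -> y, e.g. boundary tractions -> nodal displacements."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim_y))
    def forward(self, x):
        return self.net(x)

class ExplanatoryNet(nn.Module):
    """H: strain state -> internal variable v (stress state)."""
    def __init__(self, dim_v=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_v, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim_v))
    def forward(self, strain):
        return self.net(strain)

def residual_R(u, v, f, g):
    """Physical constraints R(u, v, f, g) = 0; problem-specific, see Eq. (18e)."""
    raise NotImplementedError
```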
#### 2.2.2 Adaptation to computational solid mechanics.
Our aim now is to reframe Eqs. (1) in the context of solid mechanics. To fix ideas, although it is not difficult to adapt the methodology to other constitutive models, we restrict this analysis to hyperelastic solids with constant and known density \(\rho\). First, we have to consider equilibrium equations (momentum conservation) in the domain \(\Omega\). In spatial coordinates, equilibrium reads
\[\mathrm{div}(\mathbf{\sigma})+\rho\mathbf{b}=\mathbf{0}, \tag{4}\]
where \(\mathbf{\sigma}\) is the Cauchy stress tensor, \(\rho\) is the density, \(\mathbf{b}\) the spatial volumetric body force field and \(\mathrm{div}\) is the divergence operator in spatial coordinates. In material coordinates, equilibrium reads
\[\mathrm{DIV}(\mathbf{P})+\rho\mathbf{B}=\mathbf{0}, \tag{5}\]
where now \(\mathbf{P}=\det(\mathbf{F})\mathbf{\sigma}\mathbf{F}^{-\intercal}\) is the first Piola-Kirchhoff stress tensor, \(\mathbf{B}=\det(\mathbf{F})\mathbf{b}\) is the reference volumetric body force field, and \(\mathrm{DIV}\) is the divergence operator in material coordinates.
If \(\mathbf{\xi}=\chi(\mathbf{\Xi})\) is the motion function that relates spatial (\(\mathbf{\xi}\)) and material (\(\mathbf{\Xi}\)) coordinates, we define the deformation gradient tensor \(\mathbf{F}\) as
\[\mathbf{F}=\mathrm{GRAD}(\chi)=\mathrm{GRAD}\otimes\mathbf{\xi}, \tag{6}\]
where \(\mathrm{GRAD}\) is the gradient operator in material coordinates. Eq. (6) shows that \(\mathbf{F}\) is a potential tensor field, so \(\mathbf{F}\) must satisfy the following compatibility equation in each connected component of \(\Omega\):
\[\mathrm{ROT}(\mathbf{F})=\mathbf{0}, \tag{7}\]
where \(\mathrm{ROT}\) is the rotational (curl) operator in material coordinates.
For hyperelastic materials, the Cauchy stress tensor \(\mathbf{\sigma}\) is related to the deformation gradient tensor by means of the equation
\[\mathbf{\sigma}=\frac{1}{\det(\mathbf{F})}\frac{\partial\Psi}{\partial\mathbf{F}}\cdot\bm {F}^{\intercal}, \tag{8}\]
where \(\Psi\) is the strain energy function expressed as a function of the deformation state given by \(\mathbf{F}\), \(\Psi=\mathfrak{F}(\mathbf{F})\). Obtaining this particular function is a subject of research in materials science, and many approaches are possible, ranging from phenomenological descriptions to mechanistic and statistical models.
Finally, Eqs. (4) or (5), (7) or (6), and (8) must be supplemented with appropriate boundary conditions. We distinguish here between _essential_ boundary conditions and _natural_ boundary conditions. The former define the motion of the solid at some boundary points \(\Gamma_{E}\subset\partial\Omega\):
\[\mathbf{\xi}=\bar{\mathbf{\xi}}, \tag{9}\]
whereas the latter define the traction vector at some other boundary points \(\Gamma_{N}\subset\partial\Omega\):
\[\mathbf{\sigma}\cdot\mathbf{n}=\bar{\mathbf{t}}, \tag{10}\]
where \(\mathbf{n}\) is the outwards normal vector (in the spatial configuration) and \(\bar{\mathbf{\xi}}\) and \(\bar{\mathbf{t}}\) are known values of the solid motion and spatial traction forces respectively. The material analogue of Eq. (10) is
\[\mathbf{P}\cdot\mathbf{N}=\bar{\mathbf{T}}, \tag{11}\]
where now \(\mathbf{N}\) and \(\bar{\mathbf{T}}\) are the material analogues of \(\mathbf{n}\) and \(\bar{\mathbf{t}}\).
Let us assume that it is possible to measure (for instance using Digital Image Correlation techniques [71]) the system response in terms of its motion given by the map \(\mathbf{\xi}=\chi(\mathbf{\Xi})\), that is, \(\mathbf{\xi}\) is a measurable variable. Let us also assume that we can measure the volumetric loads, \(\mathbf{b}=\mathbf{b}(\mathbf{\xi})\), as well as the prescribed traction forces \(\bar{\mathbf{t}}\) (or, in an equivalent manner, \(\mathbf{B}=\mathbf{B}(\mathbf{\Xi})\) and \(\bar{\mathbf{T}}\)). Therefore, using Eqs. (6) and (8) it is possible to express:
\[\mathbf{F} =\mathcal{A}(\mathbf{\xi}), \tag{12a}\] \[\mathbf{\sigma} =\mathcal{B}(\Psi,\mathbf{\xi}), \tag{12b}\]
for some appropriate differential operators \(\mathcal{A}\) and \(\mathcal{B}\). Therefore, it is possible to recast hyperelastic solid mechanics as
\[\mathrm{div}(\mathbf{\sigma}(\Psi,\mathbf{\xi})) =-\rho\mathbf{b}\quad\mathrm{in}\quad\Omega, \tag{13a}\] \[\mathbf{\xi} =\bar{\mathbf{\xi}}\quad\mathrm{in}\quad\Gamma_{E},\] (13b) \[\mathbf{\sigma}\cdot\mathbf{n} =\bar{\mathbf{t}}\quad\mathrm{in}\quad\Gamma_{N},\] (13c) \[\Psi =\mathfrak{F}(\mathbf{F}(\mathbf{\xi}))\quad\mathrm{in}\quad\Omega. \tag{13d}\]
where Eqs. (13a) and (13c) may eventually be substituted by their material analogues. Grouping Eqs. (13b) and (13c), it is clear that we can express computational solid mechanics for hyperelastic materials as
\[\mathcal{F}(\mathbf{\xi},\mathbf{\sigma})=\mathbf{b},\;\mathrm{in}\;\Omega, \tag{14a}\] \[\mathcal{G}(\mathbf{\xi},\mathbf{\sigma})=(\bar{\mathbf{\xi}},\bar{\mathbf{t}}) \;\mathrm{in}\;\partial\Omega,\] (14b) \[\mathcal{H}(\mathbf{\xi})=\mathbf{\sigma},\;\mathrm{in}\;\Omega, \tag{14c}\]
that is in the form of Eqs. (1) with \(u=\mathbf{\xi}\), \(v=\mathbf{\sigma}\), \(f=\mathbf{b}\) and \(g=(\bar{\mathbf{\xi}},\bar{\mathbf{t}})\).
Particularization to small strains solid mechanics. In small strains solid mechanics, it is common to work with the displacement field \(\mathbf{U}=\mathbf{\xi}-\mathbf{\Xi}\) and to define the displacement gradient tensor \(\mathbf{J}=\mathrm{GRAD}(\mathbf{U})\). In that case, the constitutive equation is formulated as
\[\mathbf{\sigma}=\mathfrak{G}(\mathbf{\varepsilon}), \tag{15}\]
where \(\mathbf{\varepsilon}\) is the Cauchy small strain tensor
\[\mathbf{\varepsilon}=\mathrm{symgrad}(\mathbf{U})=\frac{1}{2}(\mathbf{J}+\mathbf{J}^{\intercal }), \tag{16}\]
and \(\mathfrak{G}\) is a tensor map. With these considerations, the equations of the problem are now written as
\[\mathcal{F}(\mathbf{U},\mathbf{\sigma}) =\mathbf{b},\;\mathrm{in}\;\Omega, \tag{17a}\] \[\mathcal{G}(\mathbf{U},\mathbf{\sigma}) =(\bar{\mathbf{U}};\bar{\mathbf{t}}),\;\mathrm{in}\;\partial\Omega,\] (17b) \[\mathcal{H}(\mathbf{U}) =\mathbf{\sigma},\;\mathrm{in}\;\Omega, \tag{17c}\]
again in the form of Eqs. (1) with \(u=\mathbf{U}\), \(v=\mathbf{\sigma}\), \(f=\mathbf{b}\) and \(g=(\bar{\mathbf{U}},\bar{\mathbf{t}})\).
Once we have discretized the problem, our aim is to predict a motion field \(\mathbf{\xi}\) (or a displacement field \(\mathbf{U}\)) from a particular load case, expressed in terms of the volumetric loads and the natural boundary conditions1, \(\bar{\mathbf{t}}\), therefore \(\mathbf{I}(\mathbf{u},\mathbf{f},\mathbf{g})=(\mathbf{f},\mathbf{g})=(\mathbf{b},\bar{\mathbf{t}})\). With these last remarks, the PGNNIV problem is stated for finite solid mechanics as
Footnote 1: It is important to recall that, as essential boundary conditions can be measured as an output variable, it is not necessary to include them as inputs of our problem.
\[\mathbf{\xi} =\mathsf{Y}(\bar{\mathbf{t}}) \tag{18a}\] \[\mathbf{\sigma} =\mathsf{H}(\mathrm{KIN}(\mathbf{\xi}))\] (18b) \[\mathbf{x} =(\bar{\mathbf{t}},\mathbf{b})\] (18c) \[\mathbf{y} =\mathbf{\xi}\] (18d) \[\mathbf{R}(\mathbf{\xi},\mathbf{\sigma},\bar{\mathbf{t}}) =(\mathrm{div}(\mathbf{\sigma})-\rho\mathbf{b};\mathbf{\sigma}\cdot\mathbf{n}- \bar{\mathbf{t}};\mathbf{\xi}-\bar{\mathbf{\xi}}). \tag{18e}\]
where \(\mathrm{KIN}(\mathbf{\xi})\) is a selected kinematic descriptor of the strain state, such as the deformation gradient \(\mathbf{F}=\mathrm{GRAD}(\mathbf{\xi})\), the right Cauchy - Green deformation tensor \(\mathbf{C}=\mathbf{F}^{\intercal}\mathbf{F}\), or the Green - Lagrange strain tensor \(\mathbf{E}=\frac{1}{2}(\mathbf{C}-\mathbf{I})\), among others. For small strains solid mechanics, the methodology simplifies to
\[\mathbf{U} =\mathsf{Y}(\bar{\mathbf{t}}) \tag{19a}\] \[\mathbf{\sigma} =\mathsf{H}(\mathrm{symgrad}(\mathbf{U}))\] (19b) \[\mathbf{x} =(\bar{\mathbf{t}},\mathbf{b})\] (19c) \[\mathbf{y} =\mathbf{U}\] (19d) \[\mathbf{R}(\mathbf{U},\mathbf{\sigma},\bar{\mathbf{t}}) =(\mathrm{div}(\mathbf{\sigma})-\rho\mathbf{b};\mathbf{\sigma}\cdot\mathbf{n}- \bar{\mathbf{t}};\mathbf{U}-\bar{\mathbf{U}}). \tag{19e}\]
The appropriate structure and architecture of \(\mathsf{H}\) and \(\mathsf{Y}\) depend on the complexity of the material at hand, which is discussed later.
#### 2.2.3 Case study: geometry and external forces.
For illustration purposes, the case study considered in this work consists of a non-uniform biaxial test on a rectangular plate of height \(L_{1}=16\) cm and width \(L_{2}=20\) cm, under plane stress. No volumetric loads are incorporated, that is, \(\mathbf{b}=\mathbf{0}\). We impose an arbitrary compression load profile \(p=p(s)\) (where \(s\) is the coordinate along the right and top contour). To accelerate computations, we consider a load profile that is symmetric with respect to the vertical and horizontal axes and acts perpendicularly to the plate contour, as shown in Figure 1. The symmetry of the problem therefore allows for the analysis of an equivalent problem by extracting the upper-right portion of the plate and applying the corresponding symmetry boundary conditions.
### Data and operators representation
In this section we discuss how the different mathematical objects in our use case problem (scalar, vectorial and tensorial fields and operators) are represented. First, we describe how the different fields are encoded and related to the measured data. Then, we explain how the different operators involved are built. This includes both the known operators (equilibrium and compatibility) as well as the unknown relationships comprising the predictive and explanatory networks, \(\mathsf{Y}\) and \(\mathsf{H}\) respectively. Then, we discuss how physical constraints are hardwired into the ANN, so that the built PGNNIV is tailored towards the discovery of constitutive models that comply with the physics of the solid mechanics problem, thereby constraining the learning space and bypassing the parametrization of the constitutive law.
#### 2.3.1 Data structures
The data containing the nodal and element-wise variables (that is, displacements and stresses/strains respectively) are stored in array structures. We now introduce the notation used to refer to a given tensor field, represented by an array \(\mathsf{I}\). The different dimensions of the data are indexed as \(\mathsf{I}[I|J|K]\), where \(I\) is a multi-index associated with the discretization of the problem, \(J\) with the tensorial character and \(K\) with the data instance. Thus, considering that we have a data-set of size \(N\) and a discretization of size \(n_{x}\times n_{y}\), the displacement field is represented by \(\mathsf{U}[i,j|k|l]\) where \(i=1,\dots,n_{x}\), \(j=1,\dots,n_{y}\), \(k=1,2\) (2D problem) and \(l=1,\dots,N\), so that
\[\mathsf{U}[i,j|k|l]=u_{k}(x_{i},y_{j})\]
is the \(k\) component of the displacement field evaluated at \((x_{i},y_{j})\) (that is, the node \((i,j)\)) corresponding to the data \(l\). Analogously, the strain and stress fields are represented respectively by \(\mathsf{E}[i,j|k,l|m]\) and \(\mathsf{S}[i,j|k,l|m]\) where \(i=1,\dots,n_{x}-1\), \(j=1,\dots,n_{y}-1\), \(k,l=1,2\) and \(m=1\dots,N\), such that, for instance,
\[\mathsf{E}[i,j|k,l|m]=E_{kl}\left(\frac{1}{2}(x_{i}+x_{i+1}),\frac{1}{2}(y_{j}+ y_{j+1})\right)\]
is the \(k,l\) component of the strain tensor evaluated at the element \((i,j)\) corresponding to the data \(m\).
Finally, the traction forces, \(\mathbf{\bar{t}}^{\rm top}\) and \(\mathbf{\bar{t}}^{\rm right}\), which are treated as the inputs of our problem (provided the volume forces are not considered), are represented by \(\mathtt{t}^{\rm top}[i|j|k]\) where \(i=1,\ldots,n_{x}\), \(j=1,2\), and \(k=1\ldots,N\), so that
\[\mathtt{t}^{\rm top}[i|j|k]=t_{j}^{\rm top}(x_{i},L_{1}/2),\]
and \(\mathtt{t}^{\rm right}[i|j|k]\) where \(i=1,\ldots,n_{y}\), \(j=1,2\), and \(k=1\ldots,N\), so that
\[\mathtt{t}^{\rm right}[i|j|k]=t_{j}^{\rm right}(L_{2}/2,y_{i}),\]
both associated with the data instance \(k\).
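In NumPy, this indexing convention maps naturally onto multidimensional arrays. The sizes below are illustrative placeholders, not the values used in the experiments:

```python
# A possible NumPy realization of the array notation I[I|J|K]: mesh multi-index
# first, then tensorial components, then the sample index.
import numpy as np

n_x, n_y, N = 9, 9, 1000                   # illustrative sizes only

U = np.zeros((n_x, n_y, 2, N))             # U[i, j | k | l] = u_k(x_i, y_j), sample l
E = np.zeros((n_x - 1, n_y - 1, 2, 2, N))  # E[i, j | k, l | m], strain at element (i, j)
S = np.zeros((n_x - 1, n_y - 1, 2, 2, N))  # S[i, j | k, l | m], stress at element (i, j)
t_top = np.zeros((n_x, 2, N))              # t_top[i | j | k] = t_j^top(x_i, L1/2)
t_right = np.zeros((n_y, 2, N))            # t_right[i | j | k] = t_j^right(L2/2, y_i)
```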
#### 2.3.2 Operator construction
In this section we specify the details needed to build the predictive and explanatory networks (Y and H), as well as the constraint operator \(\mathbf{R}\). The complexity of a PGNNIV adapted to solid mechanics stems from the architecture of the predictive and explanatory networks, Y and H, as they must be able to learn the nonlinearities between the variables \(\mathbf{\bar{t}}\mapsto\mathbf{U}\) or \(\mathbf{\bar{t}}\mapsto\mathbf{\xi}\) and \(\mathbf{E}\mapsto\mathbf{P}\) (or \(\mathbf{\varepsilon}\mapsto\mathbf{\sigma}\)), as explained in Section 2.2.2.
Predictive network.The predictive network must be able to represent the data variability, so typically it has an autoencoder-like structure. Its complexity is therefore associated with the latent dimensionality and structure of the volumetric loads and boundary conditions. Although more sophisticated approaches coming from Manifold Learning theory are possible for analyzing data dimensionality and structure [72], this is not the main interest of this work and therefore is out of the scope of this particular study. Here we follow a much simpler approach, where we build an ANN that is sufficiently accurate when predicting the output \(\mathbf{y}\) from an input \(\mathbf{x}\).
Since we consider a biaxial quadratic load applied to the plate, the complexity of the network depends on the data variability. For non-uniform loads, the values of the elemental loads are the input of a DNN whose output are the nodal displacements. For the uniform load, we use a much simpler autoencoder-like DNN. It is important to note that a single, sufficiently complex autoencoder-like DNN would be able to represent the data variability even in the more complex scenario (the case with the largest latent space in the process of data generation). However, we have decided to use two different network architectures to illustrate this particular feature of PGNNIVs: the predictive network is associated with data variability, rather than data nature. Therefore, we can adapt the network architecture to our problem characteristics, aiming either at a better network performance (avoiding overfitting) or at a lower computational cost.
Figure 1: **Dimensions and representation of the non-uniform biaxial test on a 2D plate.** Load \(p=p(s)\) acting perpendicularly to the contour is arbitrary, provided that it is compatible with the symmetry of the problem. We locate the origin \((x,y)=(0,0)\) at the geometrical center of the plate.
In Appendix A, the particular architecture of the two predictive networks is detailed, both for the non-uniform biaxial test and for the uniform biaxial test. In any case, although the Y network architecture was handcrafted, it is expected that the more abundant and varied the available data, the less relevant the hand-engineering of the network becomes.
Explanatory network. As in this work we restrict ourselves to the elastic regime, the input variable is the given strain state at an arbitrary element \(\mathsf{E}[i,j|k,l|m]\) (E represents the Cauchy deformation tensor for infinitesimal theory and the Green - Lagrange deformation tensor for finite strains theory) and the output variable is the associated stress state at that same element, \(\mathsf{S}[i,j|k,l|m]\) (again, S represents the Cauchy stress tensor or the first Piola-Kirchhoff tensor depending on whether we are in the infinitesimal or finite strains theory). Note that under the homogeneity assumption (and postulating that the stress state depends only on the value of the deformation at the same point), the explanatory network is a nonlinear map \(\mathbb{R}^{3}\to\mathbb{R}^{3}\), due to the symmetry of both tensors, that may be expressed symbolically as
\[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathsf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m]\right).\]
For non-local materials, given the described discretization, the explanatory network could be in principle a map \(\mathbb{R}^{3n_{x}n_{y}}\to\mathbb{R}^{3n_{x}n_{y}}\) represented as
\[\mathsf{S}[\cdot,\cdot|\cdot,\cdot|m]=\mathsf{H}\left(\mathsf{E}[\cdot,\cdot|\cdot,\cdot|m]\right).\]
In particular, H has to be able to capture the highly nonlinear dependencies that may exist between variables. This is in theory possible thanks to the universal approximation theorem: by adding more internal layers (also known as hidden layers) to the DNN model, we can provide the network with the learning capability and complexity that a particular nonlinear constitutive law might require. The explanatory network for homogeneous materials is efficiently implemented using a _convolutional_ filter that moves across the domain element by element, while expanding the features into a higher-dimensional space, as illustrated in Fig. 2. We call this type of architecture a moving Multilayer Perceptron (mMLP).
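One convenient way to implement the mMLP in PyTorch is a stack of \(1\times 1\) convolutions, which is exactly an MLP shared across all elements of the plate; the hidden width and activation below are our assumptions.

```python
# Sketch: a moving MLP as 1x1 convolutions over the 3-channel (Voigt) strain field.
import torch.nn as nn

class MovingMLP(nn.Module):
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1), nn.Tanh(),  # expand features
            nn.Conv2d(hidden, hidden, kernel_size=1), nn.Tanh(),
            nn.Conv2d(hidden, channels, kernel_size=1),             # stress, Voigt order
        )

    def forward(self, strain_field):      # shape (batch, 3, n_x - 1, n_y - 1)
        return self.net(strain_field)
```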
For the very particular case of linear elasticity, we can explicitly parameterize the explanatory network, \(\mathsf{H}(\mathbf{\varepsilon})=\mathbf{H}(\mathbf{\varepsilon};\mathbf{D})\), where \(\mathbf{D}\) is the elastic tensor and \(\sigma_{ij}=D_{ijkl}\varepsilon_{kl}\). Using the Voigt convention, the tensor \(\mathbf{D}\) is expressed, under plane stress conditions and in the most general case, as a \(3\times 3\) matrix
\[\mathbf{D}=\left(\begin{array}{ccc}d_{11}&d_{12}&d_{13}\\ d_{12}&d_{22}&d_{23}\\ d_{13}&d_{23}&d_{33}\end{array}\right), \tag{20}\]
Figure 2: **Representation of the explanatory network for the 2D plane stress problem.** On the left, the strain field computed from the displacement field predicted by Y is represented on the plate. In an homogeneous material, the DNN is fed with the strain state on each element, and the correspondent stresses are obtained. The set of weights of this DNN moves across the elements of the plate and updates after each iteration of the optimization (see Figure 3 for the whole picture), acting as a 2D-convolutional filter. We call this type of architecture a moving Multilayer Perceptron (mMLP).
where \(d_{ij}\) are free model parameters, whereas for an isotropic elastic material we have
\[\mathbf{D}=\frac{E}{1-\nu^{2}}\left(\begin{array}{ccc}1&\nu&0\\ \nu&1&0\\ 0&0&1-\nu\end{array}\right), \tag{21}\]
where the elastic modulus \(E\) and the Poisson ratio \(\nu\) are the only free fitting parameters learned during the training process.
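For reference, Eq. (21) and the map \(\mathbf{\sigma}=\mathbf{D}\mathbf{\varepsilon}\) can be coded directly; the sketch below uses the material constants of Section 2.4 and the paper's Voigt convention, and is independent of the network code.

```python
# Isotropic plane-stress elastic matrix of Eq. (21) applied to a strain state.
import numpy as np

def isotropic_plane_stress_D(E, nu):
    return (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                           [nu, 1.0, 0.0],
                                           [0.0, 0.0, 1.0 - nu]])

D = isotropic_plane_stress_D(E=1000.0, nu=0.3)   # values from Section 2.4
eps = np.array([1e-3, -3e-4, 0.0])               # illustrative Voigt strain state
sigma = D @ eps                                  # sigma = D eps in Voigt form
```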
An analogous reasoning holds for more complex parametric dependencies. It is possible to express the explanatory network \(\mathsf{H}\) as a parametric model relating the strain and the stress states, that is
\[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathbf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m];\mathbf{\Lambda}\right),\]
where \(\mathbf{\Lambda}\) are some pre-defined fitting parameters that are learned during the training step. In particular, a homogeneous material is described as
\[\mathsf{S}[i,j|\cdot,\cdot|m]=\mathbf{H}\left(\mathsf{E}[i,j|\cdot,\cdot|m];\mathbf{\Lambda}_{ij}\right).\]
In this work, this approach is illustrated with different types of materials, ranging from the simplest case of a linear elastic material under infinitesimal strain theory to a hyperelastic Ogden material under finite strains theory.
Coupling the two networks using physical constraints. The definition and subsequent formulation of the PGNNIV framework implies that the loss function includes a term proportional to the quadratic error between the predictions and true values of the output variable (equivalent to maximizing the likelihood of the data given the parameters) and other penalty terms related to some (physical) equations, e.g. equilibrium constraints. The different terms involved are:
1. Loss term associated with the measurement of the displacement field: \[\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}||\bar{\mathbf{U}}^{(i)}-\mathsf{Y}(\mathbf{t}^ {(i)})||^{2},\] (22) where \(\bar{\mathbf{U}}^{i}\) is the observed displacement corresponding to sample \(i\).
2. Constraint associated with the equilibrium equation. \[\mathbf{\nabla}\cdot\mathbf{P}=\mathbf{0},\quad\mathrm{or}\quad\mathbf{\nabla}\cdot\mathbf{\sigma }=\mathbf{0}.\] (23)
3. Constraint associated with the compatibility in the domain. \[\mathbf{E}-\frac{1}{2}\left(\mathbf{F}^{\intercal}\mathbf{F}-\mathbf{I}\right)=\mathbf{0},\quad \mathrm{or}\quad\mathbf{\varepsilon}-\frac{1}{2}(\nabla\otimes\mathbf{U}+\mathbf{U} \otimes\nabla)=\mathbf{0}.\] (24)
4. Constraint associated with the equilibrium of the stresses in the boundary. \[\mathbf{P}\cdot\mathbf{N}-\mathbf{T}=\mathbf{0},\quad\mathrm{or}\quad\mathbf{\sigma}\cdot\mathbf{n}- \mathbf{t}=\mathbf{0},\;\mathrm{in}\;\Gamma_{N}.\] (25)
5. Constraints associated with the compatibility of the displacements in the boundary. \[U_{x}(x=0,y)=0,\quad U_{y}(x,y=0)=0.\] (26)
The global cost function (which turns out to be a _virtual_ physics-informed likelihood in a Bayesian formulation or, equivalently, a regularized cost function in the most common terminology) can be computed as a weighted sum of \(\mathrm{MSE}\) and \(\mathrm{PEN}\), with \(\mathrm{PEN}\) referring to the physical terms, that is, Eqs. (23), (24), (25) and (26). As Eq. (24) may be expressed as an explicit relation between \(\mathbf{E}\) (or \(\mathbf{\varepsilon}\)) and \(\mathbf{U}\), it is directly embedded in the network architecture. Therefore, the loss is expressed as:
\[\mathrm{CF}=\mathrm{MSE}+\mathrm{PEN}, \tag{27}\]
where
\[\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left[p_{1}\|\bar{\mathbf{U}}^{(i)}-\mathsf{ Y}\left(\bar{\mathbf{t}}^{(i)}\right)\|^{2}\right], \tag{28}\]
and
\[\mathrm{PEN}=\frac{1}{N}\sum_{i=1}^{N}\left[p_{2}\|\mathbf{\nabla}\cdot\mathbf{\sigma}^{(i)}\|^{2}+p_{3}\|\mathbf{\sigma}^{(i)}\cdot\mathbf{n}-\bar{\mathbf{t}}^{(i)}\|^{2}+p_{4}\left(\|U_{x}^{(i)}(x=0,y)\|^{2}+\|U_{y}^{(i)}(x,y=0)\|^{2}\right)\right], \tag{29}\]
where the superscript \((i)\) refers to the \(i\)-th piece of data and \(p_{j},j=1,2,3,4\) are penalty coefficients that account for the relative importance of each term in the global CF (and may be seen as Lagrange multipliers that softly enforce the constraints). Recall that no penalty for the compatibility in the domain is included, since \(\mathbf{E}-\frac{1}{2}\left(\mathbf{F}^{\intercal}\mathbf{F}-\mathbf{I}\right)\) (or \(\mathbf{\varepsilon}-\frac{1}{2}(\nabla\otimes\mathbf{U}+\mathbf{U}\otimes\nabla)\)) is identically \(\mathbf{0}\).
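A minimal sketch of this loss for one batch is given below, assuming channel-first tensors (`sigma` of shape `(batch, 3, nx, ny)` in Voigt order, displacements of shape `(batch, 2, nx, ny)`), roll-based central differences, and only the right-edge traction term written out; the helper names and these shapes are our assumptions, not the paper's code.

```python
# Sketch of CF = MSE + PEN (Eqs. (27)-(29)) for a single batch in PyTorch.
import torch

def divergence(sigma, hx, hy):
    """Central-difference div(sigma); wrapped boundary rows/columns are cropped."""
    sxx, syy, sxy = sigma[:, 0], sigma[:, 1], sigma[:, 2]
    d = lambda f, dim, h: (f.roll(-1, dim) - f.roll(1, dim)) / (2 * h)
    rx = d(sxx, 1, hx) + d(sxy, 2, hy)
    ry = d(sxy, 1, hx) + d(syy, 2, hy)
    return torch.stack([rx, ry], dim=1)[..., 1:-1, 1:-1]

def cost(U_pred, U_obs, sigma, t_right, hx, hy, p=(1.0, 1.0, 1.0, 1.0)):
    mse = p[0] * ((U_obs - U_pred) ** 2).mean()
    pen = p[1] * (divergence(sigma, hx, hy) ** 2).mean()
    traction = torch.stack([sigma[:, 0, -1, :], sigma[:, 2, -1, :]], dim=1)  # n = (1, 0)
    pen = pen + p[2] * ((traction - t_right) ** 2).mean()
    pen = pen + p[3] * ((U_pred[:, 0, 0, :] ** 2).mean()      # U_x(x=0, y) = 0
                        + (U_pred[:, 1, :, 0] ** 2).mean())   # U_y(x, y=0) = 0
    return mse + pen
```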
The ANN minimization problem reads therefore:
\[\min_{\mathbf{W}}\mathrm{CF}(\mathcal{E};\mathbf{W}), \tag{30}\]
where \(\mathbf{W}\) are the network parameters and \(\mathcal{E}=\{\mathbf{\bar{t}}^{i},\mathbf{\bar{U}}^{i}|i=1,\cdots,N\}\) is a given training data-set. By minimizing this function (and ensuring that no overfitting is observed by examining the predictions for test data) we obtain predictions of displacements, stresses and strains. For simplicity, Algorithm 1 details a stochastic gradient descent version of the optimization, even if in this work we always used the Adam optimizer.
From the theoretical point of view, Eq. (30) presents a complex constrained optimization problem that has been widely studied in the context of applied mathematics, e.g. via Lagrange multipliers. However, when NNs come into play along with PDEs, the optimization becomes more involved, as the complex nature of the Pareto front, extensively studied in [73] for Physics-Informed Neural Networks (PINNs), determines that the optimum is a state where an individual loss cannot be further decreased without increasing at least one of the others; therefore the optimal set of weighting hyperparameters \(p_{i}\) cannot be inferred in advance. These weighting hyperparameters \(p_{i}\), commonly referred to as penalties, arise in a natural way if they are regarded as real numbers scaling the covariance matrix of the variables' _virtual_ maximum likelihood probability distribution. This concept was introduced in [74] and [75] in the context of state-space particle dynamics.
```
Input: PGNNIV architecture, batch size n_b, penalties p_k (k = 1,2,3,4), number of iterations M;
Data: external forces t̄^(i), measured displacements Ū^(i), i = 1, ..., N;
Initialization of PGNNIV parameters: w = w^0, j = 0;
repeat
    for i = 1, ..., n_b do
        U^(i) <- Y(t̄^(i); w)            /* Predictive network */
        E^(i) <- KIN(U^(i))              /* Green-Lagrange or Cauchy strain tensor */
        S^(i) <- H(E^(i); w)             /* Explanatory network */
    end for
    MSE = (1/n_b) * sum_{i=1}^{n_b} p_1 ||Ū^(i) - U^(i)||^2
    PEN = (1/n_b) * sum_{i=1}^{n_b} [ p_2 ||DIV(S^(i))||^2 + p_3 ||S^(i)·N - T̄^(i)||^2
                                      + p_4 ( ||U_x^(i)(x=0,y)||^2 + ||U_y^(i)(x,y=0)||^2 ) ]
    CF = MSE + PEN
    w <- w - ∇_w CF                      /* Stochastic gradient descent step */
    j <- j + 1
until j = M
Output: optimal parameters w* = w for Y and H
```
**Algorithm 1**PGNNIV learning algorithm
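A compact PyTorch realization of Algorithm 1 with the Adam optimizer mentioned above might look as follows; `Y`, `H`, `kin` and `cost` stand for the components sketched earlier (their exact signatures are our assumption), and `loader` is any iterable of `(t_bar, U_obs)` batches.

```python
# Hypothetical training loop for the PGNNIV of Algorithm 1, using Adam.
import torch

def train(Y, H, kin, cost, loader, iters=10_000, lr=1e-3):
    opt = torch.optim.Adam(list(Y.parameters()) + list(H.parameters()), lr=lr)
    step = 0
    while step < iters:
        for t_bar, U_obs in loader:
            U_pred = Y(t_bar)          # predictive network
            strain = kin(U_pred)       # Green-Lagrange or Cauchy strain operator
            sigma = H(strain)          # explanatory network
            loss = cost(U_pred, U_obs, sigma, t_bar)
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= iters:
                break
    return Y, H
```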
Figure 3 shows a graphical representation of the different structures involved (tensorial fields) and the links between them (known and unknown operators) for finite strains solid mechanics.
#### 2.3.3 Details about the discretization.
The discretization of both space and time domains lies at the basis of numerical methods. In the particular case of solid mechanics under the hypotheses considered here, time discretization turns out not to be relevant for the overall computations, since loads are applied in a quasi-static way and the sole discretization of the geometry provides a very good approximation of how the continuum solid behaves.
Traditional FEM follows a matrix-based approach that yields algebraic systems whose solution approximates that of the continuous problem. By subdividing the whole domain into small parts (_finite elements_), the PDEs governing the physical phenomena occurring in the particular geometry can be approximated by means of computable functions, generating algebraic systems even for complex geometries. However, FEM requires exact knowledge of the properties of the material and is usually time-consuming. On the contrary, PGNNIVs require no information about the material properties, since these are learned during the training process of the network, and the calculation time for the forward problem is reduced to seconds at prediction time in the online loop.
The discrete nature of methods such as FEM, which subdivide space into small elements, very closely resembles that of PGNNIVs, which comprise a number of discrete units (neurons) to represent field variables. Moreover, the differential operators commonly used in these methods are also amenable to a suitable description in the PGNNIV framework using convolutional filters.
Figure 3: **Graphical representation of the designed 2D-planar stress PGNNIV.** All significant tensorial fields of the problem are represented: the input variables (top and right tractions and volume forces, where the latter are assumed to be null so removed formally from the input), the output variables (displacement field at each nodal value), as well as the internal variables of the problem (stress and strain fields, represented in Voigt notation).
For instance, it is possible to define the discrete gradient filter GRAD acting on the nodes for obtaining values on the elements or acting on the elements for obtaining values on the nodes. For example, if \(\mathtt{w}=\mathsf{GRAD}\otimes\mathtt{U}\), then
\[\mathtt{w}[i,j|1,1|m] =\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{U}[i,j+1|1|m]+\Delta_{x} \mathtt{U}[i,j-1|1|m]\right),\] \[\mathtt{w}[i,j|1,2|m] =\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{U}[i+1,j|1|m]+\Delta_{y} \mathtt{U}[i-1,j|1|m]\right),\] \[\mathtt{w}[i,j|2,1|m] =\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{U}[i,j+1|2|m]+\Delta_{x} \mathtt{U}[i,j-1|2|m]\right),\] \[\mathtt{w}[i,j|2,2|m] =\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{U}[i+1,j|2|m]+\Delta_{y} \mathtt{U}[i-1,j|2|m]\right),\]
where \(\Delta_{x}\mathtt{U}[i,\cdot|1|m]=\mathtt{U}[i+1,\cdot|1|m]-\mathtt{U}[i-1,\cdot|1|m]\) and \(\Delta_{y}\mathtt{U}[\cdot,j|1|m]=\mathtt{U}[\cdot,j+1|1|m]-\mathtt{U}[\cdot,j-1|1|m]\). Analogously, if \(\mathtt{R}=\mathsf{GRAD}\cdot\mathtt{T}\), then
\[\mathtt{R}[i,j|1|m] =\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{T}[i,j+1|1,1|m]+\Delta_ {x}\mathtt{T}[i,j-1|1,1|m]\right)+\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{T} [i+1,j|1,2|m]+\Delta_{y}\mathtt{T}[i-1,j|1,2|m]\right),\] \[\mathtt{R}[i,j|2|m] =\frac{1}{2h_{x}}\left(\Delta_{x}\mathtt{T}[i,j+1|2,1|m]+\Delta_ {x}\mathtt{T}[i,j-1|2,1|m]\right)+\frac{1}{2h_{y}}\left(\Delta_{y}\mathtt{T} [i+1,j|2,2|m]+\Delta_{y}\mathtt{T}[i-1,j|2,2|m]\right),\]
Now we can define the different discretized differentials. For instance the 2D-discretized symmetric gradient of a vector field \(\mathbf{V}\) is \(\mathsf{SGRAD}(\mathbf{V})=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{V}+\mathbf{V} \otimes\mathsf{GRAD}\right),\) where GRAD is the discrete gradient operator. Therefore, for large strain we have
\[\mathsf{E}=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{U}+\mathbf{U}\otimes\mathsf{ GRAD}+\left(\mathsf{GRAD}\otimes\mathbf{U}\right)\left(\mathbf{U}\otimes\mathsf{GRAD} \right)\right)\]
and for small strains
\[\mathsf{E}=\mathsf{SGRAD}(\mathbf{U})=\frac{1}{2}\left(\mathsf{GRAD}\otimes\mathbf{U}+ \mathbf{U}\otimes\mathsf{GRAD}\right).\]
Similarly, the 2D-discretized divergence of a tensor field \(\mathbf{T}\) is defined as:
\[\mathsf{DIV}(\mathbf{T})=\mathsf{GRAD}\cdot\mathbf{T},\]
so we can express the equilibrium equation in both the infinitesimal and the finite strains theories as \(\mathsf{DIV}(\mathbf{\sigma})=\mathbf{0}\) or \(\mathsf{DIV}(\mathbf{P})=\mathbf{0}\) respectively.
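A plain central-difference NumPy version of these operators on interior nodes could read as follows; note that the paper's filters additionally average over neighbouring rows and columns, which we omit here for brevity.

```python
# Central-difference discrete operators on a regular grid (interior nodes only).
import numpy as np

def grad(U, hx, hy):
    """U: (nx, ny, 2) -> J[i, j, k, l] = dU_k/dx_l."""
    J = np.zeros(U.shape[:2] + (2, 2))
    J[1:-1, :, :, 0] = (U[2:, :] - U[:-2, :]) / (2 * hx)
    J[:, 1:-1, :, 1] = (U[:, 2:] - U[:, :-2]) / (2 * hy)
    return J

def sym(J):
    """Symmetric part: the small-strain tensor SGRAD(U)."""
    return 0.5 * (J + np.swapaxes(J, -1, -2))

def div(T, hx, hy):
    """T: (nx, ny, 2, 2) -> R[i, j, k] = dT_kl/dx_l."""
    R = np.zeros(T.shape[:2] + (2,))
    R[1:-1, :, :] += (T[2:, :, :, 0] - T[:-2, :, :, 0]) / (2 * hx)
    R[:, 1:-1, :] += (T[:, 2:, :, 1] - T[:, :-2, :, 1]) / (2 * hy)
    return R
```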
### Data generation and training process
In this work we generate synthetic data, although the methodology is the same for real experimental data. Synthetic data generation is more practical, as it allows for error validation by comparing with reference solutions, and inexpensive, given that an accurate numerical solution is available via FEM analysis.
Small strains: linear, softening and hardening elastic materials. For the creation of the linear, softening and hardening materials, we used Matlab, in co-simulation with Abaqus CAE/6.14-2. Matlab was used to automatically and iteratively generate an Abaqus input file containing the geometry and load profiles. Once each numerical simulation is completed, the results are stored back into Matlab. For all the test cases, the geometry is the same, whereas the variability in the data-set is achieved by randomly changing the load profiles, so that all the examples correspond to different experiments. The load profiles are parabolic and are generated for both the right and top contours.
We consider three elastic materials with different constitutive laws. On the one hand, we choose an isotropic linear elastic material with elastic modulus \(E=1000\) Pa and Poisson's coefficient \(\nu=0.3\). On the other hand, we consider two nonlinear materials, one with softening properties and another one with hardening properties. In Abaqus, they were modeled as plastic materials with no unloading effects caused by the removal of the load, and with strain ranges confined to very small values, thus ensuring that the nonlinear constitutive law complies with the infinitesimal strains hypothesis. A relation of the type \(\sigma=K\varepsilon^{n}\) was used, with the values of \(K\) and \(n\) specified in Table 1.
The data-set comprises \(N=10^{3}\) FEM-simulations for the linear material and \(N=10^{4}\) FEM-simulations for the hardening and softening materials.
Finite strains: Ogden-like hyperelastic material.For the finite strains case, an incompressible Ogden-like hyperelastic material of order 3 is used for the data-set generation. The incompressible Ogden hyperelastic material of order \(m\) is defined in terms of its strain-energy density function [76]:
\[\Psi(\mathbf{C})=\Psi(\lambda_{1},\lambda_{2},\lambda_{3})=\sum_{p=1}^{m}\frac{\mu_{p}}{\alpha_{p}}\left(\lambda_{1}^{\alpha_{p}}+\lambda_{2}^{\alpha_{p}}+\lambda_{3}^{\alpha_{p}}-3\right). \tag{31}\]
In Table 2 we report the material parameters used for the data generation. We produce \(N=10^{4}\) examples corresponding to uniform biaxial tests, where \(\lambda_{1},\lambda_{2}\in[1;1.10]\). For an incompressible membrane under biaxial deformation, assuming a plane stress state, the problem has an analytical solution when using an Ogden strain-energy function [77]. The displacement fields corresponding to uniform biaxial deformations are
\[U_{x}(x,y)=\lambda_{1}x,\quad U_{y}(x,y)=\lambda_{2}y,\quad U_{z}(x,y,z)=\frac{1}{\lambda_{1}\lambda_{2}}z,\]
so, the non-vanishing components of the Green-Lagrange deformation tensor, \(\mathbf{E}\), are
\[E_{xx}=\frac{1}{2}(\lambda_{1}^{2}-1),\quad E_{yy}=\frac{1}{2}(\lambda_{2}^{2 }-1),\quad E_{zz}=\frac{1}{2}((\lambda_{1}\lambda_{2})^{-2}-1).\]
As we are assuming plane stress, that is \(\sigma_{zz}=\sigma_{xz}=\sigma_{yz}=0\), the non-vanishing components of the first Piola-Kirchhoff stress tensor are
\[P_{xx}=\frac{1}{\lambda_{1}}\sum_{k=1}^{3}\mu_{k}\left(\lambda_{1}^{\alpha_{k} }-(\lambda_{1}\lambda_{2})^{-\alpha_{k}}\right),\quad P_{yy}=\frac{1}{\lambda_ {2}}\sum_{k=1}^{3}\mu_{k}\left(\lambda_{2}^{\alpha_{k}}-(\lambda_{1}\lambda_{2} )^{-\alpha_{k}}\right).\]
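As a sanity check, the analytical stresses above can be evaluated directly with the Table 2 parameters; this snippet is purely illustrative and independent of the network code.

```python
# Analytical first Piola-Kirchhoff stresses for the uniform incompressible
# biaxial test, with the Ogden parameters of Table 2.
import numpy as np

mu = np.array([281.0, -280.0, 0.31])      # Pa
alpha = np.array([1.66, 1.61, 38.28])

def P_biaxial(l1, l2):
    Pxx = np.sum(mu * (l1**alpha - (l1 * l2)**(-alpha))) / l1
    Pyy = np.sum(mu * (l2**alpha - (l1 * l2)**(-alpha))) / l2
    return Pxx, Pyy

print(P_biaxial(1.05, 1.02))              # stretches within the sampled range [1, 1.10]
```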
Training process.For the evaluation of the methodology, we have trained four PGNNIVs corresponding to four cases:
* Linear material with parametric explanatory network.
* Linear, softening and hardening materials with non-parametric explanatory network.
* Ogden-like material with parametric explanatory network.
* Ogden-like material with non-parametric explanatory network.
For all the data-sets considered, a number of hyperparameters have been tuned to obtain the results presented below with the different networks, namely the learning rate \(\beta\) and the four penalty coefficients \(p_{i}\), \(i=1,\ldots,4\). The specific values of these hyperparameters are reported together with the different network topologies in Appendix A.2.
## 3 Results
When used as forward solvers, PGNNIVs can either predict measurable variables if force-displacement data is available, for example, through Digital Image Correlation (DIC) techniques, or explain the internal state of the solid if this is needed for a certain application. In this section, we validate the performance of PGNNIVs acting as a forward-solver against standard FEM solutions for the plate using the different materials described in Section 2.4, and also as a method for constitutive equation discovery.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Material** & \(K\) [Pa] & \(n\) [-] \\ \hline Softening & \(18.69\) & \(0.45\) \\ \hline Hardening & \(1.869\times 10^{12}\) & \(3.5\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameter values for the softening and hardening materials.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter** & **Value** \\ \hline \(\mu_{1}\) & \(281\) Pa \\ \hline \(\mu_{2}\) & \(-280\) Pa \\ \hline \(\mu_{3}\) & \(0.31\) Pa \\ \hline \(\alpha_{1}\) & \(1.66\) \\ \hline \(\alpha_{2}\) & \(1.61\) \\ \hline \(\alpha_{3}\) & \(38.28\) \\ \hline \end{tabular}
\end{table}
Table 2: Parameter values for the Ogden hyperelastic material.
### Predictive capacity
#### 3.1.1 Infinitesimal strains case
We first evaluate the prediction capacity of PGNNIVs for the different tested materials under random parabolic loads. For a quantitative evaluation of the predictive capacity of the PGNNIV, we define the Relative Error (\(\mathrm{RE}\)) of an array field \(\mathrm{I}\) as:
\[\mathrm{RE}(\mathsf{I})=\frac{\sum_{I,J,K}\left(\hat{\mathsf{I}}[I|J|K]-\mathsf{I}[I|J|K]\right)^{2}}{\sum_{I,J,K}\mathsf{I}[I|J|K]^{2}}, \tag{32}\]
where \(\hat{\mathrm{I}}\) is the predicted value and \(\mathrm{I}\) the value obtained using FEM. For instance, for the displacement field \(\mathbf{U}\) represented by the array \(\mathrm{U}\):
\[\mathrm{RE}(\mathsf{U})=\frac{\sum_{i,j,k,l}\left(\hat{\mathsf{U}}[i,j|k|l]-\mathsf{U}[i,j|k|l]\right)^{2}}{\sum_{i,j,k,l}\mathsf{U}[i,j|k|l]^{2}}.\]
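In code, Eq. (32) is a one-liner; the hypothetical helper below would be applied, for instance, to the predicted and FEM displacement arrays.

```python
# Relative error of Eq. (32) for an arbitrary array field.
import numpy as np

def relative_error(pred, ref):
    return np.sum((pred - ref) ** 2) / np.sum(ref ** 2)
```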
with Eqs. (20) and (21) by computing the relative errors for a given parameter \(\lambda\), defined as:
\[\epsilon_{r}(\lambda)=\frac{|\hat{\lambda}-\lambda|}{\lambda}, \tag{36}\]
where \(\hat{\lambda}\) and \(\lambda\) are the predicted and real values respectively. The learned anisotropic and isotropic tensors, up to a precision of \(1\) Pa, are:
\[\mathbf{D}_{\rm aniso}=\begin{pmatrix}1099&330&0\\ 330&1099&0\\ 0&0&511\end{pmatrix},\quad\mathbf{D}_{\rm iso}=\begin{pmatrix}1098&331&0\\ 331&1098&0\\ 0&0&767\end{pmatrix}.\]
The errors of the elastic tensor when using the anisotropic or the isotropic elastic material model are respectively \(\varepsilon_{r}(\mathbf{D}_{\rm aniso})=14.4\%\) and \(\varepsilon_{r}(\mathbf{D}_{\rm iso})=0.2\%\), i.e., when the assumed hypotheses are true, the explanatory capacity of the network increases. Nevertheless, the general anisotropic model has a certain explanatory capacity, as we may detect the supplementary structural symmetries in the resulting elastic tensor, that is, \(d_{13},d_{23}\ll d_{11},d_{12},d_{22},d_{33}\), \(d_{11}\simeq d_{22}\). The last symmetry condition, \(d_{33}=d_{11}-d_{12}\), is not fulfilled, as the data-set is not very rich in large pure shear-stress states, so the model is not able to detect this symmetry.
Moving to specific structural parameter identification, Table 5 shows that the model is able to accurately predict the different elastic parameters. As commented before, the worst prediction is observed for the anisotropic model and the parameter that correlates shear stresses and strains. If, using the anisotropic model, we were interested in finding the value of the isotropic model parameters \(E\) and \(\nu\), it is possible to compute \(\nu\) using the relations
\[\nu=\frac{d_{12}}{d_{11}}=\frac{d_{12}}{d_{22}},\]
and then to compute \(E\) using
\[E=d_{11}(1-\nu^{2})=d_{22}(1-\nu^{2})=d_{12}\frac{1-\nu^{2}}{\nu}=d_{33}(1+\nu).\]
Of course, we obtain a different accuracy for each of these expressions. Using the anisotropic model, we obtain values of \(0.1\%\) and \(0.1\%\) for \(\epsilon_{r}(\nu)\) and values of \(0.04\%\), \(0.04\%\), \(0.4\%\) and \(34\%\) for \(\epsilon_{r}(E)\), in agreement with the previous observations.
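These relations can be checked numerically against the learned tensor \(\mathbf{D}_{\rm aniso}\) reported above; the snippet below reproduces the four estimates of \(E\) (the reference values \(E=1000\) Pa and \(\nu=0.3\) come from Section 2.4).

```python
# Recovering the isotropic parameters from the learned anisotropic tensor.
import numpy as np

D = np.array([[1099.0, 330.0, 0.0],
              [330.0, 1099.0, 0.0],
              [0.0, 0.0, 511.0]])
nu = D[0, 1] / D[0, 0]                                    # nu = d12 / d11
E_estimates = [D[0, 0] * (1 - nu**2), D[1, 1] * (1 - nu**2),
               D[0, 1] * (1 - nu**2) / nu, D[2, 2] * (1 + nu)]
errors = [abs(E - 1000.0) / 1000.0 for E in E_estimates]  # relative errors vs E = 1000 Pa
```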
State model discovery.While the predictive capacity of PGNNIVs does not necessarily surpass that of a classical (unconstrained) NN, the significant improvement is visible when assessing the explanatory capacity of the PGNNIV, which can be evaluated by its ability to learn the material constitutive law. We perform a virtual uniaxial test using the explanatory network, which corresponds to the functional representation of \(\sigma_{xx}=\mathsf{H}_{1}(\varepsilon,0,0)\) for \(\varepsilon\in[\varepsilon_{\rm min};\varepsilon_{\rm max}]\) and we compare the PGNNIV predictions with a virtual uniaxial test produced with FEM, as described in Section 2.4. Results are shown in Figure 4.
The explanatory error is quantified as the normalized area confined between the real uniaxial test curve and the PGNNIV-predicted one in Figure 4. It is expressed as:
\[\rm RE(\mathsf{H})=\sqrt{\frac{\int_{\varepsilon_{\rm min}}^{\varepsilon_{ \rm max}}(\hat{\sigma}_{xx}(\varepsilon)-\sigma_{xx}(\varepsilon))^{2}\,d \varepsilon}{\int_{\varepsilon_{\rm min}}^{\varepsilon_{\rm max}}\sigma_{xx} ^{2}(\varepsilon)\,d\varepsilon}}, \tag{37}\]
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter, \(\lambda\)** & **Relative error, \(\epsilon_{r}(\lambda)\) (\%)** \\ \hline Anisotropic model & \\ \hline \(d_{11}\) & \(0.02\) \\ \(d_{12}\) & \(0.08\) \\ \(d_{13}\) & \(\infty\) \\ \(d_{22}\) & \(0.02\) \\ \(d_{23}\) & \(\infty\) \\ \(d_{33}\) & \(33.59\) \\ \hline \hline Isotropic model & \\ \hline \(E\) & \(0.20\) \\ \hline \(\nu\) & \(0.52\) \\ \hline \end{tabular}
\end{table}
Table 5: Relative errors of the identified model parameters.
where \(\hat{\sigma}_{xx}\) and \(\sigma_{xx}\) are the predicted and FEM stresses respectively, which result from strains \(\varepsilon\in[\varepsilon_{min};\varepsilon_{max}]\).
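Numerically, Eq. (37) can be evaluated with the trapezoidal rule on a sampled uniaxial test curve, as in this hypothetical helper.

```python
# Explanatory error of Eq. (37) evaluated on sampled curves.
import numpy as np

def explanatory_error(eps, sigma_pred, sigma_ref):
    num = np.trapz((sigma_pred - sigma_ref) ** 2, eps)
    den = np.trapz(sigma_ref ** 2, eps)
    return np.sqrt(num / den)
```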
#### 3.2.2 Finite strains
We explore now the explanatory capacity for the finite strains case. If \(\lambda\in[\lambda_{\min};\lambda_{\max}]\) is the longitudinal stretch, \(\lambda_{1}\), the relative explanatory errors for a given transversal stretch \(\lambda_{2}\) are defined as:
\[\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2}) =\sqrt{\frac{\int_{\lambda_{\min}}^{\lambda_{\max}}(\hat{P}_{xx} (\lambda,\lambda_{2})-P_{xx}(\lambda,\lambda_{2}))^{2}\,\mathrm{d}\lambda}{ \int_{\lambda_{\min}}^{\lambda_{\max}}P_{xx}^{2}(\lambda,\lambda_{2})\, \mathrm{d}\lambda}}, \tag{38}\] \[\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2}) =\sqrt{\frac{\int_{\lambda_{\min}}^{\lambda_{\max}}(\hat{P}_{yy} (\lambda,\lambda_{2})-P_{yy}(\lambda,\lambda_{2}))^{2}\,\mathrm{d}\lambda}{ \int_{\lambda_{\min}}^{\lambda_{\max}}P_{yy}^{2}(\lambda,\lambda_{2})\, \mathrm{d}\lambda}}. \tag{39}\]
Note that, as the roles of \(x\) and \(y\) are symmetrical in the considered biaxial test (which is uniform, meaning that \(P_{xx}\) on the top contour has the same value as \(P_{yy}\) on the right contour), the indicated errors are sufficient to illustrate the explanatory capacity of the method. For structural parameter identification, the formula used for error quantification is Eq. (36).
Parameter identification.As explained previously, we first explicitly state the parametric shape of the constitutive equation, that is, we prescribe the material to be Ogden-like. Under these assumptions and for the uniform biaxial test
\begin{table}
\begin{tabular}{|l|l|} \hline
**Type of material** & \(\mathrm{RE}(\mathsf{H})\) (\%) \\ \hline Linear & 3.02 \\ \hline Softening & 0.96 \\ \hline Hardening & 5.92 \\ \hline \end{tabular}
\end{table}
Table 6: Explanatory errors for the \(\mathsf{H}\) model subjected to uniform uniaxial test.
Figure 4: **PGNNIV prediction versus FEM solution of the uniaxial test curve for the different data-sets. We observe good agreement between FEM solution (continuous line) and PGNNIV prediction (dashed line), for the softening (a), linear (b) and hardening (c) materials.**
considered, the constitutive relation writes
\[P_{xx}=\frac{1}{\sqrt{2E_{xx}+1}}\sum_{k=1}^{3}\mu_{k}\left[(2E_{xx}+1)^{\alpha_{k}/2}-\left((2E_{xx}+1)(2E_{yy}+1)\right)^{-\alpha_{k}/2}\right],\]
\[P_{yy}=\frac{1}{\sqrt{2E_{yy}+1}}\sum_{k=1}^{3}\mu_{k}\left[(2E_{yy}+1)^{\alpha_{k}/2}-\left((2E_{xx}+1)(2E_{yy}+1)\right)^{-\alpha_{k}/2}\right].\]
Therefore, the parameters \(\alpha_{k}\), \(\mu_{k}\), \(k=1,2,3\) are, in principle, the ones that ought to be learned by the explanatory network. We obtain parameter values of \(\mu_{1}=276\,\mathrm{Pa}\), \(\mu_{2}=-277\,\mathrm{Pa}\), \(\mu_{3}=0.31\,\mathrm{Pa}\), \(\alpha_{1}=1.53\), \(\alpha_{2}=1.47\) and \(\alpha_{3}=38.32\). The relative errors for the different parameters are shown in Table 7. It is important to note that some parameters are predicted less accurately than others, i.e., they are superfluous. This fact relates to the capacity of each of the parameters to explain the material response, as observed in Fig. 5, where we compare the theoretical constitutive relation with the one obtained using the learned parameters. The explanatory errors are reported in Table 8, adding evidence of the explanatory power of the method despite the discrepancies in some parameters.
State model discovery.We now evaluate the model discovered by the PGNNIV with a virtual biaxial test using the explanatory network. The functional representation is now \((P_{xx},P_{yy},P_{xy})=\mathsf{H}(E_{xx},E_{yy},0)\) for \(E_{xx}\in[E_{\mathrm{min}};E_{\mathrm{max}}]\) and we compare the PGNNIV predictions with the model used for the data generation.
The results are shown in Fig. 6 for three different values of \(\lambda_{2}\), and the errors, computed according to Eqs. (38) and (39), are displayed in Table 8.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Parameter**, \(\lambda\) & **Relative error**, \(\epsilon_{r}(\lambda)\) (\%) \\ \hline \(\mu_{1}\) & \(1.7\) \\ \(\mu_{2}\) & \(-1.1\) \\ \(\mu_{3}\) & \(0.5\) \\ \(\alpha_{1}\) & \(7.7\) \\ \(\alpha_{2}\) & \(8.6\) \\ \(\alpha_{3}\) & \(0.1\) \\ \hline \end{tabular}
\end{table}
Table 7: Relative errors of the identified model parameters for the Ogden material.
Figure 5: **Parametric PGNNIV prediction versus analytic solution of the uniform biaxial test curve for the Ogden material.** We observe good agreement between the analytical solution (continuous line) and the PGNNIV prediction (dashed line) for the different values of \(\lambda_{2}\). This indicates that the network has a good explanatory capacity even though some superfluous model parameters are not accurately fitted.
## 4 Discussion, conclusions and future work
Throughout this work we have presented the mathematical foundations of PGNNIVs in the field of computational solid mechanics, which proves to be a particularly interesting niche for the use of such a methodology. We have demonstrated that PGNNIVs have both predictive and explanatory capacity:
* **Predictive capacity**: PGNNIVs are able to accurately predict the solid response to new external stimuli in real time, something fundamental for optimization, control and probabilistic problems. They are also able to predict not only the solid response in terms of the displacement field, but also the deformation and stress fields, without the need for any extra post-processing. This has been demonstrated in Section 3.1, where we have obtained relative errors always below \(10\%\). Controlling non-primary fields is sometimes important in engineering problems, as high stresses cause damage, plasticity or structural failure. As the explanatory network, once trained, encodes all the information about the material properties, it can be used for the prediction of stresses directly from the displacement fields, if necessary.
* **Explanatory capacity:** PGNNIVs are able to unveil hidden state model equations, that is, the constitutive equations of computational solid mechanics. First, for parameter identification and fitting, PGNNIVs are able to identify inherent material symmetries (such as isotropy) and also to predict the value of the structural model parameters with high accuracy. In this context, PGNNIVs are in a certain sense an alternative to conventional least-squares minimization [78] (e.g. using standard methods such as the Levenberg-Marquardt algorithm), but making use of the software and hardware tools associated with ANN technology: Graphical Processor Units (GPUs) and Tensor Processor Units (TPUs), distributed and cloud computation, scalability, transfer and federated learning strategies, among others. In addition, PGNNIVs address the more challenging problem of model-free unravelling of nonlinear material constitutive laws. In Section 3.2, we have demonstrated the explanatory capacity of PGNNIVs both for parameter identification and state model discovery with many examples (linear and nonlinear materials, both in the infinitesimal and finite strains frameworks). The relative error when predicting structural parameters is always below \(2\%\), except when the data-set does not contain
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(Y\) stretch ratio & \multicolumn{2}{c|}{**Parametric model**} & \multicolumn{2}{c|}{**Non-parametric model**} \\ \hline & \(\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{xx}(\mathsf{H};\lambda_{2})\) (\%) & \(\mathrm{RE}_{yy}(\mathsf{H};\lambda_{2})\) (\%) \\ \hline \(\lambda_{2}=1.02\) & 0.98 & 2.65 & 0.34 & 3.57 \\ \hline \(\lambda_{2}=1.05\) & 1.15 & 1.93 & 0.33 & 1.77 \\ \hline \(\lambda_{2}=1.08\) & 1.32 & 1.10 & 0.39 & 1.52 \\ \hline \end{tabular}
\end{table}
Table 8: Explanatory errors for the finite strains problem subjected to uniform biaxial test.
Figure 6: **Non-parametric PGNNIV prediction versus analytic solution of the biaxial test curve for the Ogden material.** We observe good agreement between the analytical solution (continuous line) and the PGNNIV prediction (dashed line) for the different values of \(\lambda_{2}\).
information about the material response to stimuli associated with a certain parameter, or when the parameter itself does not have a direct impact on the explanatory capacity (superfluous parameters). Besides, the explanatory relative error is below \(10\%\) for all the cases analyzed.
One important characteristic of PGNNIVs related to this double capacity is the fact that it is possible to decouple two sources of variability in the data obtained from a mechanical system that can be measured using any kind of sensor, namely, the stimuli variability and the variability related to system response.
* The stimuli variability is stored in the predictive network, which acts as an autoencoder. The encoder is able to map data that lives in a space of dimension \(D\), to a latent space of dimension \(d\ll D\). The size of the latent space is therefore informative about the data variability, and the values of the latent variables are a compressed representation of data. In addition to theoretical considerations, this fact has important practical consequences: if we know the sources of variability of our system, it is possible to design the predictive network accordingly.
* The physically interpretable knowledge, that is, the constitutive equation of the material, is delocalized, spread and diluted in the weights of the predictive network, but is also encoded in the relation \(\mathbf{\varepsilon}\mapsto\mathbf{\sigma}\) or \(\mathbf{E}\mapsto\mathbf{P}\) that is learned by the explanatory network, in a much more structured way. Indeed, if the aim is to identify and adjust some structural parameters, this is possible. Otherwise, the constitutive model as a whole may also be unravelled using the expressiveness of ANN methods.
In that sense, PGNNIVs are knowledge generators from data, as most ML techniques are, with the difference that for this particular method the physical knowledge is directly distilled into a separate component. This particularity is what makes the difference between PINNs and PGNNIVs. In PINNs, data and mathematical physics models are seamlessly integrated for solving the parametric PDEs of a given problem [42, 68], but, by construction, the information cannot be extrapolated to other situations. That means that, in the context of solid mechanics, a network trained for a given problem using PINNs cannot be used for predicting the response of the system under different volume loads or boundary conditions, which greatly weakens its ability as a predictive method. PGNNIVs overcome this difficulty precisely by distilling the physical information from the intrinsic variability of the stimuli.
There is another paramount characteristic of PGNNIVs for computational solid mechanics that has been largely discussed and explains their emergence: there is no need to have access to the values of internal variables, which are, strictly speaking, non-measurable, as they are mathematical constructs coming from a scientific theory. In that sense, even if the bases for thermodynamically consistent ANNs for constitutive equations in solid mechanics have been investigated [79], it is important to recall that stress fields, such as \(\mathbf{P}\) or \(\mathbf{\sigma}\), are not accessible without extra hypotheses (specific geometry or load configurations). This fundamental issue should not be overlooked, and many recent works have pursued this goal. The Efficient Unsupervised Constitutive Law Identification and Discovery (EUCLID) method is one of the most acclaimed efforts in that direction, either using sparse identification [80, 45, 81], clustering [82], Bayesian methods [83] or ANNs [47]. However, the EUCLID paradigm relies on the fact that the geometry and loads are rich enough to ensure strain-stress field variability in a single specimen. When the geometry or the data acquisition capabilities do not satisfy this requirement, the only possibility is to take action on the data-set or the network, or at least to track the different load conditions and incorporate them into the computational pipeline.
Finally, among the different PIML methods, PGNNIVs are more transparent than other approaches that have proven very performant for computing the evolution of dynamical systems by incorporating thermodynamical constraints. Structure Preserving Neural Networks enforce the first and second laws of thermodynamics as a regularization term [84]. Even if in the cited work the GENERIC structure allows for a split between reversible and non-reversible (dissipative) components, the physical information is again diluted in all the network weights, rather than in some specific components. A dimensionality reduction of the dynamic data was also explored [85]. This may also be interpreted as an information reduction to distil physical knowledge, although the interpretability remains opaque. In PGNNIVs, however, interpretable physical information (that is, knowledge) is located in specific ANN components.
Nevertheless, the presented methodology still has some limitations and there exists room for exploration in several directions:
1. The data requirements for the problem at hand are high. In this work, we have generated data synthetically, but in reality sensors usually collect noisy data from experimental tests, and there are also important limitations concerning the size of the data-sets. A probabilistic (Bayesian) viewpoint will enable a new interpretation of PGNNIVs in the _small data_ regime, although this methodology is rather intended for systems where intensive data mining is possible, in which data quantity prevails over data quality.
2. Defining a suitable architecture for the Y and H networks is not a simple task and requires an iterative process, including the tuning of many hyperparameters. In addition, PGNNIVs require extra hyper-parameters, i.e. penalty coefficients \(p_{i}\), making the process even more involved and time consuming. We have presented some
insights into the complexity and architecture of both predictive and explanatory networks, related to both their prediction and explanation character, but either great intuition for network design or time-intensive trial-and-error iterative testing is needed.
3. In this work, we have made use of relatively coarse discretizations, but finer meshes will result in more expensive training processes due to the much larger number of parameters required. More powerful computational strategies (distributed computing, parallelization) as well as more advanced hardware (GPUs and TPUs) will enable the acceleration of the PGNNIV training processes, although the problem of dealing with multidimensional and unstructured meshes still remains open.
Many challenges lie ahead in the development of a more general PGNNIV framework. Next lines of research will address the formulation of PGNNIVs under finite strain assumptions on a much more solid theoretical basis, such as those presented in some recent works [79, 86, 87], leveraging their predictive and explanatory power. Furthermore, extensions of the 2D planar stress architecture to general 3D problems with more complex geometries and load scenarios, as well as to more complex constitutive laws that might depend on time (visco-elasticity) or heterogeneous conditions, still pose major challenges for the future.
Finally, the use of real data from sensors, for example through Digital Image or Volume Correlation tests (DIC, DVC) [71], or even more advanced methods such as Finite Element Model Updating (FEMU) [88] or the Virtual Fields Method (VFM) [89], will put the applicability of PGNNIVs to real scientific problems in the field of engineering to the test.
In conclusion, we have demonstrated that, in the context of computational solid mechanics, PGNNIVs are a family of ANNs that accurately predict measurable and non-measurable variables such as displacement and stress fields in real time, and that are also able to describe or unravel the constitutive model with high accuracy for different linear and nonlinear (hyper-)elastic materials. Even if this work is preliminary, the ingredients it comprises correspond to a general approach, and the methodology can be applied to cases of scientific interest with the necessary adaptations of the network architectures.
|
2306.02157 | Transforming to Yoked Neural Networks to Improve ANN Structure | Most existing classical artificial neural networks (ANN) are designed as a
tree structure to imitate neural networks. In this paper, we argue that the
connectivity of a tree is not sufficient to characterize a neural network. The
nodes of the same level of a tree cannot be connected with each other, i.e.,
these neural units cannot share information with each other, which is a major
drawback of ANN. Although ANN has been significantly improved in recent years
to more complex structures, such as the directed acyclic graph (DAG), these
methods also have unidirectional and acyclic bias for ANN. In this paper, we
propose a method to build a bidirectional complete graph for the nodes in the
same level of an ANN, which yokes the nodes of the same level to formulate a
neural module. We call our model YNN for short. YNN promotes the information
transfer significantly which obviously helps in improving the performance of
the method. Our YNN can imitate neural networks much better compared with the
traditional ANN. In this paper, we analyze the existing structural bias of ANN
and propose a model YNN to efficiently eliminate such structural bias. In our
model, nodes also carry out aggregation and transformation of features, and
edges determine the flow of information. We further impose auxiliary sparsity
constraint to the distribution of connectedness, which promotes the learned
structure to focus on critical connections. Finally, based on the optimized
structure, we also design small neural module structure based on the minimum
cut technique to reduce the computational burden of the YNN model. This
learning process is compatible with the existing networks and different tasks.
The obtained quantitative experimental results reflect that the learned
connectivity is superior to the traditional NN structure. | Xinshun Liu, Yizhi Fang, Yichao Jiang | 2023-06-03T16:56:18Z | http://arxiv.org/abs/2306.02157v3 | # Transforming to Yoked Neural Networks to Improve ANN Structure
###### Abstract
Most existing classical artificial neural networks (ANN) are designed as a tree structure to imitate neural networks. In this paper, we argue that the connectivity of a tree is not sufficient to characterize a neural network. The nodes of the same level of a tree cannot be connected with each other, i.e., these neural units cannot share information with each other, which is a major drawback of ANN. Although ANN has been significantly improved in recent years to more complex structures, such as the directed acyclic graph (DAG), these methods also have a unidirectional and acyclic bias for ANN. In this paper, we propose a method to build a bidirectional complete graph for the nodes in the same level of an ANN, which yokes the nodes of the same level to formulate a neural module. We call our model YNN for short. YNN promotes information transfer significantly, which clearly helps in improving the performance of the method. Our YNN can imitate neural networks much better compared with the traditional ANN. In this paper, we analyze the existing structural bias of ANN and propose a model, YNN, to efficiently eliminate such structural bias. In our model, nodes also carry out aggregation and transformation of features, and edges determine the flow of information. We further impose an auxiliary sparsity constraint on the distribution of connectedness, which promotes the learned structure to focus on critical connections. Finally, based on the optimized structure, we also design a small neural module structure based on the minimum cut technique to reduce the computational burden of the YNN model. This learning process is compatible with existing networks and different tasks. The obtained quantitative experimental results show that the learned connectivity is superior to the traditional NN structure.
## 1 Introduction
Deep learning has successfully shifted feature engineering from manual to automatic design and enables optimization of the mapping function from sample to feature. Consequently, the search for effective neural networks has gradually become an important and practical direction. However, designing the architecture remains a challenging task. Some studies explore the impact of depth [1,2,3] and the type of convolution [4,5] on performance. Moreover, some researchers have attempted to simplify architecture design. VGGNet [6] was directly stacked from a series of convolution layers with plain topology. To better suit the optimization process of gradient descent, GoogleNet [7] introduced parallel modules, while Highway networks [8] employed gating units to regulate information flow, resulting in elastic topologies. Driven by the significance of depth, the residual block, consisting of a residual mapping and a shortcut, was introduced in ResNet [9]. Topological changes in neural networks successfully scaled them up to hundreds of layers. The proposed residual connectivity was widely adopted and subsequently applied in other works such as MobileNet [10,11] and ShuffleNet [12]. Divergent from these relatively sparse topologies, DenseNet [13] wired blocks densely to fully leverage feature reuse. Recent advances in computer vision [25,26] also explored neural architecture search (NAS) methods [14,15,16] to search for convolutional blocks. More recently, Yuan proposed a topological perspective that represents neural networks as directed acyclic graphs (DAG) [29], enhancing the topological capabilities of artificial neural networks (ANNs). However, these approaches suffer from the bias of unidirectional and acyclic structures, limiting the free transmission of signals in the network.
At the heart of our innovation lies a critical reimagining of traditional ANNs. Current NNs operate on asynchronous tensor flows, typically organized hierarchically in a tree-like structure. However, this approach inadvertently hampers the nodes within each level from communicating effectively, relegating them to mere information carriers devoid of meaningful interaction. This inherent limitation substantially diminishes the potential of ANNs, impeding their full capabilities.
Our work transcends these constraints by introducing a paradigm shift. We present a method that enables synchronous communication among nodes within the same level, a fundamental departure from the status quo. This transformative adjustment yields a remarkable enhancement in information transformation, thereby significantly boosting the overall capacity of ANN structures. By fostering a collaborative environment among nodes, our approach leverages their collective power to unlock unprecedented capabilities.
Particularly, what sets our research apart is its inspiration drawn from the intricate dynamics of biological neural systems. Unlike the traditional stacked unit approach, where neural elements operate in isolation, our approach mirrors the cooperative nature of biological neural modules. In these systems, multiple neural units collaboratively execute precise functional implementations, resulting in exquisite performance. Our innovation is poised to bridge the gap between artificial and biological neural networks, thus propelling ANN structures closer
to the remarkable efficiency of their natural counterparts.
Existing efforts in neural network connectivity have primarily focused on tree structures, where neural units at the same level cannot exchange information with each other, resulting in significant drawbacks for ANNs. This limitation arises from the absence of a neural module concept. In this paper, we argue that current connectivity approaches fail to adequately capture the essence of neural networks. Since the nodes at the same level of a tree cannot establish connections with each other, the transfer of information between these neural units is hampered, leading to substantial defects in ANNs. We argue that the nodes in the same level should form a neural module and be interconnected. As a result, we introduce a method to build a bidirectional complete graph for nodes at the same level of an ANN. By yoking the nodes of the same level together, we create neural modules. Considering all the nodes at the same level thus allows us to construct bidirectional complete graphs in ANNs, which yields remarkable improvements. We refer to our model as Yoked Neural Network, YNN for brevity. It is important to note that if all the edge weights in the bidirectional complete graph become vestigial and approach zero, our YNN reduces to a traditional tree structure.
In this paper, we analyze the structural bias of existing ANN structures. To mimic neural networks more accurately, our method efficiently eliminates this structural bias. In our model, nodes not only aggregate and transform features but also determine the information flow. We achieve this by assigning learnable parameters to the edges, which reflect the magnitude of connections. This allows the learning process to resemble traditional learning methods, enhancing the overall performance of our model in imitating neural networks. Since nodes depend on the values of other nodes, designing a bidirectional complete graph for nodes at the same level is a challenging task. We address this challenge by introducing a synchronization method specifically tailored to learning the nodes at the same level. This synchronization method is crucial for ensuring the effective coordination and learning of these interconnected nodes.
Finally, to optimize the structure of YNN, we further attach an auxiliary sparsity constraint that influences the distribution of connectedness. This constraint promotes the learned structure to prioritize critical connections, enhancing the overall efficiency of the learning process.
The learning process is compatible with existing networks and exhibits adaptability to larger search spaces and diverse tasks, effectively eliminating the structural bias. We evaluate the effectiveness of our optimization method by conducting experiments on classical networks, demonstrating its competitiveness compared to existing networks. Additionally, to showcase the benefits of connectivity learning, we evaluate our method across various tasks and datasets. The quantitative results from these experiments indicate the superiority of the learned connectivity in terms of performance and effectiveness.
Considering that the synchronization algorithm for nodes at the same level may be computationally intensive, we also propose a method to design small neural modules to simplify our model. This approach significantly reduces the computational burden of our model while maintaining its effectiveness.
To sum up, our contributions in this paper are as follows:
1. We provide an analysis of the structural bias present in existing ANN networks.
2. We propose the YNN model, which yokes the nodes at the same level together to simulate real neural networks.
3. We develop a synchronization method to effectively learn and coordinate the nodes at the same level, introducing the concept of neural modules.
4. We design a regularization-based optimization method to optimize the structure of the YNN model.
5. We propose the design of small neural modules to significantly reduce the computational complexity of our model, improving its efficiency.
## 2 Related Works
We first review some related works on the design of neural network structures and relevant optimization methods. The design of neural networks has been studied widely, and from shallow to deep architectures, the shortcut connection plays an important role. Before ResNet, an early practice [17] also added linear layers connected from input to output to train multi-layer perceptrons. [7] was composed of a shortcut branch and a few deeper branches; the existence of shortcuts eases vanishing or exploding gradients [8, 9]. Recently, Yuan [29] explained from a topological perspective that shortcuts offer dense connections and benefit optimization. Many networks with dense connections also exist at the macro-structure level: in DenseNet [13], all preceding layers are connected; HRNet [18] benefits from dense high-to-low connections for fine representations; and densely connected networks promote the specific task of localization [19]. Differently, our YNN optimizes the desired network from a bidirectional complete graph in a differentiable way.
For the learning process, our method is consistent with DARTS [22], which is differentiable. Unlike sample-based optimization methods [29], the connectivity is learned simultaneously with the weights of the network using our modified version of gradient descent. Joint training can shift the transfer step from one task to another and yield a task-related YNN. This type of approach was also explored in [20, 21, 22, 23, 24], where weight-sharing is utilized across models at the cost of training. At the same time, for our YNN model, we also propose a synchronization method to obtain the node values in the same neural module.
In order to optimize the learned structure, a sparsity constraint can also be observed in other applications, e.g., path selection for a multi-branch network [27], pruning unimportant channels for fast inference [28], etc. In a recent work, Yuan used L1 regularization to optimize a topological structure. In this paper, we use both L1 and L2 regularization to search for a better structure.
Secondly, many deep learning works in recent years deal with geometric data [40], making neural networks cope better with structure. Graph neural networks (GNNs) are connectivity-driven models that address the need for geometric deep learning [30, 31]. In fact, a GNN adapts its structure to that of an input graph and captures complex dependencies of an underlying system through an iterative process of aggregating information. This makes it possible to predict the properties of specific nodes, of connections, or of the entire graph as a whole, and also to generalize to unseen graphs. Owing to these powerful features, GNNs have been utilized in many relevant applications, such as recommender systems [33], natural language processing [34], traffic speed prediction [35], critical data classification [36], computer vision [25, 26, 37], particle physics [38], resource allocation in computer networks [39], and so on.
In summary, the position of our work is illustrated in Fig 1:
## 3 Methodology
### Why YNN is Introduced?
An ANN embodies a type of information flow, and the traditional structure of an ANN is a tree, which is a natural way to describe this type of flow. We can then represent the architecture as \(G=(N,E)\), where \(N\) is the set of nodes and \(E\) denotes the set of edges. In this tree, each edge \(e_{ij}\in E\) performs a transformation operation parameterized by \(w_{ij}\), where \(ij\) stands for the topological ordering from node \(n_{i}\) to node \(n_{j}\) with \(n_{i},n_{j}\in N\). The importance of a connection is determined by the weight of \(e_{ij}\). The tree structure, as a natural way to represent such information flow, is the one most frequently used in ANNs.
A tree is a hierarchical nested structure in which a node can be influenced only by its precursor node, thereby causing a transformation of information between them. In a tree structure, the root node has no precursor node, while every other node has exactly one precursor node; a leaf node has no subsequent nodes, while any other node can have one or more subsequent nodes. In addition, the tree structure in mathematical statistics can represent hierarchical relationships, and it has many applications, for example indicating subordinating relationships.

Figure 1: The position of our work relative to existing approaches.
In recent years, some researchers have attempted to generalize this structure. In those works, all nodes except the root are allowed to have multiple precursor nodes, i.e., the hierarchical information flow forms a directed acyclic graph (DAG).
However, a tree or a DAG is still a hierarchical nested structure in which a node can be influenced only by its precursor nodes, which makes the transformation of information quite inadequate. Moreover, we find that this structure is far inferior in strength to real neural networks, whose connectivity is far more complex than a tree or DAG structure, as shown in Fig 2. In fact, a tree or DAG structure is used mainly because of its good mathematical properties, which allow backward propagation to be applied conveniently.
In this paper, we represent the neural network as a bidirectional complete graph over the nodes of the same level, which makes the description of the NN much richer than in the traditional ANN. Further, the connections between nodes are represented as directed edges, which determine the flow of information between the connected nodes. We consider that any two nodes \(n_{i}\) and \(n_{j}\) of the same level belong to an information clique if there exists a path between them. Compared with the traditional tree structure, we yoke the nodes of the same level to form a bidirectional complete graph. We call this structure YNN, and it is introduced in the next section.
Figure 2: Artificial Neural Network
### Structure of YNN
We are inspired by the neural network of human beings, as shown in Fig 3. In order to enhance the ability of the NN to express information, we design cliques for the nodes of each level of the neural network.
**Definition 1**: _A clique is a bidirectional complete graph in which, for any two nodes \(n_{i}\) and \(n_{j}\), there exists an edge from \(n_{i}\) to \(n_{j}\)._
According to this definition, the model in our framework is a bidirectional complete graph over the nodes of the same level. These nodes construct a clique, where every node is influenced not only by its precursor nodes but also by all other nodes of its level. The cliques act as information modules, which greatly enhance the representational power of the NN.
According to the definition of clique, a neural network can also be represented as a list of cliques. Further, we can also introduce a concept of neural module.
**Definition 2**: _A neural module is a collection of nodes that interact with each other._
According to the definition, a neural module can be part of a clique. In fact, if all the weights in a clique become zero, the YNN model reduces to the traditional tree structure.
In each clique of our model, the nodes are first calculated using their precursor nodes; the input level only distributes features, while the last level is the output level, which only generates the final output of the graph. Secondly, each node is also influenced by the nodes of the same level, so the node values within a level influence each other.

Figure 3: Comparison with biological nervous systems
During the traditional forward computation, each node aggregates inputs from connected preorder nodes. We divide such nodes into two parts: the first part contains the precursor nodes in the previous level, and the second part contains the nodes of the corresponding clique in the same level. Features are then transformed to obtain an output tensor, which is sent to the nodes in the next level through the output edges. The specific calculation method is introduced in the next section.
In summary, according to the above definitions, each YNN is constructed as follows. The network is represented as \(G=\{N,E\}\). For the nodes in the same level, bidirectional complete graphs are built as cliques \(C\). Each node \(n\) in \(C\) is first calculated using its precursor nodes, excluding the nodes in the clique; the result is called the meta value \(\hat{n}\) of the node. Then, we calculate its real value \(n\) using the nodes of the clique.
Based on the meta values and real values introduced above, the structure of YNN is shown in Fig 4.
The benefits of this structure can be formalized through the forward and backward processes described in the following sections.
Figure 4: The first picture shows the tree structure of a traditional ANN. The second picture shows our YNN model, which yokes together the nodes of the first level. For the clique of the first level, the node spin part is based on its meta value, which also represents the connection with the precursor nodes. As a result, we can decompose the spin node as shown in the third picture, which represents the meta value. The fourth and fifth pictures show the second level of our YNN model, in the same manner as the second and third pictures, respectively.
In the next section, we will explain how to calculate the values of the nodes by using the precursor node as well as the nodes in the clique.
### Forward Process
Suppose we have \(n\) input elements:
\[X=\{x_{1},x_{2},...,x_{n}\} \tag{1}\]
as the input data fed to the first level of the ANN. Then, the meta values \(\widehat{N}^{1}\) of the first level can be calculated as:
\[\widehat{N}^{1}=X*W^{01}, \tag{2}\]
where \(W^{01}\) is the fully connected weight matrix of the edges between the input nodes and level 1. Similarly, for the meta values, the full connection between the levels makes the information flow as:
\[\widehat{N}^{i}=f(N^{i-1})*W^{(i-1)i}, \tag{3}\]
where \(N^{i-1}=\{1,n_{1}^{i-1},n_{2}^{i-1},...\}\), \(n_{j}^{i-1}\) is the real value of the \(j\)th node in the \((i-1)\)th level, the leading 1 accounts for the bias between the \((i-1)\)th and \(i\)th levels, and \(f\) is the activation function.
Then, introducing the weight matrix \(W^{i}\) of the \(i\)th level and viewing the bidirectional complete graph of that level as a clique, we propose a method to calculate the real values \(N^{i}\) from the meta values \(\widehat{N}^{i}\) introduced in the previous section. Suppose there are \(m\) nodes in the clique; since each relies on the values of the other nodes, we need a synchronization method. We cast the problem as a system of multivariate equations involving the activation function \(f\). For the real value \(n_{j}^{i}\) in \(N^{i}\), given the meta value \(\widehat{n}_{j}^{i}\) in \(\widehat{N}^{i}\), the equations can be summarized as follows:
\[\begin{cases}w_{01}^{i}+\sum\limits_{j\neq 1}f(n_{j}^{i})*w_{j1}^{i}+f(\widehat{n}_{1}^{i})*w_{11}^{i}=n_{1}^{i}\\ w_{02}^{i}+\sum\limits_{j\neq 2}f(n_{j}^{i})*w_{j2}^{i}+f(\widehat{n}_{2}^{i})*w_{22}^{i}=n_{2}^{i}\\ \vdots\\ w_{0m}^{i}+\sum\limits_{j\neq m}f(n_{j}^{i})*w_{jm}^{i}+f(\widehat{n}_{m}^{i})*w_{mm}^{i}=n_{m}^{i}\end{cases}\]
In the above equations, \(w_{01}^{i}\), \(w_{02}^{i}\),..., \(w_{0m}^{i}\) are the biases of the real values of the nodes in the \(i\)th level. Note that, for a meta value, the bias is a value between the levels, while for a real value, the bias is a value within the individual level only.
Existing numerical methods can solve the above equations efficiently, and in real applications the efficiency can be further optimized. In fact, for very large systems, we also propose a method to reduce the calculation scale efficiently; this method is introduced in a later section.
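To make the synchronization step concrete, the following is a minimal sketch of a fixed-point solver for the above system. The choice of tanh as the activation \(f\), the convergence tolerance, and the initialization at the meta values are our own illustrative assumptions, since no particular numerical method is prescribed here; convergence is only guaranteed when the map is contractive, e.g., for sufficiently small clique weights.

```python
import numpy as np

def solve_clique(n_hat, W, b, f=np.tanh, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the clique equations
    n_j = b_j + sum_{k != j} f(n_k) * W[k, j] + f(n_hat_j) * W[j, j].

    n_hat : (m,) meta values of the clique's nodes
    W     : (m, m) clique weight matrix, W[k, j] = weight from node k to node j
    b     : (m,) per-node biases w_{0j} within the level
    """
    n = n_hat.copy()            # initialize the real values at the meta values
    diag = np.diag(W)           # self-weights couple each meta value to its real value
    for _ in range(max_iter):
        fn = f(n)
        # fn @ W sums f(n_k) * W[k, j] over all k; swap the k = j term
        # for the meta-value term f(n_hat_j) * W[j, j]
        n_new = b + fn @ W - fn * diag + f(n_hat) * diag
        if np.max(np.abs(n_new - n)) < tol:
            break
        n = n_new
    return n
```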
### Backward Process
In this section, we introduce the backward process of our model. Firstly, let the gradient of the loss of the model with respect to the output be the gradient of the meta values of the last level, denoted \(d(\widehat{N}^{n})\). We calculate the node gradient for the real values of the \(i\)th level as:
\[d(N^{i})=d(\widehat{N}^{i+1})*(W^{i(i+1)})^{T}. \tag{4}\]
The meta values \(\widehat{N}^{i}\) are calculated from the real values \(N^{i-1}\) according to the system of equations. Then, to get the value of \(d(\widehat{N}^{i})\), we need to treat the nodes as the variables in the system of equations. For convenience, we introduce the operator \(C^{i}\) to represent the derivatives for the \(i\)th level, which can be expressed as:
\[C^{i}=W^{i}-diag(W^{i})+eye(W^{i})\, \tag{5}\]
where \(W^{i}\) is the adjacency matrix of the clique in the \(i\)th level, \(diag(W^{i})\) is the diagonal matrix of \(W^{i}\), \(eye(W^{i})\) is the identity matrix of the same size as \(W^{i}\), and the operator \(C^{i}\) represents the transfer from the other nodes to each node in the clique according to the system of equations. In the clique, the identity matrix accounts for the node itself.
According to the system of equations, the meta value of a node is connected to its real value through the diagonal matrix of \(W^{i}\). Note that each node passes through the activation function \(f\), whose derivative we denote by \(f^{\prime}\). As a result, after the transfer through the bidirectional complete graph, the gradient of the meta values of the nodes becomes:

\[d(\widehat{N}^{i})=d(N^{i})*C^{i}*f^{\prime}(N^{i})*diag(W^{i})*f^{\prime}(\widehat{N}^{i}). \tag{6}\]
Now, we have got the gradient of the meta value as well as that of the real value of each node. Finally, the gradient weight of the fully connected level \(W^{i(i+1)}\) between the \(i\)th and \((i+1)\)th level can be expressed as:
\[d(W^{i(i+1)})^{T}=d(\widehat{N}^{i+1})^{T}*f(N^{i}). \tag{7}\]
Now, we need to calculate the gradient of \(W^{i}\) for the clique in the \(i\)th level. According to the system of equations, we need to consider the weights of all the connected nodes. For any \(j\)th node in the clique, its connected weights form the \(j\)th column of the matrix. Similarly, for convenience, we introduce the following operator:
\[D^{i}_{j}=(1,f(n^{i}_{1}),...,f(\widehat{n}^{i}_{j}),...,f(n^{i}_{m}))\, \tag{8}\]
which can be read off from the system of equations. Then, given the gradient of the real value of the \(j\)th node \(n^{i}_{j}\) in \(N^{i}\), the corresponding gradient of the clique is:
\[d(W^{i}(:,j))=d(n^{i}_{j})*(D^{i}_{j})^{T}. \tag{9}\]
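To illustrate Equations (8) and (9), the sketch below assembles the clique gradient column by column. Packing the bias gradients into an extra row and using tanh as the activation are our own illustrative choices, not part of the original formulation.

```python
import numpy as np

def clique_weight_grad(d_n, n, n_hat, f=np.tanh):
    """Compute d(W^i(:, j)) = d(n_j^i) * (D_j^i)^T for every column j, with
    D_j^i = (1, f(n_1^i), ..., f(n_hat_j^i), ..., f(n_m^i))."""
    m = len(n)
    dW = np.zeros((m + 1, m))        # row 0 collects the bias gradients d(w_{0j}^i)
    for j in range(m):
        D_j = np.concatenate(([1.0], f(n)))
        D_j[1 + j] = f(n_hat[j])     # node j's own entry uses its meta value
        dW[:, j] = d_n[j] * D_j
    return dW
```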
### YNN Structure Optimization
Consider that for the nodes in the same level, we construct a clique as stated before. Here, we regard the clique simply as a universal set of all possible connections. In our work, we can optimize the YNN structure to let our model focus only on the important connections. The optimization uses L1 or L2 regularization as usual, parameterized by \(L_{1}\) and \(L_{2}\), respectively.
For the \(j\)th node in the \(i\)th level, the process can be formulated as follows:
\[opt\_n_{j}^{i}=n_{j}^{i}+L_{1}*\sum_{k}abs(w^{i}(k,j))+L_{2}*\sum_{k}(w^{i}(k, j))^{2} \tag{10}\]
Through the L1 and L2 regularization, the L1 term makes our YNN focus on the important connections in the clique, while the L2 term keeps the weights in the clique low, so that our model generalizes better.
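In training code, this constraint amounts to adding a penalty over the clique weight matrices to the task loss. The following PyTorch-style sketch illustrates this; the attribute name `clique_weights` and the coefficient values are hypothetical.

```python
import torch

def clique_regularizer(clique_weights, l1=1e-4, l2=1e-4):
    """Auxiliary sparsity penalty over all clique weight matrices."""
    reg = torch.zeros(())
    for W in clique_weights:          # one (m, m) matrix per level
        reg = reg + l1 * W.abs().sum() + l2 * (W ** 2).sum()
    return reg

# hypothetical usage during training:
# loss = task_loss + clique_regularizer(model.clique_weights)
```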
### Structure of Neural Module
According to the forward process of YNN stated earlier, the model solves a system of equations. A large number of nodes in the same level would bring too much computational burden for solving such a large system. In fact, we can first optimize the graph of any level by L1 and L2 regularization and then apply a minimum cut technique, e.g., the NE algorithm, to reduce the computation significantly. For each cut subgraph, we design a neural module structure according to Definition 2 to simplify the system of equations, as shown in Fig 5. Since the nodes are then influenced only by the nodes in their subgraph, the system of equations reduces to the number of nodes in the cut subgraph, which is formulated as a neural module in the sense of Definition 2.
In summary, the structure of the neural module can be constructed as follows:
1. Construct the clique for the nodes in the same level;
2. Optimize the clique by using the L1 and L2 regularization;
3. Cut the optimized graph using the NE algorithm;
4. Construct system of equations by taking each cut subgraph as a neural module.
As explained before, in this way the system of equations reduces to \(N_{s}\)-ary equations, where \(N_{s}\) is the number of nodes in each neural module. Of course, if the computational cost is acceptable, taking the clique itself as the neural module is most accurate, since the clique considers all connections in the level. A sketch of this module-construction procedure is given below.
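The four steps above can be sketched as follows. Since the NE algorithm is not described further here, the sketch substitutes Kernighan-Lin bisection from networkx as a generic minimum-cut-style partitioner, and the pruning threshold is an illustrative assumption.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def build_neural_modules(W, threshold=1e-3):
    """Partition one level's optimized clique weight matrix W (m x m) into modules."""
    m = W.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(m))
    for a in range(m):
        for b in range(a + 1, m):
            w = abs(W[a, b]) + abs(W[b, a])   # symmetrize the directed clique weights
            if w > threshold:                 # keep only the critical connections
                G.add_edge(a, b, weight=w)
    modules = []
    for comp in nx.connected_components(G):
        sub = G.subgraph(comp)
        if sub.number_of_nodes() > 2:         # bisect larger components along a weak cut
            modules.extend(kernighan_lin_bisection(sub, weight="weight"))
        else:
            modules.append(set(comp))
    return modules                            # each node set forms one neural module
```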
## 4 Experiments
### Optimization of Classical ANN
In this section, we present experiments with our method. We compare it with the traditional NN method, the stacked auto-encoder (SAE), as well as the generalized traditional NN proposed in recent years, which takes a topological perspective and treats the NN as a DAG.
We show our results on three real datasets. The first dataset contains the codon usage frequencies in the genomic coding DNA of a large sample of diverse organisms obtained from different taxa, tabulated in the CUTG database; here, we further manually curated and harmonized the existing entries by re-classifying the bacteria (bct) class of CUTG into archaea (arc), plasmids (plm), and bacteria proper (keeping the original label 'bct'). The second dataset contains optically recognized handwritten digits made available by NIST, obtained using preprocessing programs to extract normalized bitmaps of handwritten digits from preprinted forms filled in by a total of 43 people. The third dataset is Connect-4, which contains all the legal 8-ply positions of the game of Connect-4 in which neither player has won yet and the next move is not forced; the outcome class is the game-theoretical value for the first player.
Here, we compare our method with the other methods for a variety of node counts, so that we can examine the effectiveness of our model at different levels of complexity of the traditional structure. The same configurations are used for the NN, SAE, and DAG models, and we compare all models in terms of percentage error. The obtained results are organized in the following tables, where we can see that our YNN model achieves much better results in most cases.
In fact, for all the data sets and a variety of nodes in the same level, our YNN
Figure 5: If the clique is too large, we would have too much computational burden to solve the system of equations. Then, we can first optimize the structure and learn the importance of the connection, followed by the application of the minimum cut method to formulate the structure of the neural module. In this way, the calculation for the system of equations can be limited to each subgraph.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{Optical Digits Data} \\ \cline{2-5} & 35 Nodes & 40 Nodes & 45 Nodes & 50 Nodes \\ \hline NN & 0.2565\(\pm\)0.069 & 0.2181\(\pm\)0.445 & 0.1536\(\pm\)0.0323 & 0.259\(\pm\)0.0937 \\ SAE & 0.2871\(\pm\)0.04 & 0.3603\(\pm\)0.0086 & 0.4186\(\pm\)0.0419 & 0.3375\(\pm\)0.0376 \\ DAG & 0.2446\(\pm\)0.0409 & 0.2721\(\pm\)0.534 & 0.3475\(\pm\)0.0208 & 0.2585\(\pm\)0.0654 \\ YNN & **0.1433\(\pm\)0.0159** & 0.1725\(\pm\)0.0451 & 0.1552\(\pm\)0.0077 & 0.256\(\pm\)0.0001 \\ YNN\&L1 & 0.1633\(\pm\)0.0153 & 0.18\(\pm\)0.0247 & 0.1594\(\pm\)0.0225 & **0.1494\(\pm\)0.032** \\ YNN\&L2 & 0.1586\(\pm\)0.015 & **0.1614\(\pm\)0.0189** & **0.1483\(\pm\)0.142** & 0.1881\(\pm\)0.0001 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Optical Recognition of Handwritten Digits
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{Connect-4 Data} \\ \cline{2-5} & 35 Nodes & 40 Nodes & 45 Nodes & 50 Nodes \\ \hline NN & 0.2789\(\pm\)0.0075 & 0.285\(\pm\)0.012 & 0.2875\(\pm\)0.0134 & 0.3073\(\pm\)0.0259 \\ SAE & 0.3912\(\pm\)0.0416 & 0.331\(\pm\)0.0044 & 0.3346\(\pm\)0.0096 & 0.3366\(\pm\)0.0099 \\ DAG & 0.3519\(\pm\)0.05 & 0.2828\(\pm\)0.0053 & 0.2989\(\pm\)0.0081 & 0.3134\(\pm\)0.0382 \\ YNN & **0.2751\(\pm\)0.0174** & **0.2489\(\pm\)0.0004** & **0.2582\(\pm\)0.0045** & **0.2475\(\pm\)0.0068** \\ YNN\&L1 & 0.2758\(\pm\)0.026 & 0.2513\(\pm\)0.0017 & 0.2635\(\pm\)0.0029 & 0.2625\(\pm\)0.0093 \\ YNN\&L2 & 0.2826\(\pm\)0.0366 & 0.2495\(\pm\)0.002 & 0.2622\(\pm\)0.0081 & 0.2485\(\pm\)0.0122 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Connect-4 Dataset
model tends to obtain better results once the nodes are yoked together. The effect of our YNN can be further improved by optimizing the structure as explained before. The first four lines of each table show results that are not optimized by L1 or L2 regularization; we can see that our YNN structure is more efficient than the traditional structure even without regularization.
### Optimization of Structure
In this section, we optimize the structure of our model. Since every structure is a subgraph of a fully connected graph, the initial clique serves as the search space for our model. The model is optimized by L1 and L2 regularization, which are effective tools for optimizing structures. The obtained results show that such optimization yields a better effect.
Here, we study the structure of the model for different L1 and L2 parameters, as shown in Fig 6. In the figure, the green line represents the results of YNN without optimization, while the blue and red lines are the results for a variety of L1 and L2 parameters, respectively. We can see that such optimization is effective for our YNN in most cases.
We also show the pixel map of the clique's weight matrix. In the figure, the black-and-white image represents the matrix of the fully connected graph for the nodes in the same level; the darker a pixel, the lower the weight of the corresponding edge.
Along with the decline of the error, we can always find a better structure than the plain bidirectional complete graph used in our YNN. Besides L1 regularization, L2 regularization is also an effective tool for optimizing the structure of our model. A larger L2 regularization lowers the weights of all the edges and thus yields more black pixels. From the decline of the error, we can see that L2 regularization is also effective for optimizing our YNN structure.
Figure 6: Regularization results based on L1 and L2 for the Codon, optically recognized handwritten digits, and Connect-4 datasets.
## 5 Conclusion
In this paper, we propose the YNN structure, which builds a bidirectional complete graph over the nodes in the same level of an ANN, so as to improve the effectiveness of the ANN by promoting significant information transfer. In our work, we analyze the structural bias of ANNs, and our method eliminates this bias efficiently. By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner. For our model, we propose a synchronization method to simultaneously calculate the values of the nodes in the same level. We further impose an auxiliary sparsity constraint on the distribution of connectedness through L1 and L2 regularization, which promotes the learned structure to focus on critical connections. We also propose a small neural module structure that efficiently reduces the computational burden of our model. The obtained quantitative experimental results demonstrate that the learned YNN structure is superior to traditional structures.
|
2305.05368 | Deep Graph Neural Networks via Flexible Subgraph Aggregation | Graph neural networks (GNNs), a type of neural network that can learn from
graph-structured data and learn the representation of nodes through aggregating
neighborhood information, have shown superior performance in various downstream
tasks. However, it is known that the performance of GNNs degrades gradually as
the number of layers increases. In this paper, we evaluate the expressive power
of GNNs from the perspective of subgraph aggregation. We reveal the potential
cause of performance degradation for traditional deep GNNs, i.e., aggregated
subgraph overlap, and we theoretically illustrate the fact that previous
residual-based GNNs exploit the aggregation results of 1 to $k$ hop subgraphs
to improve the effectiveness. Further, we find that the utilization of
different subgraphs by previous models is often inflexible. Based on this, we
propose a sampling-based node-level residual module (SNR) that can achieve a
more flexible utilization of different hops of subgraph aggregation by
introducing node-level parameters sampled from a learnable distribution.
Extensive experiments show that the performance of GNNs with our proposed SNR
module outperforms a comprehensive set of baselines. | Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Di Jin, Carl Yang, Rui Zhang | 2023-05-09T12:03:42Z | http://arxiv.org/abs/2305.05368v2 | # Deep Graph Neural Networks via Flexible Subgraph Aggregation
###### Abstract
Graph neural networks (GNNs), a type of neural network that can learn from graph-structured data and learn the representation of nodes by aggregating neighborhood information, have shown superior performance in various downstream tasks. However, it is known that the performance of GNNs degrades gradually as the number of layers increases. In this paper, we evaluate the expressive power of GNNs from the perspective of subgraph aggregation. We reveal a potential cause of performance degradation for traditional deep GNNs, i.e., aggregated subgraph overlap, and we theoretically illustrate the fact that previous residual-based GNNs exploit the aggregation results of 1- to \(k\)-hop subgraphs to improve their effectiveness. Further, we find that the utilization of different subgraphs by previous models is often inflexible. Based on this, we propose a sampling-based node-level residual module (SNR) that can achieve more flexible utilization of different hops of subgraph aggregation by introducing node-level parameters sampled from a learnable distribution. Extensive experiments show that GNNs with our proposed SNR module outperform a comprehensive set of baselines.
## 1 Introduction
GNNs have emerged in recent years as the most powerful model for processing graph-structured data and have performed very well in various fields, such as social networks (Perozzi et al. (2014)), recommender systems (Fan et al. (2019)), and drug discovery (Duvenaud et al. (2015)). Through the message-passing mechanism that propagates and aggregates representations of neighboring nodes, GNNs provide a general framework for learning information on graph structure.
Despite great success, according to previous studies (Li et al. (2018); Xu et al. (2018)), GNNs show significant performance degradation as the number of layers increases, which prevents GNNs from taking full advantage of the multi-hop neighbor structure of nodes to obtain better node representations.
The main reason for this situation is now widely believed to be oversmoothing (Li et al. (2018); Oono and Suzuki (2020); Xu et al. (2018); Klicpera et al. (2019)). However, since ResNet (He et al. (2016)) uses residual connection to solve a similar problem in computer vision and obtains good results, several new works have been inspired to apply the idea of residual connection to GNNs to alleviate oversmoothing and thus improve the expressive power. For example, JKNet (Xu et al. (2018)) learns node representations by aggregating the outputs of all previous layers at the last layer. DenseGCN (Li et al. (2019)) concatenates the results of the current layer and all previous layers as the node representations of this layer. APPNP (Klicpera et al. (2019)) uses the initial
residual connection to retain the initial feature information with probability \(\alpha\), and utilizes the feature information aggregated at the current layer with probability \(1-\alpha\).
In this paper, we evaluate the expressive power of GNNs from the perspective of subgraph aggregation. Based on this perspective, we show that the single high-hop subgraph aggregation of message-passing GNNs is limited by the fact that high-hop subgraphs are prone to information overlap, which makes the node representations obtained from k-hop subgraph aggregation indistinguishable, i.e., oversmoothing occurs.
Based on this perspective, we conduct a theoretical analysis of previous residual-based models and find that these methods in fact utilize multiple subgraph aggregations, in different ways, to improve the expressiveness of the model. However, most methods utilize subgraph information with fixed coefficients, which assumes that the information from the subgraph of the same hop is equally important for different nodes; this makes the model's exploitation of subgraph information inflexible and thus limits further improvement of its expressive power. Some existing methods try to overcome this inflexibility, but they introduce more parameters and thereby cause overfitting, which in turn harms model performance, as demonstrated by our experiments.
Considering these limitations, we propose a **S**ampling-based **N**ode-level **R**esidual module (**SNR**). Specifically, we adopt a finer-grained, node-level residual module to achieve a more flexible exploitation of subgraph aggregation, as proved by our theoretical analysis. On the other hand, to avoid overfitting due to the introduction of more parameters, instead of learning the specific coefficients directly, we first learn an associated distribution through the reparameterization trick and obtain the specific residual coefficients by sampling. Experiments verify that this sampling-based approach can significantly alleviate overfitting.
**Our Contributions.** (1) We reinterpret the phenomenon that the effectiveness of traditional message-passing GNNs decreases as the number of layers increases from the perspective of _k_-hop subgraph overlap. (2) Based on the idea of subgraph aggregation, we theoretically analyze the previous residual-based methods and find that they actually utilize multiple hop subgraph aggregation in different ways to improve the expressive power of the model, and we point out the limitations of inflexibility and overfitting in previous residual-based methods. (3) We propose a sampling-based node-level residual module that allows more flexible exploitation of different _k_-hop subgraph aggregations while alleviating overfitting due to more parameters. (4) Extensive experiments show that GNNs with the proposed SNR module achieve better performance than other methods, as well as with higher training efficiency, on semi-supervised tasks as well as on tasks requiring deep GNNs.
## 2 Preliminaries
### Notations
A connected undirected graph is represented by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) is the set of \(N\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges. The node features are given in a matrix \(\mathbf{H}\in\mathbb{R}^{N\times d}\), where \(d\) is the feature dimension. Let \(\mathbf{A}\in\{0,1\}^{N\times N}\) denote the adjacency matrix, with \(\mathbf{A}_{ij}=1\) only if an edge exists between nodes \(v_{i}\) and \(v_{j}\). \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is the diagonal degree matrix whose element \(d_{i}\) counts the number of edges connected to node \(v_{i}\). \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) is the adjacency matrix with self-loops and \(\tilde{\mathbf{D}}=\mathbf{D}+\mathbf{I}\).
### Graph Neural Networks
A GNNs layer updates the representation of each node via aggregating itself and its neighbors' representations. Specifically, a layer's output \(\mathbf{H}^{\prime}\) consists of new representations \(\mathbf{h}^{\prime}\) of each node computed as:
\[\mathbf{h}^{\prime}_{i}=\mathbf{f}_{\theta}\left(\mathbf{h}_{i},\ \mathbf{AGGREGATE}\left(\{\mathbf{h}_{j}\mid v_{j}\in\mathcal{V},(v_{i},v_{j} )\in\mathcal{E}\}\right)\right)\]
where \(\mathbf{h}^{\prime}_{i}\) indicates the new representation of node \(v_{i}\) and \(\mathbf{f}_{\theta}\) denotes the update function. The key to the performance of different GNNs lies in the design of the \(\mathbf{f}_{\theta}\) and **AGGREGATE** functions. The Graph Convolutional Network (GCN) (Kipf and Welling (2017)) is a classical message-passing GNN that follows the layer-wise propagation rule:
\[\mathbf{H}_{k+1}=\sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{ A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}_{k}\mathbf{W}_{k}\right) \tag{1}\]
where \(\mathbf{H}_{k}\) is the feature matrix of the \(k^{\text{th}}\) layer, \(\mathbf{W}_{k}\) is a layer-specific learnable weight matrix, \(\sigma(\cdot)\) denotes an activation function.
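For reference, Equation 1 corresponds to the following dense sketch (a minimal illustration with the symmetric normalization written out explicitly; practical implementations use sparse operations):

```python
import torch

def gcn_layer(A, H, W, act=torch.relu):
    """One GCN layer: H_{k+1} = act(D~^{-1/2} (A + I) D~^{-1/2} H_k W_k)."""
    A_tilde = A + torch.eye(A.shape[0])
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)
    N = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]  # normalized adjacency
    return act(N @ H @ W)
```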
### Residual Connection
Several works have used residual connections to solve the problem of oversmoothing. Common residual connections for GNNs are summarized in Table 1; details are explained in Appendix A.
## 3 Motivation
Message-passing GNNs recursively update the features of each node by aggregating information from its neighbors, allowing them to capture both the graph topology and node features. For a message-passing GNN without a residual structure, the information domain of each node after \(k\) layers of aggregation is its \(k\)-hop subgraph. Figure 1 shows that, after two aggregation operations, nodes on layer 2 obtain information from their 1-hop and 2-hop neighbors on layer 0. According to the definition of the \(k\)-hop subgraph, the information of the node on layer 2 in the figure is composed of the information of all reachable nodes shown on layer 0. We can consider the result of a \(k\)-layer residual-free message-passing GNN to be equivalent to \(k\)-fold aggregation of each node over its \(k\)-hop subgraph, which we call \(k\)-hop subgraph aggregation.
It is evident that as the number of aggregation operations increases, the reachable information range of a node expands rapidly, that is, the size of its \(k\)-hop subgraph grows exponentially as \(k\) increases, leading to a significant increase in the overlap between the \(k\)-hop subgraphs of different nodes. As a result, the aggregation result of different nodes on their respective \(k\)-hop subgraphs becomes indistinguishable. Furthermore, in a specific graph dataset, nodes with higher degrees tend to have a larger range of \(k\)-hop subgraphs compared to nodes with lower degrees. As a result, the subgraphs are more likely to overlap between nodes with higher degrees, making their aggregation results more likely to become similar and indistinguishable.
To verify this point, we conduct experiments on three graph datasets, Cora, Citeseer, and Pubmed. First, we group the nodes according to their degrees by assigning nodes with degrees in the range of \([2^{i},2^{i+1})\) to the \(i\)-th group. Subsequently, we perform aggregation with different layers of GCN and GAT, then calculate the degree of smoothing of the node representations within each group separately. We use the metric proposed in (Jin et al. (2022)) to measure the smoothness of the node representations within each group, namely **SMV**, which calculates the average of the distances between the nodes within the group:
\[\mathbf{SMV}(\mathbf{X})=\frac{1}{\mathbf{N}(\mathbf{N}-\mathbf{1})}\sum_{i \neq j}\mathbf{D}\left(\mathbf{X}_{i,:},\mathbf{X}_{j,:}\right) \tag{2}\]
where \(\mathbf{D}(\cdot,\cdot)\) denotes the normalized Euclidean distance between two vectors:
\[\mathbf{D}(\mathbf{x},\mathbf{y})=\frac{1}{2}\left\|\frac{\mathbf{x}}{\| \mathbf{x}\|}-\frac{\mathbf{y}}{\|\mathbf{y}\|}\right\|_{2} \tag{3}\]
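A direct implementation of the SMV metric reads as follows; this dense pairwise version is a minimal sketch and assumes the representations fit in memory.

```python
import numpy as np

def smv(X, eps=1e-12):
    """Mean pairwise normalized Euclidean distance between node representations
    (rows of X); smaller values indicate smoother, more similar representations."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)  # unit-normalize rows
    diff = Xn[:, None, :] - Xn[None, :, :]
    D = 0.5 * np.linalg.norm(diff, axis=-1)                    # pairwise D(x, y)
    N = X.shape[0]
    return D.sum() / (N * (N - 1))                             # diagonal terms are zero
```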
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Residual Connection** & **Corresponding GCN** & **Formula** \\ \hline Res & ResGCN & \(\mathbf{H}_{k}=\mathbf{H}_{k-1}+\sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}_{k-1}\mathbf{W} _{k-1}\right)\) \\ \hline InitialRes & APPNP & \(\mathbf{H}_{k}=(1-\alpha)\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}_{k-1}+\alpha\mathbf{H}\) \\ \hline Dense & DenseGCN & \(\mathbf{H}_{k}=\mathbf{AGG}_{dense}(\mathbf{H},\mathbf{H}_{1},\ldots, \mathbf{H}_{k-1})\) \\ \hline JK & JKNet & \(\mathbf{H}_{output}=\mathbf{AGG}_{jk}(\mathbf{H}_{1},\ldots,\mathbf{H}_{k-1})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Common residual connection for GNNs.
Figure 1: \(k\)-hop subgraph.
A smaller value of **SMV** indicates a greater similarity in node representations.
We select the most representative result, illustrated in Figure 2, which shows the result of GAT on Pubmed; the remaining results are shown in Appendix B. It can be seen that, across models of different depths, groups of nodes with higher degree tend to exhibit higher similarity of the node representations within the group. This finding supports our claim.

After verifying experimentally that subgraph overlap leads to oversmoothing, a natural idea is to alleviate the large overlap of a single subgraph by utilizing aggregations over subgraphs of multiple hops, thereby alleviating oversmoothing. In the following section, we demonstrate that previous \(k\)-layer residual-based GNNs are in fact different forms of integration of 1- to \(k\)-hop subgraph aggregations.
### Revisiting Previous Models in a New Perspective
In the rest of this paper, we will uniformly take GCN, a classical residual-free message-passing GNN, as an example. We assume that \(\mathbf{H}\) is non-negative, so the ELU activation can be ignored; the weight matrices are likewise ignored for simplicity. Combined with the formula of GCN given in Equation 1, we can write the result of \(k\)-hop subgraph aggregation as \(\mathbf{N}^{k}\mathbf{H}\), where \(\mathbf{N}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\). To show more intuitively how the different \(k\)-layer residual-based models utilize \(\mathbf{N}^{j}\mathbf{H}\), \(j=0,1,\cdots,k\), we derive the general term formulas of their final outputs; the results are shown in Table 2. Details of the derivation are given in Appendix C.
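The ResGCN row of Table 2 can be checked numerically in this linearized setting (no weights or nonlinearity): iterating \(\mathbf{H}_{k}=\mathbf{H}_{k-1}+\mathbf{N}\mathbf{H}_{k-1}\) must reproduce the binomial expansion. A minimal sanity check on a random graph:

```python
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, (6, 6)), 1); A = A + A.T       # random undirected graph
D_inv_sqrt = np.diag((A + np.eye(6)).sum(1) ** -0.5)
N = D_inv_sqrt @ (A + np.eye(6)) @ D_inv_sqrt                 # normalized adjacency

H = rng.standard_normal((6, 3))
k = 4
H_iter = H.copy()
for _ in range(k):
    H_iter = H_iter + N @ H_iter                              # linearized ResGCN update
H_formula = sum(comb(k, j) * np.linalg.matrix_power(N, j) @ H for j in range(k + 1))
assert np.allclose(H_iter, H_formula)
```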
From the formulas in the table, we can see that, in comparison to plain message-passing GNNs, residual-based GNN variants can utilize multiple \(k\)-hop subgraphs. There are two ways to exploit them: **(1)** summation, as in ResGCN and APPNP, where the aggregations over subgraphs of different hops are combined by linear summation; **(2)** aggregation functions, as in DenseGCN and JKNet, where different hop subgraph aggregations are exploited directly and explicitly through methods such as concatenation.
However, the first type of methods all employ a fixed, layer-level coefficient for the linear summation of subgraph aggregations, which assumes that information from the subgraph of the same hop is equally important for different nodes. This limits the expressive power of GNNs and reveals the need to design a finer-grained, node-level residual module that can utilize information from different \(k\)-hop subgraphs more flexibly. The second type of methods can achieve finer-grained subgraph aggregation, but experiments find that their performance does not improve despite the finer-grained structure, mainly because the introduction of more parameters leads to overfitting. In general, neither type of method achieves a more effective improvement in the expressive power of GNNs.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Model Name** & **General Term Formula** \\ \hline ResGCN & \(\mathbf{H}_{k}=\sum_{j=0}^{k}\mathbf{C}_{k}^{j}\mathbf{N}^{j}\mathbf{H}\) \\ \hline APPNP & \(\mathbf{H}_{k}=\left(1-\alpha\right)^{k}\mathbf{N}^{k}\mathbf{H}+\alpha\sum\limits_{j=0}^{k-1}\left(1-\alpha\right)^{j}\mathbf{N}^{j}\mathbf{H}\) \\ \hline JKNet & \(\mathbf{H}_{k}=\mathbf{AGG}_{jk}(\mathbf{NH},\ldots,\mathbf{N}^{k-1}\mathbf{H})\) \\ \hline DenseGCN & — \\ \hline \hline \end{tabular}
\end{table}
Table 2: General term formulas of residual models.
Figure 2: SMV for node groups of different degrees.
## 4 The Proposed Method
In order to overcome the two limitations encountered by previous residual-based models, inflexibility and overfitting, we propose a node-level, more flexible, generic residual module that at the same time alleviates the overfitting caused by additional parameters. Based on this, we propose the **S**ampling-based **N**ode-level **R**esidual module (**SNR**). We define the SNR module as:
\[\mathbf{h}_{k}^{(i)}=p_{k}^{(i)}\,\mathbf{h}_{k-1}^{(i)}+\left(1-p_{k}^{(i)}\right)\hat{\mathbf{h}}_{k}^{(i)},\qquad p_{k}^{(i)}\sim\mathcal{N}\left(\mu_{k}^{(i)},\big{(}\sigma_{k}^{(i)}\big{)}^{2}\right)\]

where \(\hat{\mathbf{h}}_{k}^{(i)}\) denotes the aggregated representation of node \(v_{i}\) at layer \(k\), \(\mathbf{h}_{k-1}^{(i)}\) is its representation at the previous layer, and the node-level residual coefficient \(p_{k}^{(i)}\) is sampled, via the reparameterization trick, from a learnable Gaussian distribution with mean \(\mu_{k}^{(i)}\) and standard deviation \(\sigma_{k}^{(i)}\).
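A minimal PyTorch sketch of such a module is shown below. The per-node mean and log-standard-deviation parameters follow the description above, while the initialization values and the use of the mean at evaluation time are our own illustrative choices rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class SNRModule(nn.Module):
    """Sampling-based node-level residual mixing for one layer."""
    def __init__(self, num_nodes):
        super().__init__()
        self.mu = nn.Parameter(torch.ones(num_nodes, 1))               # per-node mean
        self.log_sigma = nn.Parameter(torch.full((num_nodes, 1), -3.0))

    def forward(self, h_prev, h_agg):
        if self.training:
            eps = torch.randn_like(self.mu)
            p = self.mu + self.log_sigma.exp() * eps   # reparameterization trick
        else:
            p = self.mu                                # use the mean at test time
        return p * h_prev + (1.0 - p) * h_agg          # node-level residual combination
```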
It is worth noting that GCNII and SNR-GCN share a similar architecture, so both can be viewed approximately as more refined APPNP-style models. However, when facing the problem of overfitting due to additional parameters, GCNII mitigates the issue by adding an identity mapping. Subsequent experimental results show that SNR-GCN's approach of learning a distribution and then sampling from it is more effective at alleviating overfitting.
### Complexity Analysis
Taking vanilla GCN as an example, we analyze the additional model and time complexity introduced by SNR. We assume that the number of nodes in the graph is \(n\) and that the hidden dimension is \(d\).
**Model Complexity.** As described in Section 4, at each layer the SNR module learns a mean and standard deviation of the corresponding distribution for each node, so the complexity can be calculated as \(O(n)\), and thus the additional complexity of the \(k\)-layer model equipped with SNR is \(O(kn)\).
**Time Complexity.** The time complexity of a vanilla GCN layer mainly comes from the matrix multiplication of \(\mathbf{N}\) and \(\mathbf{H}\), hence its complexity is \(O(n^{2}d)\). And the main computational parts of a SNR module are the sampling of \(p_{k}^{(i)}\), scalar multiplication and matrix addition, which correspond to a complexity of \(O(n)\), \(O(nd)\), and \(O(nd)\), respectively. Thus the time complexity of the SNR module is \(O(nd)\) and the time complexity of a GCN layer equipped the SNR module is \(O(n^{2}d+nd)\). Therefore, the introduction of the SNR module does not significantly affect the computational efficiency.
## 5 Experiment
In this section, we aim to experimentally evaluate the effectiveness of SNR on real datasets. To achieve this, we will compare the performance of SNR with other methods and answer the following research questions. **Q1:** How effective is SNR on classical tasks that prefer shallow models? **Q2:** Can SNR help overcome oversmoothing in GNNs and enable the training of deeper models? **Q3:** How effective is SNR on tasks that require deep GNNs? **Q4:** How efficient is the training of SNR?
### Experiment Setup
In our study, we conduct experiments on four tasks: semi-supervised node classification **(Q1)**, alleviating performance drop in deeper GNNs **(Q2)**, semi-supervised node classification with missing vectors **(Q3)**, and efficiency evaluation **(Q4)**.
**Datasets.** To assess the effectiveness of our proposed module, we use four datasets that are widely used in the field of GNNs: Cora, Citeseer, Pubmed (Sen et al. (2008)), and CoraFull (Bojchevski and Günnemann (2018)). In addition, we also use two webpage datasets collected from Wikipedia: Chameleon and Squirrel (Rozemberczki et al. (2021)). Details on the characteristics of these datasets and the specific data-splitting procedures used can be found in Appendix F.1.
**Models.** We consider two fundamental GNNs, GCN (Kipf and Welling (2017)) and GAT (Veličković et al. (2017)). For GCN, we test the performance of SNR-GCN and its residual variant models, including ResGCN (Li et al. (2019)), APPNP (Klicpera et al. (2019)), DenseGCN (Li et al. (2019)), GCNII (Chen et al. (2020)) and JKNet (Xu et al. (2018)). For GAT, we directly equip it with the following residual modules: Res, InitialRes, Dense, JK and SNR, and test the performance. Additionally, for the SSNC-MV task, we compare our proposed module with several classical oversmoothing mitigation techniques, including BatchNorm (Ioffe and Szegedy (2015)), PairNorm (Zhao and Akoglu (2019)), DGN (Zhou et al. (2020)), DeCorr (Jin et al. (2022)), DropEdge (Rong et al. (2019)) and other residual-based methods. Further details on these models and techniques can be found in the following sections.
**Implementations.** For all benchmark and variant models, the linear layers in the models are initialized with a standard normal distribution, and the convolutional layers are initialized with Xavier initialization. The Adam optimizer (Kingma and Ba (2015)) is used for all models. Further details on the specific parameter settings used can be found in Appendix F.2. All models and datasets used in this paper are implemented using the Deep Graph Library (DGL) (Wang et al. (2019)). All experiments are conducted on a server with 15 vCPU Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz, A40 with 48GB GPU memory, and 56GB main memory.
### Semi-supervised Node Classification
To validate the performance of SNR, we apply the module to two fundamental GNNs, GCN and GAT, test the accuracy according to the experimental setup described above, and compare it with four classic residual modules: DenseNet, ResNet, InitialResNet and JKNet. We vary the number of layers in the range \(\{1,2,3,\cdots,10\}\), running 10 times for each number of layers to obtain the mean accuracy along with the standard deviation. We select the best results among all layers and report them in Table 3. We find that GNNs with the SNR module consistently achieve the best performance in all cases **(Q1)**. However, many models with residual modules do not achieve the expected results; in many cases, the accuracy is even reduced compared with the base model. Following previous research (Zhao and Akoglu (2019)), we speculate that overfitting may have contributed to this phenomenon. To verify this hypothesis, we conduct further experiments. Given that most models in the previous experiments achieve their best performance with shallow architectures, we select models with two layers, train them for 500 epochs, and report their accuracy on the training and validation sets at each epoch. The results are shown in Appendix G. Most models show signs of overfitting, and the SNR module demonstrates the best ability to alleviate it. Specifically, in shallow GNNs with limited subgraph aggregation, most models have similar expressive abilities, and overfitting is the main factor affecting their performance. Our proposed method effectively alleviates overfitting by learning a more representative distribution, resulting in better performance than the base models.
### Alleviating Performance Drop in Deeper GNNs
As the number of layers in GNNs increases, oversmoothing occurs, resulting in performance degradation. Our objective is to investigate the performance of deep GNNs equipped with SNR and observe the impact of oversmoothing on their performance. We evaluate the performance of GNNs with different residual modules at 2, 16, and 32 layers on the Cora, Citeseer, and Pubmed datasets. The "None" column represents vanilla GNNs without any additional modules. According to (Chen et al. (2020)), APPNP is a shallow model, hence we use GCNII to represent GCN with initial residual connections instead. The same settings are also used in Section 5.4. The experimental results are presented in Table 4.
From Table 4, we can observe that GNNs with SNR consistently outperform other residual methods and the base models in most cases when given the same number of layers. SNR can significantly improve the performance of deep GNNs **(Q2)**. For instance, on the Cora dataset, SNR improves the performance of 32-layer GCN and GAT by **53.69%** and **56.20%**, respectively. By flexibly utilizing multiple subgraph aggregation results with our SNR module, we can enhance the expressive power of the model and produce more distinctive node representations than those of regular GNNs, thereby overcoming the oversmoothing problem. These results suggest that we can train deep GNNs based on SNR, making them suitable for tasks that require the use of deep GNNs.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline
**Method** & **Cora** & **Citeseer** & **Pubmed** & **CoraFull** & **Chameleon** & **Squirrel** \\ \hline GCN & 80.16\(\pm\)1.15 & 70.20\(\pm\)0.62 & 78.26\(\pm\)0.61 & 68.40\(\pm\)0.33 & 68.00\(\pm\)2.30 & 51.69\(\pm\)1.83 \\ ResGCN & 79.01\(\pm\)1.26 & 69.27\(\pm\)0.66 & 78.08\(\pm\)0.51 & 67.98\(\pm\)0.51 & 65.26\(\pm\)2.47 & 47.43\(\pm\)1.14 \\ APPNP & 79.04\(\pm\)0.84 & 69.64\(\pm\)0.49 & 76.38\(\pm\)0.12 & 37.77\(\pm\)0.43 & 59.80\(\pm\)2.68 & 43.17\(\pm\)1.01 \\ GCNII & 78.53\(\pm\)0.67 & 69.55\(\pm\)1.14 & 76.17\(\pm\)0.70 & 68.30\(\pm\)0.26 & 64.76\(\pm\)2.43 & 52.83\(\pm\)1.51 \\ DenseGCN & 77.24\(\pm\)1.12 & 65.03\(\pm\)1.58 & 76.93\(\pm\)0.78 & 64.52\(\pm\)0.71 & 59.04\(\pm\)2.07 & 38.98\(\pm\)1.25 \\ JKNet & 78.16\(\pm\)1.21 & 65.33\(\pm\)1.66 & 78.10\(\pm\)0.55 & 66.11\(\pm\)0.49 & 55.75\(\pm\)2.93 & 35.95\(\pm\)1.10 \\
**SNR-GCN (Ours)** & **81.17\(\pm\)0.72** & **70.39\(\pm\)1.01** & **78.34\(\pm\)0.62** & **69.80\(\pm\)0.28** & **72.04\(\pm\)1.89** & **58.35\(\pm\)1.55** \\ \hline GAT & 79.24\(\pm\)1.18 & 69.51\(\pm\)1.07 & 77.59\(\pm\)0.80 & 67.39\(\pm\)0.32 & 65.81\(\pm\)2.13 & 50.16\(\pm\)2.42 \\ Res-GAT & 78.43\(\pm\)0.99 & 68.15\(\pm\)1.25 & 77.27\(\pm\)0.52 & 67.67\(\pm\)0.32 & 69.08\(\pm\)2.50 & 49.77\(\pm\)1.72 \\ InitialRes-GAT & 77.77\(\pm\)1.51 & 67.48\(\pm\)2.15 & 77.46\(\pm\)1.17 & 65.49\(\pm\)0.42 & 65.90\(\pm\)2.98 & 52.83\(\pm\)2.39 \\ Dense-GAT & 78.27\(\pm\)2.22 & 64.92\(\pm\)1.94 & 76.84\(\pm\)0.64 & 66.61\(\pm\)0.63 & 63.86\(\pm\)3.03 & 43.01\(\pm\)1.34 \\ JK-GAT & 78.91\(\pm\)1.71 & 65.59\(\pm\)2.62 & 77.70\(\pm\)0.64 & 67.69\(\pm\)0.65 & 56.14\(\pm\)2.68 & 37.25\(\pm\)1.01 \\
**SNR-GAT (Ours)** & **79.65\(\pm\)0.84** & **69.85\(\pm\)0.67** & **77.76\(\pm\)0.93** & **68.00\(\pm\)0.27** & **69.54\(\pm\)2.22** & **55.14\(\pm\)1.78** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of classification accuracy (%) results with various depths. The best results are in bold and the second best results are underlined.
### Semi-supervised Node Classification with Missing Vectors
When do we need deep GNNs? Zhao and Akoglu (2019) first proposed semi-supervised node classification with missing vectors (SSNC-MV), where node features are missing. SSNC-MV is a practical problem with various real-world applications. For example, new users on social networks usually lack personal information (Rashid et al. (2008)). Obviously, we need more propagation steps to effectively aggregate information associated with existing users so that we can obtain representations of these new users. In this scenario, GNNs with more layers clearly perform better.
Previous research has shown that normalization techniques can be effective in mitigating oversmoothing and, further, in enabling deeper architectures. Therefore, we apply several techniques that can overcome oversmoothing, as well as residual modules, to GCN and GAT to compare their performance on tasks that require deep GNNs.
We remove the node features in the validation and test sets following the idea in (Jin et al. (2022); Zhao and Akoglu (2019); Zhou et al. (2020)). We reuse the metrics already reported in (Jin et al. (2022)) for None, BatchNorm (Ioffe and Szegedy (2015)), PairNorm (Zhao and Akoglu (2019)), DGN (Zhou et al. (2020)), DeCorr (Jin et al. (2022)), and DropEdge (Rong et al. (2019)). For all residual-based models, the results are obtained by varying the number of layers in \(\{1,2,3,\cdots,10,15,\cdots,30\}\) and running five times for each number of layers. We select the layer \#K that achieves the best
\begin{table}
\begin{tabular}{c|c|c c c|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{4}{c|}{GCN} & \multicolumn{4}{c}{GAT} \\ & & L2 & L16 & L32 & L2 & L16 & L32 \\ \hline \multirow{6}{*}{Cora} & None & 79.50\(\pm\)0.84 & 69.83\(\pm\)2.47 & 25.31\(\pm\)12.49 & 79.11\(\pm\)1.55 & 75.44\(\pm\)1.08 & 22.74\(\pm\)7.47 \\ & Res & 78.73\(\pm\)1.27 & 78.46\(\pm\)0.79 & 38.70\(\pm\)8.20 & 78.36\(\pm\)1.42 & 34.80\(\pm\)6.26 & 32.06\(\pm\)0.54 \\ & InitialRes & 77.67\(\pm\)0.51 & 77.74\(\pm\)0.73 & 77.92\(\pm\)0.56 & 77.20\(\pm\)1.54 & 74.99\(\pm\)0.75 & 25.08\(\pm\)7.27 \\ & Dense & 75.24\(\pm\)1.73 & 71.34\(\pm\)1.51 & 75.43\(\pm\)2.49 & 76.80\(\pm\)1.71 & 74.75\(\pm\)2.22 & 75.70\(\pm\)2.20 \\ & JK & 76.28\(\pm\)1.73 & 72.39\(\pm\)3.20 & 75.03\(\pm\)1.11 & 78.06\(\pm\)0.51 & 76.66\(\pm\)1.39 & 23.29\(\pm\)8.45 \\ & **SNR (Ours)** & **80.58\(\pm\)0.82** & **78.55\(\pm\)0.92** & **79.00\(\pm\)1.43** & **79.69\(\pm\)0.55** & **77.92\(\pm\)1.54** & **78.94\(\pm\)0.80** \\ \hline \multirow{6}{*}{Citeseer} & None & 68.31\(\pm\)1.40 & 54.07\(\pm\)2.48 & 34.84\(\pm\)1.60 & 68.64\(\pm\)1.20 & 59.16\(\pm\)2.44 & 24.37\(\pm\)3.59 \\ & Res & 67.68\(\pm\)1.36 & 63.99\(\pm\)1.12 & 52.96\(\pm\)4.27 & 67.55\(\pm\)1.10 & 28.53\(\pm\)4.93 & 24.70\(\pm\)4.12 \\ & InitialRes & 68.23\(\pm\)0.95 & **68.29\(\pm\)0.92** & **68.74\(\pm\)0.61** & 66.86\(\pm\)1.60 & 62.42\(\pm\)2.29 & 73.84\(\pm\)8.7 \\ & Dense & 64.83\(\pm\)0.94 & 58.42\(\pm\)2.96 & 58.75\(\pm\)3.37 & 64.58\(\pm\)2.07 & 61.17\(\pm\)1.78 & 61.87\(\pm\)2.91 \\ & JK & 64.69\(\pm\)1.44 & 58.38\(\pm\)3.36 & 58.63\(\pm\)4.76 & 65.84\(\pm\)2.02 & 62.64\(\pm\)1.66 & 23.09\(\pm\)4.02 \\ & **SNR (Ours)** & **70.18\(\pm\)0.61** & 67.07\(\pm\)1.78 & 66.27\(\pm\)2.00 & **69.71\(\pm\)0.92** & **67.51\(\pm\)2.28** & **66.53\(\pm\)2.48** \\ \hline \multirow{6}{*}{Pubmed} & None & 77.53\(\pm\)0.73 & 76.16\(\pm\)0.96 & 51.29\(\pm\)11.71 & 77.07\(\pm\)0.52 & 77.49\(\pm\)0.65 & 53.20\(\pm\)9.18 \\ & Res & 77.64\(\pm\)1.01 & 77.65\(\pm\)0.78 & 73.31\(\pm\)7.15 & 77.36\(\pm\)0.60 & 50.16\(\pm\)7.65 & 43.46\(\pm\)3.30 \\ & InitialRes & 75.66\(\pm\)0.82 & 75.15\(\pm\)0.48 & 75.31\(\pm\)0.55 & 77.42\(\pm\)0.79 & 77.42\(\pm\)0.82 & 44.96\(\pm\)5.91 \\ & Dense & 76.81\(\pm\)1.06 & 74.01\(\pm\)2.36 & 76.33\(\pm\)1.17 & 76.66\(\pm\)0.61 & 76.38\(\pm\)1.26 & 76.50\(\pm\)1.47 \\ & JK & 77.61\(\pm\)0.78 & 76.31\(\pm\)1.45 & 76.59\(\pm\)1.53 & 77.48\(\pm\)0.84 & 77.75\(\pm\)0.77 & 40.84\(\pm\)0.23 \\ & **SNR (Ours)** & **77.84\(\pm\)0.51** & **78.02\(\pm\)0.71** & **77.36\(\pm\)0.78** & **77.51\(\pm\)0.62** & **78.17\(\pm\)0.85** & **77.77\(\pm\)0.46** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Node classification accuracy (%) on different number of layers. The best results are in bold and the second best results are underlined.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{6}{c|}{GCN} & \multicolumn{6}{c}{GAT} \\ \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Cora} & \multicolumn{2}{c|}{Citeseer} & \multicolumn{2}{c|}{Pubmed} & \multicolumn{2}{c|}{Cora} & \multicolumn{2}{c|}{Citeseer} & \multicolumn{2}{c}{Pubmed} \\ & Acc & \#K & Acc & \#K & Acc & \#K & Acc & \#K & Acc & \#K & Acc & \#K \\ \hline None & 57.3 & 3 & 44.0 & 6 & 36.4 & 4 & 50.1 & 2 & 40.8 & 4 & 38.5 & 4 \\ BatchNorm & 71.8 & 20 & 45.1 & 25 & 70.4 & 30 & 72.7 & 5 & 48.7 & 5 & 60.7 & 4 \\ PairNorm & 65.6 & 20 & 43.6 & 25 & 63.1 & 30 & 68.8 & 8 & 50.3 & 6 & 63.2 & 20 \\ DGN & 76.3 & 20 & 50.2 & 30 & 72.0 & 30 & 75.8 & 8 & 54.5 & 5 & 72.3 & 20 \\ DeCorr & 73.8 & 20 & 49.1 & 30 & 73.3 & 15 & 72.8 & 15 & 46.5 & 6 & 72.4 & 15 \\ DropEdge & 67.0 & 6 & 44.2 & 8 & 69.3 & 6 & 67.2 & 6 & 48.2 & 6 & 67.2 & 6 \\ Res & 74.06\(\pm\)1.10 & 7 & 57.52\(\pm\)1.30 & \(\cdots\) & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Accuracy (%) and the number of layers \#K achieving the best performance on the SSNC-MV task.
performance and report its average accuracy along with the standard deviation. The results are reported in Table 5.
Our experiments show that GNNs with the SNR module outperform all previous methods **(Q3)**. Additionally, we find that for most models the number of layers needed to reach the best accuracy is relatively large, which indicates that more propagation steps are necessary to gather information from more distant nodes and thus obtain effective representations of nodes with missing features.
### Efficiency Experiment
In real-world tasks, the rate at which a model reaches its optimal performance during training is often important, as it determines the model's practical effectiveness and time consumption. To enable concrete measurement and comparison, we define the following metric for model training efficiency:
\[\textbf{Efficiency}\ =\ \frac{\textbf{Accuracy}}{\textbf{Time}} \tag{8}\]
where **Accuracy** denotes the accuracy of the model when it reaches its optimal performance and **Time** denotes the time at which the model reaches that performance. By this definition, a larger **Efficiency** represents a higher performance gain per unit time, and therefore a higher training efficiency.
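For concreteness, the measurement loop below sketches how this metric can be computed in Python; `train_one_epoch` and `evaluate` are assumed helper functions, not part of any specific library.

```python
import time

def training_efficiency(model, train_one_epoch, evaluate, num_epochs=100):
    """Efficiency (Eq. 8): test accuracy at the epoch of best validation
    accuracy, divided by the wall-clock time needed to reach that epoch."""
    start = time.time()
    best_val, best_test, time_to_best = -1.0, 0.0, 0.0
    for _ in range(num_epochs):
        train_one_epoch(model)
        val_acc, test_acc = evaluate(model)
        if val_acc > best_val:
            best_val, best_test = val_acc, test_acc
            time_to_best = time.time() - start
    return best_test / time_to_best
```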
Based on the above equation, we evaluate the training efficiency of vanilla GNNs and SNR-GNNs. We use 2-, 4-, 8-, 16-, 32-, and 64-layer models and average the five **Efficiency** values calculated for each depth. Specifically, each **Efficiency** is calculated based on the time for the model to reach the highest accuracy on the validation set within 100 epochs of training and the accuracy achieved on the test set at that time. Figure 3 shows the models' **Efficiency** on Cora. The results on other datasets are shown in Appendix H.
It can be noticed that the training efficiency decreases as the number of layers increases, which is due to the increase in training time caused by the rise in the number of model parameters. However, in most cases, compared to vanilla GNNs, our SNR module is able to maintain the highest training efficiency **(Q4)**.
## 6 Conclusion
Our work proposes a new perspective for understanding the expressive power of GNNs: the \(k\)-hop subgraph aggregation theory. From this perspective, we have reinterpreted and experimentally validated the reason why the performance of message-passing GNNs decreases as the number of layers increases. Furthermore, we have evaluated the expressive power of previous residual-based GNNs based on this perspective. Building on these insights, we propose a new sampling-based generalized residual module SNR and show theoretically that SNR enables GNNs to more flexibly utilize information from multiple \(k\)-hop subgraphs, thus further improving the expressive power of GNNs. Extensive experiments demonstrate that the proposed SNR can effectively address the issues of overfitting in shallow layers and oversmoothing in deep layers that are commonly encountered in message-passing GNNs, and significantly improves the performance, particularly in SSNC-MV tasks.
Figure 3: Efficiency for different models at different layers.
Our research will facilitate a deeper exploration of deep GNNs and enable a wider range of potential applications.
|
2304.05727 | Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks | Robustness has become an important consideration in deep learning. With the help of explainable AI, mismatches between an explained model's decision strategy and the user's domain knowledge (e.g. Clever Hans effects) have been identified as a starting point for improving faulty models. However, it is less clear what to do when the user and the explanation agree. In this paper, we demonstrate that acceptance of explanations by the user is not a guarantee for a machine learning model to be robust against Clever Hans effects, which may remain undetected. Such hidden flaws of the model can nevertheless be mitigated, and we demonstrate this by contributing a new method, Explanation-Guided Exposure Minimization (EGEM), that preemptively prunes variations in the ML model that have not been the subject of positive explanation feedback. Experiments demonstrate that our approach leads to models that strongly reduce their reliance on hidden Clever Hans strategies, and consequently achieve higher accuracy on new data. | Lorenz Linhardt, Klaus-Robert Müller, Grégoire Montavon | 2023-04-12T09:34:13Z | http://arxiv.org/abs/2304.05727v3 | # Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
###### Abstract
Explainable AI has become a popular tool for validating machine learning models. Mismatches between the explained model's decision strategy and the user's domain knowledge (e.g. Clever Hans effects) have also been recognized as a starting point for improving faulty models. However, it is less clear what to do when the _user and the explanation agree._ In this paper, we demonstrate that acceptance of explanations by the user is not a guarantee for an ML model to function well; in particular, some Clever Hans effects may remain undetected. Such hidden flaws of the model can nevertheless be mitigated, and we demonstrate this by contributing a new method, Explanation-Guided Exposure Minimization (EGEM), that _preemptively_ prunes variations in the ML model that have not been the subject of positive explanation feedback. Experiments on natural image data demonstrate that our approach leads to models that strongly reduce their reliance on hidden Clever Hans strategies, and consequently achieve higher accuracy on new data.
keywords: Clever Hans effect, model refinement, pruning, Explainable AI, deep neural networks
## 1 Introduction
Machine learning (ML) models such as deep neural networks have been shown to be capable of converting large datasets into highly nonlinear predictive models [44; 8; 54; 77; 86; 18]. As ML systems are increasingly being considered for high-stakes decision making, such as autonomous driving [4] or medical diagnosis [17; 90; 64; 81], building them in a way that they reliably maintain their prediction accuracy on new data is crucial.
Proper data splitting and evaluation of trained models on hold-out test sets have long been recognized as an essential part of the validation process (e.g. in [11]), but unfortunately, such techniques cannot detect all flaws of a model [45; 66; 27]. Misspecified loss functions, spurious correlations, or biased datasets can potentially compromise attempts to build well-generalizing models, without altering the measured accuracy. A failure to address these more elusive flaws might lead to catastrophic failures, as has been demonstrated numerous times (e.g. [62; 26; 88; 45; 31; 90; 10; 64]), which has spurred efforts to find potential causes of such failures (e.g. [25; 45; 58; 89; 14; 84; 60]). Furthermore, in modern real-world scenarios, data is often non-i.i.d. due to intrinsic heterogeneities in the data generating process (different users, locations, sensors, etc.) [35] and plagued with spurious correlations [16]. The task of robustification against the use of spurious features, also known as the Clever Hans (CH) effect [45] or shortcut learning [27], is an especially challenging endeavor because, to the model, such CH features are indistinguishable from truly generalizing ones.
Explainable AI (XAI) [30; 71; 9; 70; 34] is a natural starting point for robustification because it places a human in the loop: Explanation techniques seek to describe a model to the human in an intelligible way, e.g. based on features that can be visualized or that have a specific meaning. Using such methods, human experts can identify hidden flaws in the model [14] and provide useful feedback for model improvement [7; 56; 83; 74; 3]. For example, if a local explanation reveals the use of spuriously correlated features [45] (Clever Hans effect), the expert may take action to correct this flawed prediction strategy. More training examples that do not exhibit the spurious correlation may be provided [60] or the model may be trained to match the users' ground-truth explanations [65; 63; 79; 83; 74; 3].
However, the perhaps more common case, where the explanation returned to the user is _correct_, i.e. it agrees with the knowledge of the human expert, is so far little explored. In this paper, we demonstrate that a model validated with classical XAI pipelines is still not guaranteed to perform well on new data. Specifically, we find that CH strategies may remain undetected, especially when the data _available_ for validation is limited or incomplete, supporting recent findings by Adebayo et al. [1]. Given the rising popularity of _foundation models_ [13] (multi-purpose models made available in pretrained form by a third party), and given that such models do not always come with the full dataset used for training, this scenario is not an academic exercise but a timely concern.
To address the problem of undetected CH strategies, we propose to refine the original third-party model in a way that its overall feature exposure is reduced, subject to the constraint that explanations presented to the user remain the same. Specifically, we contribute a method, called Explanation-Guided Exposure Minimization (EGEM), that formulates an optimization problem weighting the feature exposure and explanation constraints. With mild approximations, our formulation simplifies to easy-to-implement soft-pruning rules that enable the removal or mitigation of undetected CH strategies.--Crucially, our refinement method only requires the data points whose predictions and explanations have been approved by the user. Neither prior knowledge about the spurious feature, nor data containing it is needed. Our proposal, as well as the context in which it operates are illustrated in Fig. 1.
To evaluate our approach, we simulate a number of scenarios of a user receiving a third-party model and possessing a subset of the data on which no CH strategies can be detected by classical XAI pipelines (e.g. LRP/SpRAy [6; 45]).
Results on image data demonstrate that our proposed EGEM approach (and its extension PCA-EGEM) delivers models with a much lower reliance on CH strategies, thereby achieving more stable prediction accuracy, especially when considering data with spurious features. Our approach also outperforms a number of existing and contributed baselines.
## 2 Related Work
In this section, we present related work on validating ML models that goes beyond classical validation techniques such as holdout or cross-validation [11] in order to address statistical artifacts such as domain shifts and spurious correlations. We make a distinction between methods relying on Explainable AI and users' explanatory feedback (Section 2.1), and a broader set of methods addressing domain shift and spurious correlations by statistical means (Section 2.2).
### Explainable AI and Clever Hans
Explainable AI (XAI) [29; 71; 93; 70; 53] is a major development in machine learning which has enabled insights into a broad range of black-box ML models. It has been shown to be successful at explaining complex state-of-the-art neural network classifiers [6; 62; 75; 92; 73], as well as a broader set of ML techniques such as unsupervised learning (e.g. [51; 39]). While most XAI methods generate an explanation for individual instances, solutions have been proposed to aggregate them into dataset-wide explanations that can be concisely delivered to the user [45].
Notably, XAI techniques have been successful at revealing CH features in ML models [45; 60]. Knowledge about the CH features can be used to desensitize the model to these features (e.g. via retraining [60] or layer-specific adaptations [3]). If ground-truth explanations are available (e.g. provided by a human expert), the model may be regularized to match these explanations [65; 63; 79], e.g. by minimizing the error on the explanation via gradient descent. Such adaptations to the users' expectations have also been shown to be effective in interactive settings [83; 74]. Our approach differs from these works as
Figure 1: Cartoon comparison of a naive XAI-based validation/deployment pipeline and our proposed approach incorporating an additional exposure minimization step. _Left:_ A third party trains a flawed (Clever Hans) model which exploits a spurious correlation in the data (images of the horse class have a copyright tag in the bottom-left corner). _Middle:_ The user receives the model from the third party. Because the user has limited data (in particular, no data with copyright tag), the flaw of the model cannot be easily detected with XAI methods and the model appears to be both accurate and right for the right reasons. _Right:_ Because of the undetected flaw, a naive deployment of the model is likely to result in prediction errors on new data (e.g. cats with copyright tags predicted to be horses). Our proposed approach preemptively reduces exposure to unseen features (the copyright tag), thereby avoiding these incorrect predictions.
we address the case where the available data does not contain CH features, hence making them indiscoverable by local explanations, and where the model is pretrained and thus cannot be regularized during training.
A different approach is DORA [15], which attempts to find potential CH features in a data-agnostic way and subsequently use the discovered candidate features to detect faulty decision strategies at deployment time. In contrast, we attempt to _robustify_ the network with no need for further post-processing at deployment. Furthermore, we examine the scenario where a limited amount of clean data is available, allowing us to employ conceptually different criteria besides outlierness.
### Robustness to Spurious Correlations
Our work is part of a larger body of literature concerned with _domain shift_ and how to design models robust to it. Yet, it concerns itself with a decidedly specialized and rather recent part of this area: unlearning or avoiding the use of spurious features in deep neural networks. Previous work attempting to create models that are robust against spurious correlations approached the problem from the angle of optimizing worst-group loss [22, 36, 68, 69, 80, 37, 43, 57]. This approach has shown to be effective in reducing reliance on CH features. Yet, these methods require access to samples containing the CH features and a labeling of groups in the data induced by these features. In particular, as previously pointed out by Kirichenko et al. [43], Group-DRO (distributionally robust optimization) [36], subsampling approaches [69, 37] and DFR (deep feature reweighting) [43] assume group labels on the training or validation data, and even methods that do away with these assumptions need to rely on group labels for hyper-parameter tuning [49, 22, 37]. Our setting is different from the ones above in that we assume that a pretrained model is to be robustified post hoc with limited data and that data from the groups containing the CH feature are not available at all. We believe this is a highly relevant scenario, considering the increasing prevalence of pretrained third-party models that have been trained on datasets that are unavailable or too large to fully characterize.
## 3 Explanation-Guided Exposure Minimization (EGEM)
Let us restate the scenario of interest in this paper, highlighted in Fig. 1: (1) a model provided in pretrained form by a third party, (2) a user who has limited data available to validate the third-party model and who concludes that the predictions and the associated decision strategies (as revealed by XAI) on this limited data are correct.--As argued before, in spite of the positive validation outcome, there is no guarantee that the model's decision strategy remains correct in regions of the input space not covered by the available data.
As a solution to the scenario above, we propose a preemptive model refinement approach, which we call _Explanation-Guided Exposure Minimization (EGEM)_. Technically, our approach is a particular form of knowledge distillation where the refined (or distilled) model should reproduce observed prediction strategies (i.e. predictions and explanations) of the original model on the available data. At the same time, the refined model should minimize its overall exposure to variations in the input domain so that undetected (potentially flawed) decision strategies are not incorporated into the overall decision strategy.
Let the original and refined model have the same architecture but distinct parameters \(\theta_{\text{old}}\) and \(\theta\). We denote by \(f(\mathbf{x},\theta_{\text{old}})\) and \(f(\mathbf{x},\theta)\) the predictions produced by the two models, and the explanations associated with their predictions as \(\mathcal{R}(\mathbf{x},\theta_{\text{old}})\) and \(\mathcal{R}(\mathbf{x},\theta)\) respectively. We then define the learning objective as
\[\min_{\theta}\ \mathbb{E}\left[\|\mathcal{R}(\mathbf{x},\theta)-\mathcal{R}(\mathbf{x },\theta_{\text{old}})\|^{2}+\Omega(\mathbf{x},\theta)\right] \tag{1}\]
where the expectation is computed over the available data, and where \(\Omega(\mathbf{x},\theta)\) is a function that quantifies the exposure of the model to the input variation when evaluated at the data point \(\mathbf{x}\).
Although the aforementioned formulation is general, it is not practical, because it would require optimizing a highly nonlinear and non-convex objective. Moreover, the objective depends on explanation functions which themselves may depend on multiple model evaluations, thereby making the optimization procedure intractable.
### A Practical Formulation for EGEM
To make the concept of explanation-guided exposure minimization effective, we will restrict our analysis to XAI methods that can attribute onto any layer of the model and whose produced scores have a particular structure. Specifically, we require the score assigned to a neuron \(i\) at a given layer to be decomposable in terms of the neurons \(j\) in the layer above, i.e. \(R_{i}=\sum_{j}R_{ij}\), and the terms of the decomposition should have the structure
\[R_{ij}=a_{i}\underbrace{\rho(w_{ij})\,d_{j}}_{s_{ij}} \tag{2}\]
where \(a_{i}\) denotes the activation of neuron \(i\), \(w_{ij}\) is the weight connecting neuron \(i\) to neuron \(j\) in the next layer, \(\rho\) is an increasing function satisfying \(\rho(0)=0\) (e.g. the identity function), and \(d_{j}\) is a term that only indirectly depends on the parameters in the given layer and that is reasonable to approximate as constant locally. Explanation techniques that produce explanation scores with such structure include backpropagation methods such as Layerwise Relevance Propagation (LRP) [6, 55] and gradient-based techniques such as \(\text{Gradient}\times\text{Input}\) (GI) and Integrated Gradients (IG). (See Supplementary Note A for derivations.)
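As a minimal numerical illustration of this structure for a single dense layer (taking \(\rho\) as the identity, as for Gradient\(\times\)Input), the messages and the resulting neuron relevances can be computed as follows; all array names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(4)                 # activations a_i of the lower layer
W = rng.standard_normal((4, 3))   # weights w_ij to the layer above
d = rng.standard_normal(3)        # top-down terms d_j, treated as constant

R = a[:, None] * W * d[None, :]   # messages R_ij = a_i * rho(w_ij) * d_j
R_i = R.sum(axis=1)               # relevance of neuron i: R_i = sum_j R_ij
```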
We now present our practical formulation of explanation-guided exposure minimization. First, it imposes explanation similarity between the original and the refined model on the messages \(R_{ij}\) at a specified layer of the network, which is not necessarily the input layer. Furthermore, it restricts the search for refined parameters to the weights of the same layer. Hence, our scheme can be interpreted as changing the parameters of a given layer so that the overall model minimizes its exposure, subject to the explanation at that layer remaining the same. Specifically, we solve:
\[\min_{w}\ \sum_{ij}\mathbb{E}\big{[}(R_{ij}(w_{ij})-R_{ij}(w_{ij}^{\text{old}})) ^{2}+\lambda(s_{ij}(w_{ij}))^{2}\big{]}, \tag{3}\]
where the expectation is taken over the available data. The first squared term constrains the explanations of the refined model to be close to those of the original model. The second squared term corresponds to the penalty used for exposure minimization. The quantity \(s_{ij}(w_{ij})\), which we use for this purpose, can be interpreted as the way in which the refined model responds to the activation of neuron \(i\) through neuron \(j\); in particular, if \((s_{ij})_{j}\) becomes zero, the model becomes unresponsive to the activation of neuron \(i\). An advantage of this formulation is that it has the closed-form solution:
\[\forall_{ij}:\ w_{ij}=\frac{\mathbb{E}[a_{i}^{2}d_{j}^{2}]}{\mathbb{E}[a_{i}^ {2}d_{j}^{2}]+\lambda\mathbb{E}[d_{j}^{2}]}w_{ij}^{\text{old}} \tag{4}\]
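To sketch where this solution comes from (taking \(\rho\) as the identity and treating \(d_{j}\) as constant, as above): for each pair \((i,j)\), the objective in Eq. (3) is a quadratic function of \(w_{ij}\),
\[\mathbb{E}\big[(w_{ij}-w_{ij}^{\text{old}})^{2}\,a_{i}^{2}d_{j}^{2}+\lambda\,w_{ij}^{2}d_{j}^{2}\big],\]
and setting its derivative \(2(w_{ij}-w_{ij}^{\text{old}})\,\mathbb{E}[a_{i}^{2}d_{j}^{2}]+2\lambda w_{ij}\,\mathbb{E}[d_{j}^{2}]\) to zero directly yields Eq. (4).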
See Supplementary Note B for a derivation. In other words, the refined model can be seen as a soft-pruned version of the original model where the pruning strength depends on how frequently and to what magnitude the input neuron is activated and how the model responds to the output neuron. If we further assume that \(a_{i}\) and \(d_{j}\) are independent (or more weakly that their squared magnitudes are decorrelated), then it can be shown that \(d_{j}\) vanishes from Equation (4). Furthermore, the refined model can be obtained by keeping the weights intact and inserting a layer directly after the activations that performs the scaling:
\[\forall_{i}:\ a_{i}\gets a_{i}c_{i} \tag{5}\]
with \(c_{i}=\mathbb{E}[a_{i}^{2}]/(\mathbb{E}[a_{i}^{2}]+\lambda)\). The pruning of the neural network architecture and the resulting loss of dependence on the CH feature are depicted in Fig. 2 (top).
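In code, the refinement step is correspondingly simple. The sketch below (in PyTorch; the function name is illustrative) estimates the coefficients from the available refinement data and rescales the layer's activations accordingly:

```python
import torch

@torch.no_grad()
def egem_coefficients(activations: torch.Tensor, lam: float) -> torch.Tensor:
    """Soft-pruning factors c_i = E[a_i^2] / (E[a_i^2] + lambda) of Eq. (5).

    activations: (num_samples, num_neurons) activations a_i collected at the
    chosen layer on the available (validated) data.
    """
    second_moment = activations.pow(2).mean(dim=0)   # E[a_i^2]
    return second_moment / (second_moment + lam)     # c_i in [0, 1)

# Refinement then amounts to the rescaling a_i <- a_i * c_i, e.g. via a
# forward hook on the chosen layer (a hook's non-None return value replaces
# the layer output in PyTorch):
# layer.register_forward_hook(lambda module, inp, out: out * c)
```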
The same approach can also be applied to convolutional layers. To calculate the scaling parameters \(c_{i}\), the activations of each channel are summed along the spatial dimensions. For refinement, the pruning coefficients are then applied to all activations of the corresponding feature map (cf. Eq. 5). Such a pruning strategy for convolutional layers can be derived exactly from Eq. (3) if one assumes activation maps of infinite size (or circular convolutions) and stride 1. For the majority of convolution layers used in practice, Eq. (5) derives from the objective formulation only approximately.
Figure 2: _Top:_ Cartoon depicting the removal of unseen Clever Hans strategies via our proposed exposure minimization approaches (EGEM and PCA-EGEM). The refined model only retains the dependence on \(a_{2}\) (a neuron detecting the actual horse) and removes its reliance on \(a_{3}\) (a neuron responsive to spurious copyright tags). _Bottom:_ Qualitative behavior of PCA-EGEM on ML models trained on real datasets. The models produced by our approach become robust to spurious features not seen at inspection time but occurring at deployment time. (Pixel-wise explanations are computed using the zennit package [2].)
### Pruning in PCA Space
Within the EGEM soft-pruning strategy, each dimension of a layer is pruned individually. In practice, this only allows eliminating undetected flawed strategies that use a set of neurons disjoint from the validated strategies. Because a given neuron may contribute both to detected and to undetected strategies, the standard version of EGEM may not be able to carry out the exposure minimization task optimally. To address this limitation, we propose PCA-EGEM, which inserts a virtual layer mapping activations to the PCA space (computed from the available data) and back (cf. Fig. 2). PCA-EGEM then applies soft-pruning as in Eq. (5), but in PCA space, that is:
\[\begin{split} h_{k}&=U_{k}^{\top}(\mathbf{a}-\mathbf{ \tilde{a}})\\ h_{k}&\gets h_{k}\cdot c_{k}\\ \mathbf{a}&\leftarrow\sum_{k}U_{k}h_{k}+\mathbf{\tilde{a}} \end{split} \tag{6}\]
with \(c_{k}=\mathbb{E}[h_{k}^{2}]/(\mathbb{E}[h_{k}^{2}]+\lambda)\). Here, \(\{U_{k}\}_{k=1}^{K}\) is the basis of PCA eigenvectors and \(\mathbf{\tilde{a}}\) is the mean of the activations over the available data. The motivation for such a mapping to the PCA space is that activation patterns that support observed strategies will be represented in the top PCA components. PCA-EGEM can therefore separate them better from the unobserved strategies, which are likely not spanned by the top principal components.
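A possible implementation of this virtual layer is sketched below (PyTorch; the PCA basis is obtained from an SVD of the centered activations of the available data, and the class name is illustrative):

```python
import torch

class PCAEGEMLayer(torch.nn.Module):
    """Virtual layer of Eq. (6): project activations onto the PCA basis of
    the available data, soft-prune each component, and project back."""

    def __init__(self, activations: torch.Tensor, lam: float):
        super().__init__()
        mean = activations.mean(dim=0)                 # \tilde{a}
        centered = activations - mean
        _, _, Vh = torch.linalg.svd(centered, full_matrices=False)
        h = centered @ Vh.T                            # projections h_k
        m2 = h.pow(2).mean(dim=0)                      # E[h_k^2]
        self.register_buffer("mean", mean)
        self.register_buffer("U", Vh.T)                # eigenvectors {U_k}
        self.register_buffer("c", m2 / (m2 + lam))     # pruning factors c_k

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        h = (a - self.mean) @ self.U                   # to PCA space
        h = h * self.c                                 # component-wise pruning
        return h @ self.U.T + self.mean                # back to activations
```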
While using PCA to find principal directions of interpretable features for GANs [28] has been proposed previously by Härkönen et al. [32] and has found application beyond that [76; 19], to our knowledge its use for the purpose of identifying a basis for exposure minimization is novel.
## 4 Experimental Evaluation
In this section, we evaluate the efficacy of the approaches introduced in Section 3 on various datasets that either naturally contain spurious correlations, giving rise to Clever Hans decision strategies, or have been modified to introduce such spurious correlations. After introducing the datasets, we demonstrate that the proposed approaches can mitigate the effect of CH behavior learned by various pretrained models. We will do this by evaluating our approaches on test datasets where the correlation of the CH feature and the true class is manipulated - i.e. a distribution shift is introduced. Additionally, we empirically explore the effect of the number of samples used for refinement and discuss the challenges of hyper-parameter selection in our setting in Sections 4.4 and 4.5. A more qualitative evaluation on the CelebA dataset [50] follows in Section 5.
### Datasets
We introduce here the datasets used to evaluate the proposed methods in Sections 4.3-4.5: a modified version of the MNIST dataset [46], the ImageNet dataset [23; 67], and the ISIC dataset [20; 85; 21]. Details on the preprocessing and the neural networks used for each dataset can be found in the Supplementary Notes.
_Modified MNIST_. The original MNIST dataset [46] contains 70,000 images of hand-written digits, 10,000 of which are test data. We create a variant in which digits of the class '8' are superimposed with a small artifact in the top-left corner, with a probability of 0.7 (see Fig. 2). In order to generate a natural yet biased split of the training data that separates an artifact-free set of refinement data, we train a variational autoencoder [42] on this modified dataset and manually choose a threshold along a latent dimension such that samples affected by the artifact fall only on one side. This defines a subset of 39,942 samples from which clean refinement datasets are sampled, and leaves a systematically biased subset (containing all modified '8' samples) that is only accessible during training. We train a small (2 convolutional and 2 fully connected layers) neural network on this dataset using binary cross-entropy loss over all ten classes on the whole training data.
_ImageNet_. We use the ILSVRC 2012 subset of the ImageNet dataset [23; 67], containing 1.2M natural images, each associated with one of 1000 classes, for training. For evaluation, we use the 50 labeled samples per class contained in the validation set. Previous work has identified multiple spurious correlations potentially affecting a model's output [3]. In our experiments, we use a watermark and web-address on images of the 'carton' class and a gray frame around images of the 'mountain bike' class as Clever Hans features (see Fig. 2). We vary their frequency in the test set by pasting these features on images (details in Supplementary Note C). The selected classes are evaluated in a binary classification setting against the most similar classes in terms of the output probabilities. Training set images used for refinement that do not contain the CH feature are selected manually for the 'carton' experiments and automatically for the 'mountain bike' experiment. For experiments on this dataset, we make use of the pretrained ResNet50 [33] (for the 'carton' class) and VGG-16 [78] (for the 'mountain bike' class) networks available in pytorch.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Dataset & Classes (Count) & CH \\ \hline MNIST & **8**, others (10) & 3-pixel corner \\ \hline ISIC2019 & **Melanocytic nevus**, others (8) & colored patch \\ \hline \multirow{4}{*}{ImageNet} & **carton**, crate (2) & watermark, www \\ \cline{2-3} & **carton**, envelope (2) & watermark, www \\ \cline{2-3} & **carton**, packet (2) & watermark, www \\ \cline{2-3} & **mountain bike**, bicycle-built-for-two (2) & frame \\ \hline \end{tabular}
\end{table}
Table 1: Overview of datasets, classification problems (poisoned class in **bold**, number of classes in brackets), and spurious features (CH).
_ISIC_. The ISIC 2019 dataset [20; 85; 21] consists of images containing skin lesions, each associated with one of eight medical diagnoses. The data is split into 22,797 samples for training and 2,534 for evaluation. We fine-tune a neural network based on a VGG-16 pretrained on ImageNet for this classification task using a cross-entropy loss. Some images of the class 'Melanocytic nevus' are contaminated with colored patches (see Fig. 2), which have been recognized as a potential CH feature [52; 63; 12; 3]. We manually remove all contaminated images after training and use this clean dataset for refinement. Images in the test set are contaminated at the desired ratio by pasting one extracted colored patch onto other images.
### Methods
We compare several methods for the mitigation of the Clever Hans effect. The most basic baseline is the original pretrained model (Original). We evaluate both EGEM and PCA-EGEM, as well as what we will call _response-guided exposure minimization_ (RGEM), which attempts to maintain the last-layer responses, rather than the explanations, while minimizing exposure by penalizing large weights. Specifically, RGEM solves
\[\min_{\theta}\ \ \mathbb{E}[(f(\mathbf{x},\theta)-f(\mathbf{x},\theta_{\rm old}))^{2} ]+\lambda\|\theta\|^{2} \tag{7}\]
where the expectation is computed over the available data. See Supplementary Note D for more details. Furthermore, we evaluate a version of the original model fine-tuned on the refinement set (Retrain) and a version of the original model where the last layer has been replaced by weights learned via ridge regression on the refinement data (Ridge). The latter is equivalent to linear probing or DFR, which has been shown to be effective in mitigating accuracy loss due to subpopulation shifts [72] and the Clever Hans effect, when hyper-parameter selection based on worst-group accuracy optimization is possible [43]. The formulation for Ridge can also be retrieved by replacing the output of the original model, \(f(\mathbf{x},\theta_{\rm old})\), in the formulation of RGEM in Supplementary Note D with the ground-truth labels.
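Since RGEM (and likewise Ridge) only touches the last linear layer, its solution reduces to a ridge regression and can be written in closed form. The sketch below is a minimal PyTorch rendering, where `feats` are penultimate-layer features of the available data and `targets` are either the original model's outputs (RGEM) or one-hot labels (Ridge); the bias term is ignored for brevity, and the function name is illustrative:

```python
import torch

@torch.no_grad()
def ridge_last_layer(feats: torch.Tensor, targets: torch.Tensor,
                     lam: float) -> torch.Tensor:
    """Closed-form minimizer of E[(W a - t)^2] + lambda ||W||^2.

    feats: (N, d) penultimate-layer features of the available data.
    targets: (N, C) old-model outputs f(x, theta_old) for RGEM,
             or ground-truth (one-hot) labels for the Ridge baseline.
    Returns the refined last-layer weights of shape (d, C).
    """
    n, d = feats.shape
    gram = feats.T @ feats / n + lam * torch.eye(d)
    return torch.linalg.solve(gram, feats.T @ targets / n)
```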
### Results
To evaluate robustness against spurious features we generate a fully poisoned test set by adding the CH artifact uniformly over all test images of all classes. This serves as a scenario where the correlation of the CH feature and the target class breaks. Such a distribution shift could, for example, happen in medical applications where a classifier might be trained on data in which the mode of data collection or the population characteristics of subjects are correlated with the outcome, but this correlation does not hold in the general case [64]. Note that while this poisoning scenario is an extreme case, it is not the worst case, as the class which was contaminated during training will also be modified with artifacts during testing. For refinement, 700 correctly predicted samples per class are used, oversampling images if fewer than 700 correctly predicted samples are in the available refinement data. For the modified MNIST and the ISIC dataset, we use 1000 randomly chosen test samples for each run of the evaluation, for ImageNet we use all available validation samples.
We evaluate the various models for all tasks under 0% and 100% uniform poisoning. Classification accuracy for intermediate levels of poisoning can be obtained by linear interpolation of these extremes. Figure 3 shows the obtained accuracy on those two levels of poisoning. An ideal model would obtain high accuracy with only a very small difference between clean and poisoned data. It should be invariant to the spurious feature and at most react to possible interference with other features, e.g. the spurious feature being pasted on top of a relevant part of the image, while not losing accuracy on the clean data. As expected, across all datasets increased poisoning reduces accuracy of the original model. _Importantly, this drop in accuracy cannot be detected without access to samples containing the CH feature_.
On the modified MNIST dataset, the original model loses about 30% of its clean-data accuracy when evaluated at the 100% poisoning level. All other models achieve both clean-data and 100%-poisoned accuracy levels within 4% of the original model's clean-data accuracy. While explanation-based methods lose slightly more clean-data accuracy than the other baselines, they display virtually no gap between clean-data accuracy and 100%-poisoned accuracy, making them the most predictable when no poisoned data is available.
On the more complex ISIC dataset, it can be observed that exposure to the CH feature cannot be completely removed by any of the methods. EGEM and PCA-EGEM still provide fairly robust models, with the highest poisoned-data accuracy and the smallest gap between 0%-poisoned and 100%-poisoned accuracy. PCA-EGEM retains clean-data accuracy while being the only method improving poisoned-data accuracy by more than 10 percentage points. The dataset provides a challenge for all other methods. Even though Retrain is the only method that improves clean-data accuracy in the refinement process, its poisoned-data accuracy is virtually the same as the original model's, indicating that the absence of a feature in the refinement data is not enough to remove it from the model, given only a limited amount of samples.
On the ImageNet tasks containing the 'carton' class, PCA-EGEM is the most robust refinement method. It is only outperformed in the 100% poisoned setting of the 'carton/envelope' task, where Retrain achieves the highest clean-data and poisoned accuracy. As we will see in Section 4.4, the inferior 100% poisoning accuracy of PCA-EGEM is a result of the hyper-parameter selection procedure and not fundamentally due to the pruning-based nature of the method. On the 100% poisoned setting of the 'mountain bike' task, no refinement method is able to achieve accuracy gains over the original model. This might be due to the small magnitude of the CH effect, resulting in the
clean-data loss due to refinement outweighing the robustness gain. This case also demonstrates that refinement is not beneficial in all scenarios and might not even lead to an improved 100% poisoned accuracy. Whether or not to refine should be decided based on whether the loss of clean-data accuracy can be tolerated.
Overall, this section's experiments demonstrate that the proposed refinement methods can preemptively robustify a pretrained model against Clever Hans effects, even if the latter cannot be observed from the limited available data. We could clearly establish that attempting to robustify against CH behavior in the absence of the associated artifact, or of knowledge thereof, is not a hopeless endeavor and can be addressed with relatively simple methods. Yet, the trade-off between clean-data accuracy and poisoned-data accuracy cannot be directly observed and thus needs to be resolved heuristically. We explore this aspect in the next section.
### Hyper-parameter Selection
The hyper-parameters optimized in the experiments in this section are the number of epochs for 'Retrain' and the regularization factor \(\lambda\) for all other refinement methods. For the deep exposure-based approaches, EGEM and PCA-EGEM, we do not optimize \(\lambda\) for each layer directly; rather, we employ an approach inspired by the triangular method of Ashouri et al. [5] and earlier work [61], in which pruning strength increases with the layer index, allowing us to reduce the number of parameters to optimize to one. In particular, we define thresholds \(\tau_{l}\) that denote the desired average pruning ratio per layer, where \(l\) is the layer index within the set of \(L\) layers to be refined:
\[\tau_{l}=\begin{cases}\alpha&\text{if }l=1,\\ 1&\text{if }l=L,\\ \alpha+(l-1)\times\frac{1-\alpha}{L-1}&\text{otherwise},\end{cases}\]
and optimize \(\alpha\). \(\lambda_{l}\) is then set such that the average pruning factor \(\mathbb{E}_{\mathbf{x}\in\mathbf{X}}\,\mathbb{E}_{j}\,c_{j}\) from Eq. (4) for layer \(l\) is at least \(\tau_{l}\). The search for \(\lambda_{l}\) given \(\tau_{l}\) can be easily implemented as exponential search.
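The sketch below illustrates both ingredients: the triangular threshold schedule and an exponential search that finds (up to a factor of two) the largest \(\lambda_{l}\) for which the mean pruning factor still meets the target. The per-neuron second moments are assumed to be precomputed on the refinement data, and all function names are illustrative:

```python
import numpy as np

def pruning_thresholds(alpha: float, L: int) -> np.ndarray:
    """Triangular schedule: tau_1 = alpha, increasing linearly to tau_L = 1."""
    return np.array([alpha + (l - 1) * (1 - alpha) / (L - 1)
                     for l in range(1, L + 1)])

def lambda_for_threshold(second_moments: np.ndarray, tau: float) -> float:
    """Largest lambda (up to a factor 2) such that the mean pruning factor
    E_j[c_j], with c_j = m_j / (m_j + lambda), is still at least tau.
    second_moments: per-neuron E[a_j^2] estimated on the refinement data."""
    lam = 1e-8
    while (second_moments / (second_moments + lam)).mean() >= tau:
        lam *= 2.0               # double until the constraint would break
    return lam / 2.0             # last value that satisfied the constraint
```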
Ideally, the hyper-parameters should be set such that classification loss is minimized while exposure to the spurious artifact is negligible. While classification loss on clean data can be readily approximated by evaluating the loss function on the refinement data, exposure to the spurious artifact is a more elusive quantity and cannot be measured without a priori knowledge of the spurious artifact. In previous work (e.g. [36; 68; 69; 22; 80; 37; 43; 57]) it is assumed that for each class a set of samples with and without the spurious artifact is given and in most cases that the _worst-group-accuracy_ can be directly optimized or at least used for hyper-parameter selection, circumventing this problem. Since in our problem setting access to samples with the artifact is not given, this metric for parameter selection is not available and we need to establish a heuristic approach. Assuming that the classification loss on clean data can be approximated accurately, one option is to pick the strongest refinement hyper-parameter (i.e. highest number of epochs, largest \(\lambda\) or smallest \(\alpha\)) from a pre-defined set (see Supplementary Note F) for which the validation accuracy after refinement is at least as high as the one achieved by the original model.
As it is possible that strong refinement also impairs the use of generalizing features, there may be a trade-off between clean-data accuracy and robustness to spurious features. That optimizing overall clean-data accuracy is generally not the best approach to optimizing overall accuracy is highlighted by the fact that other works optimize _worst-group-accuracy_, as mentioned above. We explore the accuracy trade-off in Fig. 4 by introducing a 'slack' parameter \(s\) to the hyper-parameter selection for PCA-EGEM. We refer to Supplementary Note G for the results of all methods. The refinement hyper-parameter is then chosen as the strongest regularization, given that the validation accuracy is at most \(s\%\) smaller than the one achieved by the original model. The idea is that minimizing loss of
Figure 3: The accuracy for 0% (lighter shade) and uniform 100% (darker shade) poisoning with the spurious feature. The last four bars for each method refer to the binary tasks constructed from the ImageNet dataset. Solid lines show average 100%-poisoned accuracy and dashed lines show average clean-data accuracy over all datasets. The results shown are the mean accuracy and standard deviation obtained using 700 refinement samples per class on the respective test sets over five runs.
classification accuracy on the refinement data prevents removing too much exposure to useful features, yet, allowing for some slack counteracts the tendency to choose trivial least-refinement solutions.
We suspect that in the simple case of the modified MNIST dataset, the model only learned a few important high-level features and that the CH feature is close to being disentangled in some layer of the network. This scenario is a natural fit for pruning methods, which could simply remove the outgoing connections of the node corresponding to the CH feature. Stronger refinement risks pruning useful features as well, an effect that can be observed in Fig. 4. For most datasets, we can observe that the accuracy curves first converge to or maintain a minimal 0%-100% poisoning gap. In this regime PCA-EGEM prunes unused or CH features. After crossing a certain level of slack, both accuracy values deteriorate as features necessary for correct classification are pruned as well.
The results previously presented in Fig. 3 are the outcomes for \(s=5\%\). This is a heuristic and it can be seen from Fig. 4 that different values of slack may be beneficial to increase robustness, depending on the dataset. We also show in Supplementary Notes G and H that PCA-EGEM provides the most robust refinement over a large range of slack values. As slack cannot be optimized w.r.t. the true deployment-time accuracy, we propose to set \(s\) between 1% and 5% as a rule of thumb.
In principle, another hyper-parameter is the choice of layers to refine. Knowledge of the type of Clever Hans could potentially guide this choice [3; 47] as the layer in which a concept is best represented may differ across concepts [41]. Since we do not assume such knowledge in our experiments, we simply refine the activations after every ResNet50 or VGG-16 block for the parts of the models that are derived from those architectures and additionally after every ReLU following a fully connected or convolutional layer that is not contained in a ResNet or VGG block. For 'Retrain' we fine-tune the whole network and RGEM and Ridge are restricted to the last layer.
### The Effect of the Sample Size
As the number of instances available for refinement is limited, a natural question is what impact the number of samples has on the efficacy of refinement and if refining with too few instances can be detrimental. In this section, we repeat the experiment from Section 4.3 for refinement datasets containing 25, 50, 200, 500, and 700 instances per class for 0% and 100% uniform poisoning. Slack is again set to 5%. If for some classes fewer correctly classified instances are available, these are over-sampled to achieve the desired number.
The effect of varying sample size is shown in Figure 5. It can be seen that, especially in the low-sample regime, the positive effect of refinement is modulated by the number of instances. See Supplementary Note H for all other methods. While refinement with a small sample size appears in most cases to be remarkably effective for increasing 100%-poisoned accuracy, clean-data accuracy tends to suffer, as the sample does not cover all of the features necessary to generalize, some of which are thus pruned away. For this reason, a larger refinement sample is in most cases beneficial, in particular for preserving clean-data accuracy. Yet, two cases stand out as breaking this rule: the modified MNIST dataset and the 'carton/envelope' task. In both cases, the gap between 0%- and 100%-poisoned accuracy is close to constant, suggesting that the drop in accuracy stems from a loss of generalizing features rather than a loss of robustness, as could be induced e.g. by samples contaminated by CH features. Considering the effect of slack, displayed in Fig. 4, we can also see that these two scenarios are the cases for which 5% slack is not optimal. We hypothesize that here, the negative effect of increasing sample size stems from the interrelation between sample size and refinement strength. In particular, for EGEM and PCA-EGEM, using fewer instances generally means less coverage of the feature space, which leads to more zero or near-zero coefficients in the pruning procedure (cf. Eq. 5). Hence, for EGEM and PCA-EGEM, larger sample sizes potentially lead to _weaker_ refinement, which can be similar in effect to
Figure 4: Accuracy under variations of the slack parameter. Higher slack means higher refinement-data loss is accepted when selecting the refinement hyper-parameter. The dotted line indicates clean-data accuracy whereas the solid line indicates 100% poisoned data accuracy. Mean and standard deviation are computed over 5 runs.
a decrease in slack.
Since clean-data accuracy can be evaluated on held-out data, and the spread between clean-data and poisoned-data accuracy is small (as demonstrated by the relatively small shaded area in Fig. 5), applying PCA-EGEM results in fairly predictable 100%-poisoning performance across a wide range of sample sizes.
## 5 Use Case on CelebA: Reducing Bias
In this section, we will take a closer look at the effect of applying PCA-EGEM to a model trained on the CelebA dataset [50]. In contrast to the previous experiments, we do not evaluate based on a specific known CH feature, but rather conduct the analysis in an exploratory manner, uncovering subpopulations for which a learned CH behavior leads to biased classifications. In practice, such an analysis could be done in hindsight, e.g. when PCA-EGEM has been applied before deployment, and its effect is later evaluated on new samples collected during deployment.
The CelebA dataset contains 202,599 portrait images of celebrities, each associated with 40 binary attributes. The existence of spurious correlations in the CelebA dataset has been documented previously [69; 91; 40; 68], and Supplementary Note C.2 shows that the attributes in the training set are correlated to various degrees. We train a convolutional neural network (details in Supplementary Note E) on the 'train' split of the CelebA dataset using cross-entropy loss on a 'blond hair'-vs-not classification task. The training data is stratified, and we achieve a binary test accuracy of 93%, which is comparable to the accuracy reported in other works, e.g. Sagawa et al. [68]. We regard this classifier as a model given to the user by a third party.
In the following, we will assume a scenario where the user seeks to use the third-party classifier to retrieve blond people from a set of images available during deployment. They wish this retrieval process to be accurate and not biased against subgroups in the population. In order to analyze the impact of applying PCA-EGEM on such a retrieval task, we simulate a validation set where the user has a limited subset of 'clean' examples, specifically, 200 examples of both classes that are correctly predicted by the model and whose explanations highlight the actual blond hair, as determined by LRP scores falling dominantly within the area of the image where the hair is located (see Supplementary Note F). These explanations (all considered valid by the user) are then fed to PCA-EGEM in order to produce a model that is more robust to potential unobserved Clever Hans effects. As in the previous experiments, we use 5% slack, which translates here to \(\alpha=0.01\).
### PCA-EGEM Reduces Exposure to Shirt Collars
After the model is deployed, the analysis of the decision strategy (of the original and the refined model) can be reexamined in light of the new data now available. Fig. 6 shows explanations for some retrieved images, specifically, evidence for them being predicted to be blond. We can observe that pixels displaying hair are considered to be relevant and remain so after refinement.

Figure 5: Effect of the number of instances used for refinement. The dotted line indicates clean-data accuracy, whereas the solid line indicates 100%-poisoned data accuracy. Mean and standard deviation are computed over 5 runs.

Figure 6: Test set images that exhibit strong changes in the detection of blond hair, and corresponding LRP explanations before and after refinement. Red indicates positive and blue negative contribution to the detection of blond hair. Shirt collars and similar features appear to inhibit the prediction of blond hair in the original model, but less so in the refined one.
In contrast, one can identify a significant change of strategy before and after refinement in the lower part of the image: The original model appears to make heavy use of shirt and suit collars as a feature inhibiting the detection of blond hair, whereas such inhibiting effect is much milder in the refined model. This observation suggests that PCA-EGEM has effectively mitigated a previously unobserved Clever Hans strategy present in the original model, and as a result, effectively aided the retrieval of images with collars on them.
### PCA-EGEM Balances Recall Across Subgroups
We will now analyze the implications of the Clever Hans effect reduction by PCA-EGEM for specific subgroups, in particular, whether certain subgroups benefit from the model refinement in terms of recalling members with the attribute 'blond'.
To this end, we randomly sample for every attribute in the dataset, a subset of 5000 images from the test data that only contains samples exhibiting this attribute. If fewer images are available for some attribute, we use all of the available samples. We evaluate the classifier with and without the application of PCA-EGEM on each of these subgroups.
Figure 7 shows recall scores for each subgroup before and after application of PCA-EGEM. We observe a substantial increase of recall on low-recall subgroups, such as 'Wearing_Necktie', 'Goatee', and 'Male'. Most high-recall groups see only minuscule negative effects. Overall, while having almost no effect on the dataset-wide recall, the application of PCA-EGEM rebalances recall in favor of under-recalled subsets. Our investigation thus demonstrates that a model bias responsible for under-detecting blond hair in these subgroups has been mitigated by applying PCA-EGEM, which consequently leads to a set of retrieved images that is more representative of the different subgroups and more diverse.
It is of theoretical interest to ask whether such a rebalancing effect would generalize to other scenarios. An argument is that the underrepresentation of certain subgroups in the retrieved set is mainly caused by subgroups with low prevalence of the class of interest being actively suppressed by the model in order to optimize its accuracy. In practice, such suppression can be achieved by identifying features specific to the subgroup and, although causally unrelated to the task, making these features contribute negatively to the output score. Our PCA-EGEM technique, by removing such task-irrelevant Clever Hans features, redresses the decision function in favor of these low-prevalence subgroups, thereby leading to a more balanced set of retrieved instances.
Two outliers to the overall rebalancing effect can however be noted in Fig. 7: 'Wearing_Hat' and 'Blurry'. Interestingly, these are two subgroups in which the feature of interest (the hair) is occluded or made less visible. In other words, in these two subgroups, only weakly correlated features are available for detection, and their removal by PCA-EGEM consequently reduces the recall. An underlying assumption behind the rebalancing effect is therefore that the true features are detectable in the input image without resorting to weakly or spuriously correlated features.
Overall, we have demonstrated in our CelebA use case that PCA-EGEM can be useful beyond raising accuracy on disadvantageous test-set distributions. Specifically, we have shown that our PCA-EGEM approach enables the retrieval of a more diverse set of positive instances from a large heterogeneous dataset.
## 6 Open Questions
We have demonstrated the efficacy of the proposed methods for the mitigation of Clever Hans effects in Sections 4.3 and 5; however, it can also be observed that 1) a complete removal of the model's response to the spurious (CH) feature is usually not achieved, and 2) classification accuracy on clean data may suffer. We suggest that there are multiple reasons for these undesired effects.
Firstly, in deep neural networks, CH features are generally not neatly disentangled from generalizing features. This means that either entangled well-generalizing features might suffer from pruning, reducing clean-data accuracy, or CH features might not be pruned due to being entangled with a feature present in the clean dataset. The latter would inhibit robustification against the CH feature. While the PCA-EGEM extension we have proposed achieves some basic form of disentanglement, more refined disentanglement methods, based for example on finding independent components, could be considered in future work.
A second open question is posed by the fact that the number of examples for which one collects explanatory feedback is limited. Thus, not all generalizing features may be present in the refinement data, thereby leading to these features being pruned away. Methods to draw more extensively from the user's explanatory feedback (e.g. rendering explanations in a more intuitive way so that more examples can be inspected more precisely, or presenting the examples considered most informative to the user first) should be the focus of further investigation.
As pointed out in previous work [40], removing spurious (CH) features can hurt performance on data where the spurious correlation holds. Thus, un-biasing methods such as the refinement approaches introduced in this paper should be used under the assumption that they may hurt classification accuracy on biased data.
We also point out that our technique relies on the refinement set not containing CH features. This may naturally be satisfied in many cases where samples for refinement originate from a different data source than the
training data. Generally, our methods rely on post-hoc attribution methods to validate that the given samples are clean. While there have been concerns about the effectiveness of attribution methods in some scenarios [1], these concerns could be addressed in future work by moving beyond pixel-wise attribution for explanation validation, e.g. using concept-based or counterfactual explanations [82; 87; 24].
## 7 Conclusion
Sensitivity to distribution shifts, such as the ones induced by spurious correlations (so-called Clever Hans effects), has long been an Achilles heel of machine learning approaches such as deep learning. The problem becomes even more pronounced with the increasing adoption of foundation models, for which the training data may not be public and is thus closed to scrutiny. Explanation techniques have the potential to uncover such deficiencies by putting a human in the loop [48; 45; 56; 71]. Previous work in XAI has mainly focused on improving explanations or fixing flaws in the model that have been identified by the user from such explanations. In contrast, we have considered the under-explored case where the human and the explanation _agree_ but where there are possibly unobserved spurious features that the model is sensitive to. While recent work has shown that XAI-based validation techniques may fail to detect some of these Clever Hans strategies employed by a model [1], we have argued that one can nevertheless still reduce the exposure of a model to some of these hidden strategies, and we have demonstrated this via our contributed Explanation-Guided Exposure Minimization approach.
Our approach, while formulated as an optimization problem, reduces to simple pruning rules applied in intermediate layers, thereby making our method easily applicable, without retraining, to complex deep neural network models such as those used in computer vision. Our method was capable of systematically improving prediction performance on a variety of complex classification problems, outperforming existing and contributed baselines.
Concluding this paper, we would like to emphasize the novelty of our approach, which constitutes an early attempt to leverage correct explanations for producing refined ML models and to tackle the realistic scenario where Clever Hans features are not accessible. We believe that in future work, the utility derived from explanations via refinement can be expanded further, e.g. by letting the user specify what is correct and what is incorrect in an explanation so that the two components can be treated separately, or by identifying the sets of examples that are most useful to present to the user for model refinement, for example by ensuring that they cover the feature space adequately or by active learning schemes.
## 8 Acknowledgements
This work was partly funded by the German Ministry for Education and Research (under refs 01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18056A, 01IS18025A and 01IS18037A), the German Research Foundation (DFG) as Math+: Berlin Mathematics Research Center (EXC 2046/1, project-ID: 390685689). Furthermore KRM was partly supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program, Korea University and No. 2022-0-00984, Development of Artificial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Explanation). We thank Pattarawat Chormai and Christopher Anders for the valuable discussion and the extracted watermark artifact.
## References
* [1] J. Adebayo, M. Muelly, H. Abelson, and B. Kim. Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation. In _Proceedings of the 10th International Conference on Learning Representations_, 2022.
* [2] C. J. Anders, D. Neumann, W. Samek, K.-R. Muller, and S. Lapuschkin. Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. _arXiv preprint_, 2106.13200, 2021.
* [3] C. J. Anders, L. Weber, D. Neumann, W. Samek, K.-R. Muller, and S. Lapuschkin. Finding and removing Clever Hans: Using explanation methods to debug and improve deep models. _Information Fusion_, 77:261-295, 2022.
* [4] S. Aradi. Survey of deep reinforcement learning for motion planning of autonomous vehicles. _IEEE Transactions on Intelligent Transportation Systems_, 23(2):740-759, 2022.
* [5] A. H. Ashouri, T. S. Abdelrahman, and A. Dos Remedios. Retraining-Free Methods for Fast on-the-Fly Pruning of Convolutional Neural Networks. _Neurocomputing_, 370(C):56-69, 2019.
* [6] S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Muller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. _PLoS ONE_, 10(7):e0130140, 2015.
* [7] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K.-R. Muller. How to Explain Individual Classification Decisions. _Journal of Machine Learning Research_, 11:1803-1831, 2010.
* [8] D. Bahdanau, K. Cho, and Y. Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In _Proceedings of the 3rd International Conference on Learning Representations_, 2015.
* [9] A. Barredo Arrieta, N. Diaz-Rodriguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, and F. Herrera. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. _Information Fusion_, 58:82-115, 2020.
* [10] A. Binder, M. Bockmayr, M. Hagele, S. Wienert, D. Heim, K. Hellweg, M. Ishii, A. Stenzinger, A. Hocke, C. Denkert, et al. Morphological and molecular breast cancer profiling through explainable machine learning. _Nature Machine Intelligence_, 3(4):355-366, 2021.
* [11] C. Bishop. _Pattern Recognition and Machine Learning_. Springer, 2006.
* [12] A. Bissoto, E. Valle, and S. Avila. Debiasing Skin Lesion Datasets and Models? Not So Fast. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, 2020.
* [13] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, et al. On the Opportunities and Risks of Foundation Models. _arXiv preprint_, 2108.07258, 2021.
* [14] S. Booth, Y. Zhou, A. Shah, and J. Shah. Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example. In _Proceedings of the 35th AAAI Conference on Artificial Intelligence_, 2020.
* [15] K. Bykov, M. Deb, D. Grinwald, K.-R. Muller, and M. M. C. Hohne. DORA: Exploring outlier representations in Deep Neural Networks. _arXiv preprint_, 2206.04530, 2022.
* [16] C. S. Calude and G. Longo. The deluge of spurious correlations in big data. _Foundations of Science_, 22(3):595-612, Mar. 2016.
* [17] D. Capper, D. T. W. Jones, M. Sill, V. Hovestadt, D. Schrimpf, et al. DNA methylation-based classification of central nervous system tumours. _Nature_, 555(7697):469-474, 2018.
* [18] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A Simple Framework for Contrastive Learning of Visual Representations. In _Proceedings of the 37th International Conference on Machine Learning_, volume 119, pages 1597-1607, 2020.
* [19] P. Chormai, J. Herrmann, K.-R. Muller, and G. Montavon. Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces. _arXiv preprint_, 2212.14855, dec 2022.
* [20] N. C. F. Codella, D. Gutman, M. E. Celebi, B. Helba, M. A. Marchetti, S. W. Dusza, A. Kalloo, K. Liopyris, N. Mishra, H. Kittler, and A. Halpern. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). _arXiv preprint_, 1710.05006, 2018.
* [21] M. Combalia, N. C. F. Codella, V. Rotemberg, B. Helba, V. Vilaplana, O. Reiter, C. Carrera, A. Barreiro, A. C. Halpern, S. Puig, and J. Malvehy. BCN20000: Dermoscopic Lesions in the Wild. _arXiv preprint_, 1908.02288, 2019.
* [22] E. Creager, J.-H. Jacobsen, and R. Zemel. Environment Inference for Invariant Learning. In _Proceedings of the 38th International Conference on Machine Learning_, volume 139, pages 2189-2200, 2021.
* [23] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 248-255, 2009.
* [24] A.-K. Dombrowski, J. E. Gerken, K.-R. Muller, and P. Kessel. Diffeomorphic Counterfactuals with Generative Models. _arXiv preprint_, 2206.05075, jun 2022.
* [25] D. J. Fremont, X. Yue, T. Dreossi, A. L. Sangiovanni-Vincentelli, S. Ghosh, and S. A. Seshia. Scenic: A language for scenario specification and scene generation. In _Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)_, pages 63-78. Association for Computing Machinery, 2019.
* [26] T. Gebru and J. Buolamwini. Gender Shades: Intersection Accuracy Disparities in Commercial Gender Classification. In _Proceedings of the 1st Conference on Fairness, Accountability and Transparency_, volume 81, pages 77-91, 2018.
* [27] R. Geirhos, J. H. Jacobsen, C. Michaelis, R. Zemel, W. Brendel, M. Bethge, and F. A. Wichmann. Shortcut learning in deep neural networks. _Nature Machine Intelligence_, 2(11):665-673, 2020.
* [28] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In _Advances in Neural Information Processing Systems_, volume 27, pages 2672-2680, 2014.
* [29] D. Gunning and D. W. Aha. Darpa's explainable artificial intelligence (XAI) program. _AI Mag._, 40(2):44-58, 2019.
* [30] D. Gunning, M. Steffik, J. Choi, T. Miller, S. Stumpf, and G.-Z. Yang. XAI-Explainable artificial intelligence. _Science Robotics_, 4(37):eaay7120, 2019.
* [31] M. Hagele, P. Segeerer, S. Lapuschkin, M. Bockmayr, W. Samek, F. Klauschen, K.-R. Muller, and A. Binder. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. _Scientific reports_, 10:6423, 2020.
* [32] E. Harkonen, A. Hertzmann, J. Lehtinen, and S. Paris. GANSpace: Discovering Interpretable GAN Controls. In _Advances in Neural Information Processing Systems_, volume 33, pages 9841-9850, 2020.
* [33] K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 770-778. IEEE, jun 2016.
* [34] A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Muller, and W. Samek, editors. _xxAI - Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020_, volume 13200 of _Lecture Notes in Computer Science_. Springer, 2022.
* [35] K. Hsieh, A. Phanishayee, O. Mutlu, and P. B. Gibbons. The non-lid data quangime of decentralized machine learning. In _ICML_, volume 119 of _Proceedings of Machine Learning Research_, pages 4387-4398. PMLR, 2020.
* [36] W. Hu, G. Niu, I. Sato, and M. Sugiyama. Does distributionally robust supervised learning give robust classifiers? In _Proceedings of the 35th International Conference on Machine Learning_, volume 80, pages 2029-2037, 2018.
* [37] B. Y. Idrissi, M. Arjovsky, M. Pezeshki, and D. Lopez-Paz. Simple data balancing achieves competitive worst-group-accuracy. In _Proceedings of the First Conference on Causal Learning and Reasoning_, volume 177, pages 336-351, 2022.
* [38] P. Jurmeister, S. Gloss, R. Roller, M. Leitheiser, S. Schmid, L. H. Mochmann, et al. DNA methylation-based classification of sinonasal tumors. _Nature Communications_, 13(1):7148, 2022.
* [39] J. Kauffmann, M. Esders, L. Ruff, G. Montavon, W. Samek, and K.-R. Muller. From Clustering to Cluster Explanations via Neural Networks. _IEEE Transactions on Neural Networks and Learning Systems_, pages 1-15, 2022.
* [40] F. Khani and P. Liang. Removing spurious features can hurt accuracy and affect groups disproportionately. In _Proceedings of the ACM Conference on Fairness, Accountability, and Transparency_, pages 196-205. ACM, 2021.
* [41] B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas, and R. Sayres. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In _Proceedings of the 35th International Conference on Machine Learning_, volume 80, pages 2668-2677, 2018.
* [42] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. _arXiv preprint_, 1312.6114, 2014.
* [43] P. Kirichenko, P. Izmailov, and A. G. Wilson. Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. _arXiv preprint_, 2204.02937, 2022.
* [44] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In _Proceedings of the 25th International Conference on Advances in Neural Information Processing Systems_, pages 1097-1105, 2012.
* [45] S. Lapuschkin, S. Waldchen, A. Binder, G. Montavon, W. Samek, and K.-R. Muller. Unmasking Clever Hans predictors and assessing what machines really learn. _Nature Communications_, 10(1):1096, 2019.
* [46] Y. LeCun and C. Cortes. MNIST handwritten digit database. [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/), 1998.
* [47] Y. Lee, A. S. Chen, F. Tajwar, A. Kumar, H. Yao, P. Liang, and C. Finn. Surgical Fine-Tuning Improves Adaptation to Distribution Shifts. _arXiv preprint_, 2210.11466, oct 2022.
* [48] Z. C. Lipton. The Mythos of Model Interpretability. _Communications of the ACM_, 61(10):35-43, 2016.
* [49] E. Z. Liu, B. Haghgoo, A. S. Chen, A. Raghunathan, P. W. Koh, S. Sagawa, P. Liang, and C. Finn. Just Train Twice: Improving Group Robustness without Training Group Information. In _Proceedings of the 38th International Conference on Machine Learning_, volume 139, pages 6781-6792, 2021.
* [50] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In _Proceedings of International Conference on Computer Vision (ICCV)_, 2015.
* [51] P. Liznerski, L. Ruff, R. A. Vandermeulen, B. J. Franks, M. Kloft, and K.-R. Muller. Explainable Deep One-Class Classification. _arXiv preprint_, 2007.01760, 2020.
* [52] N. K. Mishra and M. E. Celebi. An Overview of Melanoma Detection in Dermoscopy Images Using Image Processing and Machine Learning. _arXiv preprint_, 1601.07843, 2016.
* [53] B. Mittelstadt, C. Russell, and S. Wachter. Explaining Explanations in AI. In _Proceedings of the Conference on Fairness, Accountability, and Transparency_, pages 279-288. ACM, 2019.
* [54] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529-533, feb 2015.
* [55] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K.-R. Muller. Layer-Wise Relevance Propagation: An Overview. In _Explainable AI: Interpreting, Explaining and Visualizing Deep Learning_, pages 193-209. Springer International Publishing, 2019.
* [56] W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu. Definitions, methods, and applications in interpretable machine learning. _Proceedings of the National Academy of Sciences of the United States of America_, 116(44):22071-22080, 2019.
* [57] J. Nam, J. Kim, J. Lee, and J. Shin. Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation. _arXiv preprint_, 2204.02070, 2022.
* [58] A. Odena and I. Goodfellow. TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing. In _Proceedings of the 36th International Conference on Machine Learning_, pages 8603-8613. International Machine Learning Society (IMLS), 2019.
* [59] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In _Advances in Neural Information Processing Systems 32_, pages 8024-8035, 2019.
* [60] G. Plumb, M. T. Ribeiro, and A. Talwalkar. Finding and Fixing Spurious Patterns with Explanations. _arXiv preprint_, 2106.02112, jun 2021.
* [61] A. Polyak and L. Wolf. Channel-level acceleration of deep face representations. _IEEE Access_, 3:2163-2175, oct 2015.
* [62] M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In _Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pages 1135-1144, 2016.
* [63] L. Rieger, C. Singh, W. J. Murdoch, and B. Yu. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge. In _Proceedings of the 37th International Conference on Machine Learning_, pages 8116-8126, 2019.
* [64] M. Roberts, D. Driggs, M. Thorpe, J. Gilbey, M. Yeung, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. _Nature Machine Intelligence_, 3(3):199-217, 2021.
* [65] A. S. Ross, M. C. Hughes, and F. Doshi-Velez. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations. In _Proceedings of the 26th International Joint Conference on Artificial Intelligence_, pages 2662-2670, 2017.
* [66] C. Rudin. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. _Nature Machine Intelligence_, 1(5):206-215, 2019.
* [67] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. _International Journal of Computer Vision (IJCV)_, 115(3):211-252, 2015.
* [68] S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang. Distributionally Robust Neural Networks. In _Proceedings of the International Conference on Learning Representations_, 2020.
* [69] S. Sagawa, A. Raghunathan, P. W. Koh, and P. Liang. An investigation of why overparameterization exacerbates spurious correlations. In _Proceedings of the 37th International Conference on Machine Learning_, volume 119, pages 8346-8356, 2020.
* [70] W. Samek, G. Montavon, S. Lapuschkin, C. J. Anders, and K.-R. Muller. Explaining deep neural networks and beyond: A review of methods and applications. _Proceedings of the IEEE_, 109(3):247-278, 2021.
* [71] W. Samek, G. Montavon, A. Vedaldi, L. K. Hansen, and K.-R. Muller, editors. _Explainable AI: Interpreting, Explaining and Visualizing Deep Learning_, volume 11700 of _Lecture Notes in Computer Science_. Springer International Publishing, Cham, 2019.
* [72] S. Santurkar, D. Tsipras, and A. Madry. BREEDS: benchmarks for subpopulation shift. _arXiv preprint_, 2008.04859, 2020.
* [73] T. Schnake, O. Eberle, J. Lederer, S. Nakajima, K. T. Schutt, K.-R. Muller, and G. Montavon. Higher-Order Explanations of Graph Neural Networks via Relevant Walks. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(11):7581-7596, 2022.
* [74] P. Schramowski, W. Stammer, S. Teso, A. Brugger, F. Herbert, X. Shao, H. G. Luigs, A. K. Mahlein, and K. Kersting. Making deep neural networks right for the right scientific reasons by interacting with their explanations. _Nature Machine Intelligence_, 2(8):476-486, 2020.
* [75] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. _Int. J. Comput. Vis._, 128(2):336-359, 2020.
* [76] Y. Shen and B. Zhou. Closed-form factorization of latent semantics in gans. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 1532-1540, 2021.
* [77] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, et al. Mastering the game of Go with deep neural networks and tree search. _Nature_, 529(7587):484-489, jan 2016.
* [78] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In _Proceedings of the 3rd International Conference on Learning Representations_, 2015.
* [79] B. Simpson, F. Dutil, Y. Bengio, and J. P. Cohen. GradMask: Reduce Overfitting by Regularizing Saliency. _arXiv preprint_, 1904.07478, 2019.
* [80] N. S. Sohoni, M. Sanjabi, N. Ballas, A. Grover, S. Nie, H. Firooz, and C. Re. BARACK: Partially Supervised Group Robustness With Guarantees. _arXiv preprint_, 2201.00072, 2021.
* [81] E. Sorantin, M. G. Grasser, A. Hemmelmayr, S. Tschauner, F. Hrzic, V. Weiss, J. Laeckova, and A. Holzinger. The augmented radiologist: artificial intelligence in the practice of radiology. _Pediatric Radiology_, 52(11):2074-2086, 2022.
* [82] I. Stepin, J. M. Alonso, A. Catala, and M. Pereira-Farina. A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence, 2021.
* [83] S. Teso and K. Kersting. Explanatory interactive machine learning. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_, pages 239-245, 2019.
* [84] Y. Tian, Z. Zhong, V. Ordonez, G. Kaiser, and B. Ray. Testing DNN image classifiers for confusion & bias errors. In _Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering_, pages 1122-1134, 2020.
* [85] P. Tschandl, C. Rosendahl, and H. Kittler. The HAM10000 Dataset: A Large Collection of Multi-Source Dermatoscopic Images of Common Pigmented Skin Lesions. _Scientific Data_, 5:18016, 2018.
* [86] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is All you Need. In _Advances in Neural Information Processing Systems 30_, pages 5998-6008. Curran Associates, Inc., 2017.
* [87] S. Verma, J. Dickerson, and K. Hines. Counterfactual Explanations for Machine Learning: Challenges Revisited. _arXiv preprint_, 2106.07756, jun 2021.
* [88] J. K. Winkler, C. Fink, F. Toberer, A. Enk, T. Deinlein, R. Hofmann-Wellenhof, L. Thomas, A. Lallas, A. Blum, W. Stolz, and H. A. Haenssle. Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. _JAMA Dermatology_, 155(10):1135, 2019.
* [89] W. Wu, H. Xu, S. Zhong, M. R. Lyu, and I. King. Deep Validation: Toward Detecting Real-World Corner Cases for Deep Neural Networks. In _Proceedings of the 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks_, pages 125-137, 2019.
* [90] L. Wynants, B. Van Calster, G. S. Collins, R. D. Riley, G. Heinze, et al. Prediction models for diagnosis and prognosis of covid-19: Systematic review and critical appraisal. _The BMJ_, 369(8242):m1328, 2020.
* [91] T. Xu, J. White, S. Kalkan, and H. Gunes. Investigating Bias and Fairness in Facial Expression Recognition. In _Computer Vision - ECCV 2020 Workshops_, pages 506-523, 2020.
* [92] Q. Zhang, X. Wang, R. Cao, Y. N. Wu, F. Shi, and S.-C. Zhu. Extraction of an explanatory graph to interpret a CNN. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 43(11):3863-3877, Nov. 2021.
* [93] Y. Zhang, P. Tino, A. Leonardis, and K. Tang. A Survey on Neural Network Interpretability. _IEEE Transactions on Emerging Topics in Computational Intelligence_, 5(5):726-742, 2021.
# Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
(Supplementary Material)
Lorenz Linhardt
Klaus-Robert Muller
Gregoire Montavon
## Supplementary Note A. Decomposition of Attribution Scores

In this supplementary note, we show that, assuming a common neural network with neurons of the type
\[z_{j} =\sum_{i}a_{i}w_{ij}+b_{j}\] \[a_{j} =g(z_{j}), \tag{1}\]
attribution scores associated with the explanation techniques \(\operatorname{Gradient}\times\operatorname{Input}\), Integrated Gradients [8], and layer-wise relevance propagation (LRP) [1; 6] can be decomposed and written in the form:
\[R_{i} =\sum_{j}R_{ij}\] \[R_{ij} =a_{i}\rho(w_{ij})d_{j} \tag{2}\]
where \(\rho:\mathbb{R}\rightarrow\mathbb{R}\) is some function, and \(d_{j}\) is a term that only indirectly depends on the activation \(a_{i}\) and the weight \(w_{ij}\).
### Gradient \(\times\operatorname{Input}\)
Denoting by \(y\) the output of the neural network that we would like to explain, we can write the scores obtained by \(\operatorname{Gradient}\times\operatorname{Input}\) w.r.t. any layer with activations \(\{a_{i}\}_{i=1}^{N}\) as:
\[R_{i} =a_{i}\frac{\partial y}{\partial a_{i}}. \tag{3}\]
Using the chain rule, the equation can be further developed as:
\[=a_{i}\sum_{j}\frac{\partial y}{\partial z_{j}}\frac{\partial z_{j}}{\partial a_{i}}=\sum_{j}\underbrace{a_{i}\,w_{ij}\underbrace{\frac{\partial y}{\partial z_{j}}}_{d_{j}}}_{R_{ij}} \tag{4}\]

where we used \(\partial z_{j}/\partial a_{i}=w_{ij}\). From this we can identify the desired structure of Eq. (2), with \(\rho(w_{ij})=w_{ij}\).
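As a concrete illustration of Eq. (3), a minimal PyTorch sketch of \(\operatorname{Gradient}\times\operatorname{Input}\) is given below. It assumes a differentiable `model` with a scalar output and computes the attribution w.r.t. the input; it is not taken from the paper's code.

```python
import torch

def gradient_x_input(model, x):
    """Gradient x Input attribution: R_i = x_i * dy/dx_i (cf. Eq. (3))."""
    x = x.clone().requires_grad_(True)
    y = model(x).sum()                    # scalar model output assumed
    (grad,) = torch.autograd.grad(y, x)   # dy/dx
    return x.detach() * grad              # elementwise product
```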
### Integrated Gradients
For the Integrated Gradients formulation, we consider as integration path a segment from the origin to the data point (i.e. the map \(t\to t\cdot\mathbf{a}\) with \(t\in[0,1]\)):
\[R_{i}=\int\frac{\partial y}{\partial a_{i}}\frac{\partial a_{i}}{\partial t}dt \tag{5}\]
Using the chain rule, the equation can be further developed as:
\[=\int\sum_{j}\frac{\partial y}{\partial z_{j}}\frac{\partial z_{j}}{\partial a _{i}}\frac{\partial a_{i}}{\partial t}dt \tag{6}\]
\[=\sum_{j}\int\frac{\partial y}{\partial z_{j}}\frac{\partial z_{j}}{\partial a_{i}}\frac{\partial a_{i}}{\partial t}dt \tag{7}\] \[=\sum_{j}\underbrace{a_{i}\,w_{ij}\underbrace{\Big{(}\int\frac{\partial y}{\partial z_{j}}\,dt\Big{)}}_{d_{j}}}_{R_{ij}} \tag{8}\]

where we used \(\partial z_{j}/\partial a_{i}=w_{ij}\) and \(\partial a_{i}/\partial t=a_{i}\), from which we again identify the structure of Eq. (2).
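The integral in Eq. (5) can be approximated numerically. The following sketch uses a simple Riemann sum over the segment \(t\cdot\mathbf{x}\) with a zero baseline; the number of steps is our choice and not prescribed by the text.

```python
import torch

def integrated_gradients(model, x, steps=50):
    """Integrated Gradients along t*x, t in (0, 1] (Riemann approximation
    of Eq. (5)); assumes a scalar model output."""
    total = torch.zeros_like(x)
    for t in torch.linspace(1.0 / steps, 1.0, steps):
        xt = (t * x).detach().requires_grad_(True)
        y = model(xt).sum()
        (grad,) = torch.autograd.grad(y, xt)
        total += grad
    return x * total / steps              # R_i = x_i * mean_t dy/dx_i|_{t*x}
```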
### Layer-wise Relevance Propagation
Starting from a generic LRP rule that admits standard LRP rules such as LRP-0, LRP-\(\gamma\) and LRP-\(\epsilon\) as special cases, specifically:
\[R_{i}=\sum_{j}\frac{a_{i}\rho(w_{ij})}{\sum_{i^{\prime}}a_{i^{ \prime}}\rho(w_{i^{\prime}j})+\epsilon}\cdot R_{j} \tag{9}\]
with \(\rho(t)=t+\gamma\max(0,t)\) and \(\gamma\) nonnegative, we obtain after a slight reordering the equation
\[=\sum_{j}\underbrace{a_{i}\rho(w_{ij})\underbrace{\frac{R_{j}}{ \sum_{i^{\prime}}a_{i^{\prime}}\rho(w_{i^{\prime}j})+\epsilon}}_{d_{j}}}_{R_{ ij}} \tag{10}\]
which has the structure of Eq. (2).
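For concreteness, one propagation step of the generic rule in Eq. (9) through a dense layer can be sketched as follows (our illustration, assuming a 1-D activation vector; setting \(\gamma=0\) recovers the LRP-0/LRP-\(\epsilon\) special cases).

```python
import torch

def lrp_linear(a, W, R_out, gamma=0.25, eps=1e-6):
    """One LRP step of Eq. (9) through a linear layer z = a @ W.
    a: (n_in,) input activations; W: (n_in, n_out); R_out: (n_out,) relevances."""
    rho_W = W + gamma * W.clamp(min=0)    # rho(t) = t + gamma * max(0, t)
    z = a @ rho_W + eps                   # stabilized denominator per unit j
    d = R_out / z                         # the d_j factor of Eq. (10)
    return a * (rho_W @ d)                # R_i = sum_j a_i rho(w_ij) d_j
```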
## Appendix B Derivation of EGEM
In this section we derive a closed-form solution for the EGEM method, which we stated in Section 3 of the main paper as
\[\forall_{ij}:\ w_{ij}=\frac{\mathbb{E}[a_{i}^{2}d_{j}^{2}]}{ \mathbb{E}[a_{i}^{2}d_{j}^{2}]+\lambda\mathbb{E}[d_{j}^{2}]}w_{ij}^{\text{old}} \tag{11}\]
with the purpose of solving the objective
\[\min_{w}\ \sum_{ij}\mathbb{E}\big{[}(R_{ij}(w_{ij})-R_{ij}(w_{ij}^{ \text{old}}))^{2}+\lambda(s_{ij}(w_{ij}))^{2}\big{]} \tag{12}\]
where
\[R_{ij}(w_{ij})=a_{i}\,\rho(w_{ij})\,d_{j} \tag{13}\]
and
\[s_{ij}(w_{ij})=\rho(w_{ij})\,d_{j}. \tag{14}\]
Substituting these last two terms into the objective, we get:
\[\min_{w}\ \sum_{ij}\mathbb{E}\big{[}\big{(}a_{i}\,\rho(w_{ij})\,d_{j}- a_{i}\,\rho(w_{ij}^{\text{old}})\,d_{j}\big{)}^{2}+\lambda\cdot\big{(}\rho(w_{ ij})\,d_{j}\big{)}^{2}\big{]} \tag{15}\]
We observe that each term of the sum depends on its own parameter \(w_{ij}\). Hence, each term can be minimized separately. Consider one such term and compute its gradient:
\[E_{ij}(w_{ij}) =\mathbb{E}\big{[}\big{(}a_{i}\,\rho(w_{ij})\,d_{j}-a_{i}\,\rho(w_{ij}^{\text{old}})\,d_{j}\big{)}^{2}+\lambda\cdot(\rho(w_{ij})\,d_{j})^{2}\big{]} \tag{16}\] \[\nabla E_{ij}(w_{ij}) =\mathbb{E}\big{[}2\big{(}a_{i}\,\rho(w_{ij})\,d_{j}-a_{i}\,\rho(w_{ij}^{\text{old}})\,d_{j}\big{)}\cdot a_{i}\,\rho^{\prime}(w_{ij})\,d_{j}+2\lambda\cdot\rho(w_{ij})\,\rho^{\prime}(w_{ij})\,d_{j}^{2}\big{]} \tag{17}\]
We now find where the gradient is zero. Our derivation uses the fact that \(\rho(w_{ij})\) and its derivative do not depend on the data and can therefore be taken out of the expectation; from the first to the second line we divide by the common factor \(2\rho^{\prime}(w_{ij})\), which is assumed to be nonzero:
\[\mathbb{E}\big{[}2\big{(}a_{i}\,\rho(w_{ij})\,d_{j}-a_{i}\,\rho(w_{ij}^{\text{old}})\,d_{j}\big{)}\cdot a_{i}\,\rho^{\prime}(w_{ij})\,d_{j}+2\lambda\cdot\rho(w_{ij})\,\rho^{\prime}(w_{ij})\,d_{j}^{2}\big{]} \stackrel{{!}}{{=}}0 \tag{18}\] \[\mathbb{E}\big{[}\big{(}a_{i}\,\rho(w_{ij})\,d_{j}-a_{i}\,\rho(w_{ij}^{\text{old}})\,d_{j}\big{)}\cdot a_{i}\,d_{j}+\lambda\cdot\rho(w_{ij})\,d_{j}^{2}\big{]} \stackrel{{!}}{{=}}0\] (19) \[\rho(w_{ij})\,\mathbb{E}[a_{i}^{2}d_{j}^{2}]-\rho(w_{ij}^{\text{old}})\,\mathbb{E}[a_{i}^{2}d_{j}^{2}]+\lambda\,\rho(w_{ij})\,\mathbb{E}[d_{j}^{2}] \stackrel{{!}}{{=}}0\] (20) \[\rho(w_{ij})\,\mathbb{E}[a_{i}^{2}d_{j}^{2}]+\lambda\cdot\rho(w_{ij})\,\mathbb{E}[d_{j}^{2}] \stackrel{{!}}{{=}}\rho(w_{ij}^{\text{old}})\,\mathbb{E}[a_{i}^{2}d_{j}^{2}]\] (21) \[\rho(w_{ij}) \stackrel{{!}}{{=}}\frac{\mathbb{E}[a_{i}^{2}d_{j}^{2}]}{\mathbb{E}[a_{i}^{2}d_{j}^{2}]+\lambda\cdot\mathbb{E}[d_{j}^{2}]}\cdot\rho(w_{ij}^{\text{old}}) \tag{22}\]
Furthermore, the equation above also implies
\[w_{ij}\stackrel{{!}}{{=}}\frac{\mathbb{E}[a_{i}^{2}d_{j}^{2}]}{ \mathbb{E}[a_{i}^{2}d_{j}^{2}]+\lambda\cdot\mathbb{E}[d_{j}^{2}]}\cdot w_{ij} ^{\text{old}} \tag{23}\]
for the choices of function \(\rho\) encountered in Supplementary Note A.
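In practice, the closed-form solution of Eq. (23) amounts to computing a per-connection shrinkage factor from second-moment statistics over the refinement data. Below is a minimal numpy sketch; the array names and shapes are our assumptions, not the released implementation.

```python
import numpy as np

def egem_coefficients(A, D, lam):
    """Per-connection shrinkage factors c_ij of Eq. (23), so that
    w_ij = c_ij * w_ij_old.
    A: activations a_i on the refinement data, shape (n_samples, n_in)
    D: relevance terms d_j on the refinement data, shape (n_samples, n_out)
    """
    a2, d2 = A ** 2, D ** 2
    E_a2d2 = a2.T @ d2 / len(A)        # E[a_i^2 d_j^2], shape (n_in, n_out)
    E_d2 = d2.mean(axis=0)             # E[d_j^2], shape (n_out,)
    return E_a2d2 / (E_a2d2 + lam * E_d2)
```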
## Supplementary Note C. Data
### Supplementary Note C.1. Modified MNIST
The modified MNIST is a variant of the original MNIST dataset [4] where a small 3-pixel artifact is pasted onto the top left corner of 70% of the images of the digit '8'. In order to generate a somewhat natural split of the data where one part is free of the artifact, we train a variational autoencoder [3] on the artifact-modified dataset to model the underlying distribution. Then, we manually select a single dimension and threshold along this dimension to separate the data into two partitions, such that both partitions contain all digits, but the modified samples only fall onto one side. This partitioning defines the pool of clean samples from which to draw the refinement data.
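The artifact insertion can be sketched in a few lines; note that the exact pixel positions and intensity of the 3-pixel artifact are assumptions on our part.

```python
import numpy as np

def add_artifact(images, labels, digit=8, p=0.7, value=255, rng=None):
    """Paste a 3-pixel artifact into the top-left corner of a fraction p
    of the images of the given digit. images: (n, 28, 28) uint8 array."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images = images.copy()
    for i in np.flatnonzero(labels == digit):
        if rng.random() < p:
            images[i, 0, 0] = images[i, 0, 1] = images[i, 1, 0] = value
    return images
```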
### Supplementary Note C.2. Correlation Structure in CelebA
In this section we present the correlation structure of the attributes in the training partition of the CelebA dataset [5], which is essential when reasoning about Clever Hans effects since they are based on spurious correlations in the training data. Fig. 1 displays the respective correlation matrix. Notably, 'Male' and 'Blond_Hair' are negatively correlated, while 'Male' and 'Wearing_Necktie', 'Goatee', and 'Sideburns' are positively correlated, which could lead 'Male'-related visual features to be used as inhibitory signal by models trained to detect blond hair on this dataset.
## Supplementary Note D. Response-Guided Exposure Minimization
Consider the problem of learning a low-complexity model that reproduces the output of some original model on the validated data. Let \(f(\mathbf{x},\mathbf{w})\) and \(f(\mathbf{x},\mathbf{w}_{\text{old}})\) be the output of the student and teacher models respectively, parameterized by weights \(\mathbf{w}\) and \(\mathbf{w}_{\text{old}}\). We can formulate the objective as the optimization problem:
\[\min_{\mathbf{w}}\quad\mathbb{E}[(f(\mathbf{x},\mathbf{w})-f(\mathbf{x},\mathbf{w}_{\text{old}}))^ {2}]+\lambda\|\mathbf{w}\|^{2} \tag{24}\]
The first term encourages the new model to reproduce the original model output on the refinement data. The second term, with regularization parameter \(\lambda\), penalizes overall model exposure, i.e., it forces the model to not be too complex. For the linear case, where the teacher model and student model are given by \(f(\mathbf{x},\mathbf{w}_{\text{old}})=\mathbf{w}_{\text{old}}{}^{\top}\mathbf{x}\) and \(f(\mathbf{x},\mathbf{w})=\mathbf{w}^{\top}\mathbf{x}\) respectively, we get the closed-form solution:
\[\mathbf{w}=(\Sigma+\lambda I)^{-1}\Sigma\mathbf{w}_{\text{old}} \tag{25}\]
where \(\Sigma=\mathbb{E}[\mathbf{x}\mathbf{x}^{\top}]\). This equation resembles the ridge regression solution; the difference is that the cross-covariance between data and targets \(\mathbb{E}[\mathbf{x}y]\) in the original model is replaced by the term \(\Sigma\mathbf{w}_{\text{old}}\). This term realigns the pretrained model's weights along the validation data. This realignment with the refinement data desensitizes the model to directions in feature space that are not expressed in the available data and that the user could not verify, giving some level of immunity against a possible CH effect in the classifier. For neural networks with a final linear projection layer, response-guided exposure minimization (RGEM) can be applied to this last layer.
We can rewrite Eq. (25) as ridge regression on the predictions \(f(\mathbf{x},\mathbf{w}_{\text{old}})\):
\[\mathbf{w}=(\mathbf{X}^{T}\mathbf{X}+\lambda I)^{-1}\mathbf{X}^{T}f(\mathbf{X},\mathbf{w}_{\text{old}}) \tag{26}\]
where \(f(\mathbf{X},\mathbf{w}_{\text{old}})\) is the vector of outputs of the original model on the refinement data.
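A minimal numpy sketch of the closed-form solution in Eqs. (25)/(26) is given below, assuming a single linear output; `X` and `w_old` are illustrative names.

```python
import numpy as np

def rgem(X, w_old, lam):
    """Closed-form RGEM solution of Eq. (25): w = (Sigma + lam*I)^{-1} Sigma w_old.
    X: refinement data, shape (n_samples, n_features); w_old: (n_features,)."""
    Sigma = X.T @ X / len(X)              # empirical Sigma = E[x x^T]
    return np.linalg.solve(Sigma + lam * np.eye(len(Sigma)), Sigma @ w_old)
```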
Figure 1: Correlation matrix of attributes in the CelebA training data.
## Supplementary Note E Models: Training Procedure
In this section, we provide the details of the training procedure relevant for reproducing the models used in our experiments in Section 4 of the main paper.
For the experiments on ImageNet and the ISIC dataset, we use the pretrained VGG-16 network, as provided in the pytorch library [7]. For the ISIC dataset, only keep the first two output nodes and fine-tune the network for 10 epochs with learning rate 0.0001 and batch size 64. For the experiments on MNIST, we train a neural network specified in Table 1 for 5 epochs with learning rate 0.001 and batch size 128. For the experiments on the CelebA dataset, we train a neural network specified in Table 2 for 10 epochs with a learning rate of 0.000025 and a batch size of 100. For all training procedures, the Adam optimizer [2] is used.
## Supplementary Note F. User Verification and Experimental Details
In the following we provide details of the simulated user verification procedure and of the generation of the poisoned data sets, where the relation between the class and the spurious features is modified.
### Supplementary Note F.1. Simulation of User-Verification
Generally, we would consider samples to be user-verified if they are correctly predicted by the model and if a human has inspected and agrees with the corresponding explanation.
For the experiments in Section 4 of the main paper we assumed that all correctly classified clean samples are user-verified and thus just removed samples containing the artifact from the refinement data. For the experiments related to the class _mountain bike_, we only retained samples not containing a frame. This is easily automated by checking multiple pixels along the border for agreement with the gray value of the artifact. As it is challenging to automatically filter out images containing the _carton_-related watermarks, we did so manually for the corresponding experiments and permitted any correctly classified image without the spurious feature to be used for refinement. For the ISIC dataset, we manually removed samples containing the colored patches from the data after training. For the experiments on CelebA in Section 5 of the main paper, as the task was more open and no specific Clever Hans feature was targeted, we simulated user-verification by checking for agreement of an LRP-based explanation [1; 6] of the model's decision with an expected location of hair features. To this end we defined a mask, seen in Fig. 2, which roughly corresponds to where we expect features that a user would use to judge a celebrity's hair color to lie. We considered samples to be user-verified if at least 75% of the absolute LRP relevance for the correct class lies within this mask.
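The acceptance criterion for CelebA can be written compactly as follows; `R` and `mask` are illustrative names, and the 75% threshold follows the text.

```python
import numpy as np

def is_user_verified(R, mask, threshold=0.75):
    """Accept a sample if at least `threshold` of the absolute relevance
    falls inside the (boolean) hair-region mask of the same shape as R."""
    total = np.abs(R).sum()
    return total > 0 and np.abs(R[mask]).sum() / total >= threshold
```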
| ID | Type | Channels | Kernel | Stride |
| --- | --- | --- | --- | --- |
| 1 | Conv. (ReLU) | 8 | 3 | 1 |
| 2 | Max-pool | - | 2 | - |
| 3 | Conv. (ReLU) | 16 | 5 | 1 |
| 4 | Max-pool | - | 2 | - |
| 5 | FC (ReLU) | 200 | - | - |
| 6 | FC (Identity) | 10 | - | - |

Table 1: Neural network architecture used for the experiments on the MNIST dataset.
| ID | Type | Channels | Kernel | Stride |
| --- | --- | --- | --- | --- |
| 1-11 | 3 VGG-16 blocks | - | - | - |
| 12 | Conv. (ReLU) | 128 | 3 | 1 |
| 13 | Adaptive max-pool | - | - | - |
| 14 | FC (ReLU) | 512 | - | - |
| 15 | FC (Identity) | 2 | - | - |

Table 2: Neural network architecture used for the experiments on the CelebA dataset.
### Supplementary Note F.2. Task Selection and Preprocessing
We will first describe the choice of classes the ImageNet experiments are based on, and then provide further information about the way the individual test samples for the different poisoning scenarios have been manipulated.
The classes to be discriminated were selected based on their similarity to the target classes for which artifacts were identified. Similarity was measured by the mean absolute difference of the soft-max output (see Table 3). The selection was done using all training samples of the respective target class. We adopted this selection scheme because natural spurious signals cannot be expected to have a strong enough effect to facilitate significant confusion between classes that are only weakly related.
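One plausible reading of this similarity metric is sketched below; the exact reduction over samples and classes is our assumption and may differ from the authors' computation.

```python
import numpy as np

def mean_softmax_distance(probs, target_class):
    """Mean absolute difference between each class's soft-max output and the
    target class's output, averaged over the target-class training samples.
    probs: soft-max outputs, shape (n_samples, n_classes)."""
    diff = np.abs(probs - probs[:, [target_class]])
    return diff.mean(axis=0)              # lower value = more similar class
```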
In the following, we describe the process of manipulating individual data points to shift the distribution of spurious artifacts and create the poisoned dataset for evaluation. The poisoning level denotes the probability with which any data point in the clean evaluation dataset is manipulated.
For the class _mountain bike_, we resize the original image to fit into the frame-shaped artifact. The modified image consists of the frame and the shrunken image. We prefer the resizing approach over simply pasting the frame onto the image, as it avoids removing potential class evidence near the border of the image. Furthermore, we removed the one mountain-bike image from the test set that exhibits the frame artifact.
For the _carton_ class, we paste the watermark and URL occurring on images of this class to the center and bottom right, respectively, of the image to be modified. In particular, we manually recreated the URL artifact to look like the ones found in the dataset, but without any additional background. Similarly, the watermark artifact we used was cleaned of its background and pasted with transparency.
Due to the small number of evaluation samples in the ImageNet dataset (50 per class, i.e. the official validation set), we apply data augmentation in order to obtain more robust estimates. We do this by adding a horizontally flipped version of each sample to the test set.
### Supplementary Note F.3. Refinement Hyper-parameters
In this section we provide the hyper-parameter space we searched over for each refinement method, as well as some additional technical details.
Figure 2: Example of an image from the CelebA dataset. The highlighted area is the mask within which relevance must be concentrated to be accepted as correctly explained.
\begin{table}
\begin{tabular}{|c|l|l|c|} \hline ID & Class Name & Target & Distance \\ \hline
519 & crate & carton & 0.685 \\
549 & envelope & carton & 0.691 \\
692 & packet & carton & 0.695 \\
444 & bicycle-built-for-two & mountain bike & 0.834 \\ \hline \end{tabular}
\end{table}
Table 3: Mean absolute distance to the target class, calculated from the soft-max outputs of a pretrained ResNet50 network.
Retraining. For the Retrain baseline, all layers of the pretrained models are fine-tuned on the available refinement data using the Adam optimizer [2] without further regularization. We use 20% of the refinement data (rounded up) as validation set and optimize over the number of epochs \(N_{e}\in\{1,5,10,20,30,50,100\}\) for all datasets. The learning rates are set as follows. MNIST: \(1\cdot 10^{-3}\), ImageNet (ResNet50): \(5\cdot 10^{-6}\), ImageNet (VGG-16): \(5\cdot 10^{-5}\), ISIC: \(1\cdot 10^{-7}\). We fine-tuned the networks with frozen batch-norm parameters and gradients clipped to \(10^{-3}\).
RGEM and Ridge. We optimize over the regularization parameter \(\lambda\in\{0.0001,\,0.001,\,0.01,\,1,\,10,\,100,\,1000\}\). The difference between the two baselines is that Ridge regresses the true labels \(y\) of the refinement samples, whereas RGEM regresses the outputs \(\hat{y}\) of the original model.
EGEM and PCA-EGEM. We optimize over the refinement parameter \(\alpha\in\{0.00001,\,0.0001,\,0.001,\,0.01,\,0.1,\,0.2,\,0.3,\,0.4,\,0.5,\,0.6,\,0.7,\,0.8,\,0.9,\,1\}\). The activations to be refined are the ones immediately after every ResNet50 or VGG-16 block for the parts of the models that are derived from those architectures, and additionally after every ReLU activation following a fully connected or convolutional layer outside of such blocks.
## Supplementary Note G. Varying Slack
In this section we report the 0%-poisoning and 100%-poisoning results when varying the value of the slack variable in the hyper-parameter selection for all models in Fig. 3. Low slack means that we require the clean-data accuracy of the refined model to be close to that of the unrefined model, whereas larger slack allows for increasingly stronger deviations from this base accuracy. Larger slack implies stronger refinement. Additionally, we show the progression of the average performance with increasing slack in Fig. 4. The 5% slack level chosen in the main body of the paper can be taken as a rule of thumb, which appears to lead to effective refinement without sacrificing too much clean-data accuracy in most cases. Yet, it can be seen that the optimal level depends on the method and the dataset. For example, at 5% slack PCA-EGEM loses about 4% accuracy on clean and poisoned MNIST data, whereas it would be able to perfectly refine the model without loss of accuracy at the 0% level.
It can also be observed in Fig. 3 that the optimal slack value on the various ImageNet tasks is not uniform, which is surprising, since all tasks containing the 'carton' class are poisoned with the same artifact. We assume that this effect arises from the other class losing critical features at different refinement levels. Fig. 3 demonstrates that PCA-EGEM leads to the most effective refinement over a large number of slack values - the advantage over other methods being even more apparent in settings with lower slack values than the 5% presented in the main body of the paper.
## Supplementary Note H. Varying Samples
In this section we report the 0%-poisoning and 100%-poisoning results when varying the number of samples available for refinement and hyper-parameter selection for all models in Fig. 5.
## Supplementary Note I. Additional Experiments on CelebA
In this section we report additional results for the analysis of PCA-EGEM on the CelebA dataset [5].
### Supplementary Note I.1. Precision and Recall
We here present the precision and recall - with and without application of PCA-EGEM - for all subgroups in the CelebA test data that are induced by the attributes provided in the dataset. In Fig. 6 it can be seen that for a majority of subgroups the effects are negligible, but for a minority of the groups the application of PCA-EGEM results in large changes, in particular in recall. The effect on precision is comparably small and in most cases negative, as the strongest CH features appear to be inhibitive. Interestingly, for the
subgroups induced by the attributes 'Wearing_Hat', and 'Blurry', where we see a reduction in recall, we also see increased precision, which may be explained by the classifier relying on spurious features when the hair is not clearly visible.
### Supplementary Note I.2. Validation of the Collar Clever Hans Feature on CelebA
To corroborate our observation that shirt collars are a CH feature, we created a counterfactual dataset where the bottom section of the images is occluded by an image of a wall (see Fig. 7) and repeated our analysis with occlusion instead of refinement. Indeed, we can observe that occlusion of the bottom section of the images improves recall for the 'Wearing_Necktie', 'Sideburns', 'Goatee', and 'Male' groups. Furthermore, the recall of the 'Wearing_Hat' group is reduced - this is most likely due to the fact that for some samples the blond part of the hair would only be visible in the now-occluded part of the image, as it is covered by a hat in the upper part of the image. Overall, these results support our observation of collars being inhibitive CH features.
|
2310.19285 | Facilitating Graph Neural Networks with Random Walk on Simplicial
Complexes | Node-level random walk has been widely used to improve Graph Neural Networks.
However, there is limited attention to random walk on edge and, more generally,
on $k$-simplices. This paper systematically analyzes how random walk on
different orders of simplicial complexes (SC) facilitates GNNs in their
theoretical expressivity. First, on $0$-simplices or node level, we establish a
connection between existing positional encoding (PE) and structure encoding
(SE) methods through the bridge of random walk. Second, on $1$-simplices or
edge level, we bridge edge-level random walk and Hodge $1$-Laplacians and
design corresponding edge PE respectively. In the spatial domain, we directly
make use of edge level random walk to construct EdgeRWSE. Based on the spectral
analysis of Hodge $1$-Laplcians, we propose Hodge1Lap, a permutation
equivariant and expressive edge-level positional encoding. Third, we generalize
our theory to random walk on higher-order simplices and propose the general
principle to design PE on simplices based on random walk and Hodge Laplacians.
Inter-level random walk is also introduced to unify a wide range of simplicial
networks. Extensive experiments verify the effectiveness of our random
walk-based methods. | Cai Zhou, Xiyuan Wang, Muhan Zhang | 2023-10-30T06:03:34Z | http://arxiv.org/abs/2310.19285v1 | # Facilitating Graph Neural Networks with Random Walk on Simplicial Complexes
###### Abstract
Node-level random walk has been widely used to improve Graph Neural Networks. However, there is limited attention to random walk on edges and, more generally, on \(k\)-simplices. This paper systematically analyzes how random walk on different orders of simplicial complexes (SC) facilitates GNNs in their theoretical expressivity. First, on \(0\)-simplices or node level, we establish a connection between existing positional encoding (PE) and structure encoding (SE) methods through the bridge of random walk. Second, on \(1\)-simplices or edge level, we bridge edge-level random walk and Hodge \(1\)-Laplacians and design corresponding edge PE respectively. In the spatial domain, we directly make use of edge-level random walk to construct EdgeRWSE. Based on the spectral analysis of Hodge \(1\)-Laplacians, we propose Hodge1Lap, a permutation equivariant and expressive edge-level positional encoding. Third, we generalize our theory to random walk on higher-order simplices and propose a general principle to design PE on simplices based on random walk and Hodge Laplacians. Inter-level random walk is also introduced to unify a wide range of simplicial networks. Extensive experiments verify the effectiveness of our random walk-based methods.
## 1 Introduction
Graph neural networks (GNNs) have recently achieved great success in tasks with graph-structured data, benefiting many application areas, including combinatorial optimization, bioinformatics, social-network analysis, etc. [11; 29; 16]. Two important aspects to evaluate GNN models are their theoretical expressivity in distinguishing non-isomorphic graphs and their performance on real-world tasks. Positional encoding (PE) and structure encoding (SE) are widely adopted methods to enhance both the theoretical expressivity and the real-world performance of GNNs. Generally, PE encodes the information of the nodes' local or global positions, while SE provides information about local or global structures in the graph. For example, Kreuzer et al. [30] use eigenvectors of the graph Laplacian, Dwivedi et al. [18] propose to use the diagonal elements of the \(t\)-step random walk matrix, and Bouritsas et al. [9] manually count some predefined structures. There are also some methods based on pair-wise node distances, such as the shortest path distance [31], the heat kernel [20], and the graph geodesic [35]. Although some work theoretically analyzes some of these methods [51], others are left out, and a unified perspective on all these PE and SE designs is lacking. Moreover, most existing methods focus only on node data, while PE and SE on edge data as well as higher-order topological structures remain to be studied.
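For instance, the random-walk structure encoding of Dwivedi et al. [18] can be sketched as follows; this is a minimal numpy version assuming an unweighted adjacency matrix, and the original implementation may differ in details.

```python
import numpy as np

def rwse(A, t_max):
    """Random-walk structure encoding: for each node, the return probabilities
    diag(P^t) for t = 1..t_max, where P = D^{-1} A is the transition matrix."""
    deg = A.sum(axis=1)
    P = A / np.maximum(deg, 1)[:, None]   # row-normalized adjacency
    Pt = np.eye(len(A))
    feats = []
    for _ in range(t_max):
        Pt = Pt @ P
        feats.append(np.diag(Pt).copy())
    return np.stack(feats, axis=1)        # shape (n_nodes, t_max)
```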
In addition to PE and SE, geometric deep learning has recently become a central topic. Researchers are inspired by concepts of differential geometry and algebraic topology, which has resulted in many works on simplices and simplicial complexes [8; 7; 47]. Despite their capability to deal with higher-order structures, these simplicial networks must respect orientation symmetry, which brings difficulties to their application to undirected graphs. This work connects these two separate areas via a central
concept: random walk on simplicial complexes. On the one hand, by introducing concepts of higher-order simplicial complexes, we can design more PE and SE methods that are both theoretically and practically powerful. On the other hand, PE and SE greatly facilitate simplicial data and benefit graph learning.
In summary, we first connect a number of existing PE and SE methods through the bridge of node-level random walk on \(0\)-simplices. Then, for \(1\)-simplices or edges, we design two novel sign and basis invariant edge-level PE and SE, namely EdgeRWSE and Hodge1Lap. EdgeRWSE uses an edge-level random walk directly to capture structure information, while Hodge1Lap is based on spectral analysis of the Hodge \(1\)-Laplacian, which is closely related to random walk on edges. We further generalize our theory to random walk on higher-order and inter-order simplices to facilitate graph and simplicial learning. Our methods achieve state-of-the-art or highly competitive performance on several datasets and benchmarks. Code is available at [https://github.com/zhouc20/HodgeRandomWalk](https://github.com/zhouc20/HodgeRandomWalk).
## 2 Related work
Theoretical expressivity and Weisfeiler-Lehman test.Weisfeiler-Lehman tests are a classical family of algorithms to distinguish non-isomorphic graphs. Previous work has built connections between the expressivity of GNNs and the WL hierarchy. A classical conclusion is that for \(k\geq 2\), \(k+1\)-dimensional WL is more powerful than \(k\)-WL. [46] proves that traditional message-passing neural networks (MPNNs) are not more powerful than \(1\)-WL. There is another variation of the WL test called the Folklore Weisfeiler-Lehman (FWL) test, and \(k\)-FWL is equivalent to \(k+1\)-WL in expressivity for \(k\geq 1\).
Symmetry in graph and simplicial learning.Symmetry is a central topic in graph and simplicial learning. In graph learning, node features and edge features need to be permutation (i.e., relabeling of nodes or edges) equivariant, while graph features should be permutation invariant. In simplicial learning, one further needs to consider orientation symmetry [47] in an oriented simplicial complex (SC). The incidence relations and the simplicial adjacencies in an oriented SC are altered when the orientations are reversed. The \(k\)-form remains invariant to this transformation, while the features of \(k\)-simplices are equivariant in terms of the basis. [32] also states the standard that graph-level functions (and, in the context of SC, \(k\)-forms) should be invariant to both sign and basis (either of orientation or of space), which is a basic rule for our PE and SE designs.
## 3 Preliminary
Graphs.We denote a graph as \(G(V,E,A)\), where \(V,E\) is the set of nodes and the set of edges, respectively, and \(A\) is the adjacency matrix for the nodes. For convenience, we use \(n=|V|\) and \(m=|E|\) to represent the number of nodes and edges in the graph \(G(V,E,A)\). In an undirected graph, for any \(u,v\in V\), we have \((u,v)\in E\Leftrightarrow(v,u)\in E\). Let \(\mathcal{N}(v,G)=\{u\in V|(u,v)\in E\}\) denote the set of neighbors of node \(v\) in graph \(G\). Let diagonal matrix \(D=diag(d_{1},...,d_{n})\), where \(d_{i}\) is the degree of node \(v_{i}\).
The transition matrix of a typical random walk at node level is \(P=D^{-1}A\), which indicates that in each step the walk moves from the current node \(v\) to one of its neighboring nodes \(u\in\mathcal{N}(v,G)\) with equal probabilities. Consequently, a \(t\) step of the aforementioned random walk corresponds to a transition matrix \(P^{t}\).
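To ground the notation, the following minimal NumPy sketch builds \(P=D^{-1}A\) and a \(t\)-step transition matrix \(P^{t}\); the adjacency matrix below is a hypothetical toy example, not taken from the paper.

```python
import numpy as np

# hypothetical 4-node toy graph given by its adjacency matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A   # one-step transition matrix P = D^{-1} A
P3 = np.linalg.matrix_power(P, 3)      # 3-step transition probabilities P^3
assert np.allclose(P3.sum(axis=1), 1)  # every row remains a distribution
```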
Discrete Hodge Laplacian of abstract simplicial complex.An abstract simplicial complex \(\mathcal{K}\) on a finite set \(V\) is a collection of subsets of \(V\) that is closed under inclusion. In our paper, \(V\) will be a vertex set \([n]=\{1,2,...,n\}\) if without special statement. An element of cardinality \(k+1\) is called a \(k\)-face or \(k\)-simplex of \(\mathcal{K}\). For instance, \(0\)-faces are usually called vertices, \(1\)-faces are directed edges, and \(2\)-faces are 3-cliques (triangles) with an orientation. We denote the collection of all \(k\)-faces of \(\mathcal{K}\) as \(S_{k}(\mathcal{K})\). The dimension of a \(k\)-face is \(k\), and the dimension of a complex \(\mathcal{K}\) is defined as the maximum dimension of the faces in \(\mathcal{K}\).
The definition of neighbors of simplices is crucial in this paper. Two \(k+1\)-simplices sharing a common \(k\)-face are called \(k\)-down neighbors, and two \(k\)-simplices sharing a common \(k+1\)-simplex are called \(k+1\)-up neighbors. Generally, a face \(F\) is given an ordering on its vertices and is said
to be oriented, denoted by \([F]\). For any permutation element \(\sigma\in\mathcal{G}_{k+1}\) where \(\mathcal{G}_{k+1}\) is the symmetric group of permutations on \(\{0,...,k\}\), two orders of vertices transformed by \(\sigma\) are said to determine the same orientation if \(\sigma\) is an even permutation and opposite if \(\sigma\) is odd.
In Hilbert space, the matrix representations of the boundary and coboundary operators are adjacency matrices between order \(k\) and order \(k+1\) simplices. To stay consistent with most existing literature, we write the adjacency matrix of the \(k\)-th and \(k+1\)-th simplices as \(\mathbf{B}_{k+1}\in\mathbb{R}^{|S_{k}|\times|S_{k+1}|}\). \(\mathbf{B}_{k+1}[i,j]=1\) if the \(i\)-th \(k\)-simplex and the \(j\)-th \(k+1\)-simplex are adjacent and share the same direction, \(\mathbf{B}_{k+1}[i,j]=-1\) if they are adjacent with opposite directions, and \(0\) if they are not adjacent. For example, \(\mathbf{B}_{1}\) is the node-to-edge incidence matrix.
In discrete Hodge-deRham theory, the \(k\)-th order Hodge Laplacian is defined as
\[\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k +1}^{*} \tag{1}\]
where \(\mathbf{B}_{k}^{*}=\mathbf{B}_{k}^{T}\) is the adjoint of \(\mathbf{B}_{k}\) and is equivalent to the transpose of \(\mathbf{B}_{k}\) in Hilbert space. A special case is that when \(k=0\), \(\mathbf{B}_{0}\) is not defined and \(\mathbf{L}_{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{*}=\mathbf{D}-\mathbf{A}\) is exactly the graph Laplacian. We refer readers to Appendix C.2.2 for an illustrative calculation example of Hodge Laplacians. In our following texts, we will make use of higher-order Hodge Laplacians such as \(\mathbf{L}_{1}\) rather than previously used \(\mathbf{L}_{0}\) alone.
The kernel space of \(\mathbf{L}_{k}\) is called the \(k\)-th cohomology group: \(\tilde{\mathcal{H}}^{k}(\mathcal{K},\mathbb{R}):=\ker(\mathbf{B}_{k+1}^{*})/\mathrm{im}(\mathbf{B}_{k}^{*})\cong\ker(\mathbf{B}_{k+1}^{*})\cap\ker(\mathbf{B}_{k})=\ker(\mathbf{L}_{k})\). We will write \(\tilde{\mathcal{H}}^{k}(\mathcal{K},\mathbb{R})\) simply as \(\tilde{\mathcal{H}}^{k}\) without causing confusion. The kernel spaces of Hodge Laplacians are closely associated with harmonic functions and will play an important role in our following analysis. In particular, the multiplicity of the zero eigenvalues of \(\mathbf{L}_{k}\), or the dimension of the null space \(\ker(\mathbf{L}_{k})\) of the Hodge \(k\)-Laplacian, is called the \(k\)-th Betti number \(\beta_{k}\)[23]. This is exactly the number of cycles composed of \(k\)-simplices that are not induced by a \(k\)-boundary, or intuitively, the number of \(k\)-dimensional "holes" in the simplicial complex \(\mathcal{K}\). For example, the zero eigenvalues and their eigenvectors of \(\mathbf{L}_{0}\) are associated with the \(0\)-th cohomology group of the graph, corresponding to the connected components of the graph. The zero eigenvalues and eigenvectors of \(\mathbf{L}_{1}\) are associated with cycles (in the usual sense), and those of \(\mathbf{L}_{2}\) correspond to cavities. We refer readers to Appendix C.2.2 for detailed explanations and illustrative examples of cohomology groups.
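As an illustration of these definitions, the following sketch builds the Hodge Laplacians of a small hypothetical complex (a filled triangle \((0,1,2)\) plus a hollow cycle \(1\)-\(2\)-\(3\)) from hand-coded incidence matrices and reads off the Betti numbers from the zero eigenvalues.

```python
import numpy as np

# nodes {0,1,2,3}; edges (0,1),(1,2),(0,2),(2,3),(1,3) oriented small -> large;
# the triangle (0,1,2) is filled, while the cycle 1-2-3 stays hollow
B1 = np.array([[-1,  0, -1,  0,  0],
               [ 1, -1,  0,  0, -1],
               [ 0,  1,  1, -1,  0],
               [ 0,  0,  0,  1,  1]], dtype=float)   # node-to-edge incidence
B2 = np.array([[1, 1, -1, 0, 0]], dtype=float).T     # edge-to-triangle incidence
assert np.allclose(B1 @ B2, 0)     # boundary of a boundary vanishes

L0 = B1 @ B1.T                     # graph Laplacian L_0 = D - A
L1 = B1.T @ B1 + B2 @ B2.T         # Hodge 1-Laplacian (Eq. (1) with k = 1)
for k, L in enumerate((L0, L1)):
    betti = int(np.sum(np.abs(np.linalg.eigvalsh(L)) < 1e-8))
    print(f"beta_{k} = {betti}")   # beta_0 = 1 component, beta_1 = 1 hollow cycle
```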
## 4 Random walk on 0-simplices
Random walk on \(0\)-simplices, or at node level, has been studied systematically. Previous work has established a comprehensive analysis of the theoretical properties of node-level random walk, which provides theoretical insights into the design of random walk-based methods. However, there is still limited research on the theoretical expressivity of random walk-based positional encoding (PE) and structure encoding (SE) methods. In this section, we establish connections between several PE and SE methods and node-level random walk, and provide theoretical expressive power bounds for them.
RWSE.[52] and Dwivedi et al. [18] propose a structure encoding method based on node-level random walk, which we denote as RWSE. Concretely, RWSE considers \(K\) steps of random walk at the node level of the graph, obtaining \(\mathbf{P},\mathbf{P}^{2},...,\mathbf{P}^{K}\). Then the method only takes into account each node's return probabilities to itself, i.e., the diagonal elements of \(\mathbf{P}^{k},k=1,2,...,K\). For each node \(v_{i}\), the RWSE feature is \(h_{i}^{RWSE}=[(\mathbf{P})_{ii},(\mathbf{P}^{2})_{ii},\ldots,(\mathbf{P}^{K})_{ii}]\). Compared with encoding methods based on graph Laplacian eigenvalues and eigenvectors, this method is sign and basis invariant. It internally captures some structure information within \(K\) hops and achieves impressive results in experiments [38]. However, there are limited investigations on the theoretical expressivity of RWSE and its extensions. Here, we provide a theoretical bound of positional and structure encoding methods based on the random walk transition matrix \(\mathbf{P}\).
**Theorem 4.1**.: _RWSE is strictly less powerful than \(2\)-FWL, i.e. \(\text{RWSE}\prec 2\)-FWL._
The above expressivity bound holds because \(2\)-FWL can simulate matrix multiplication and injective transformations of a matrix, including the adjacency matrix \(\mathbf{A}\). Therefore, \(2\)-FWL is capable of obtaining \(\mathbf{P}^{k},k\in\mathbb{N}\). Specifically, a block of PPGN [34] can simulate one matrix multiplication. Moreover, RWSE is strictly less expressive than \(2\)-FWL, since it loses much structure information when taking only the diagonal elements of \(\mathbf{P}^{k}\). In other words, RWSE is a summary of the full random walk transition probabilities (in the spatial domain), which accelerates calculation at the cost of losing expressivity.
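For concreteness, the following is a minimal sketch of how such an RWSE feature can be computed; running it on a triangle graph serves as a sanity check, since the three symmetric nodes receive identical features.

```python
import numpy as np

def rwse(A, K):
    """RWSE feature h_i = [P_ii, (P^2)_ii, ..., (P^K)_ii] for every node i."""
    P = np.diag(1.0 / A.sum(axis=1)) @ A
    Pk, feats = np.eye(len(A)), []
    for _ in range(K):
        Pk = Pk @ P
        feats.append(np.diag(Pk).copy())   # keep only the return probabilities
    return np.stack(feats, axis=1)         # shape (n, K)

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # a triangle
print(rwse(A, K=3))   # three identical rows: the nodes are symmetric
```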
Resistance distance and random walk.In addition to RWSE, there are a number of positional encoding methods closely related to the node-level random walk. Prior works [2; 51] connect the commute time of random walks with effective resistance in electrical networks, which can be used as a PE method called resistance distance (RD). Zhang et al. [51] prove that RD and the shortest path distance (SPD) [31] are both upper-bounded by 2-FWL in expressive power.
Positive definite kernels based on graph Laplacian spectrum.The graph Laplacian, or Hodge \(0\)-Laplacian as we refer to it later, is closely connected with random walk on graphs. The definition of the graph Laplacian is \(\mathbf{L}_{0}=\mathbf{D}-\mathbf{A}=\delta_{0}^{*}\delta_{0}=\Delta_{0}\). Through the spectrum of \(\mathbf{L}_{0}\), we are able to define a family of positive definite kernels on graphs [42] by applying a regularization function \(r\) to the spectrum of \(\mathbf{L}_{0}\): \(K_{r}=\sum_{i=1}^{n}r(\lambda_{i})\mathbf{u}_{i}\mathbf{u}_{i}^{T}\), where \(\mathbf{L}_{0}=\sum_{i}\lambda_{i}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\) is the eigenvalue decomposition. For example, the heat kernel or diffusion kernel [20] is recovered with \(r(\lambda_{i})=e^{-\beta\lambda_{i}}\). Other methods directly use eigenvectors as PE [30]. These results imply that spectral analysis of graph Laplacians can also inspire more powerful PE and SE, and we will generalize the graph Laplacian \(\mathbf{L}_{0}\) to arbitrary-order Hodge \(k\)-Laplacians in the following section to facilitate graph learning.
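As a sketch of this construction, the diffusion kernel with \(r(\lambda)=e^{-\beta\lambda}\) can be assembled directly from the eigendecomposition of \(\mathbf{L}_{0}\); the graph and the value \(\beta=0.5\) below are illustrative assumptions.

```python
import numpy as np

def spectral_kernel(A, r):
    # K_r = sum_i r(lambda_i) u_i u_i^T built from the graph Laplacian L_0
    L0 = np.diag(A.sum(axis=1)) - A
    lams, U = np.linalg.eigh(L0)
    return U @ np.diag(r(lams)) @ U.T

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
heat = spectral_kernel(A, lambda lam: np.exp(-0.5 * lam))     # beta = 0.5
print(np.round(heat, 3))   # symmetric positive definite diffusion kernel
```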
## 5 Random walk on 1-simplices
While node-level random walk has been widely studied, edge-level random walk is still underexplored. In this section, we will first introduce the Hodge \(1\)-Laplacian \(\mathbf{L}_{1}\), as well as its connection with random walk on \(1\)-simplices (in the lifted space) and thus edges of an undirected graph. Analogous to node-level RWSE, we introduce EdgeRWSE, a more theoretically powerful PE for edges. Furthermore, we systematically analyze the spectra of \(\mathbf{L}_{1}\) and propose a novel Hodge1Lap PE, the first sign and basis invariant edge-level positional encoding that makes use of the spectra of \(\mathbf{L}_{1}\) instead of the previously adopted \(\mathbf{L}_{0}\) only.
### Normalized Hodge-1 Laplacian and edge-level random walk
Theoretical analysis of edge-level random walk.The standard Hodge \(k\)-Laplacian is \(\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{*}\), and there are a number of normalized Hodge Laplacians, because the normalization is rather flexible. Schaub et al. [41] propose a normalized form of the Hodge \(1\)-Laplacian \(\mathbf{L}_{1}\) with a clear interpretation as a random walk in the lifted edge space. Concretely,
\[\mathbf{\tilde{L}_{1}}=\mathbf{D}_{2}\mathbf{B}_{1}^{*}\mathbf{D}_{1}^{-1} \mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{D}_{3}\mathbf{B}_{2}^{*}\mathbf{D}_{2}^{ -1} \tag{2}\]
where \(\mathbf{D}_{2}\) is the diagonal matrix with adjusted degrees of each edge \(\mathbf{D}_{2}=\max(diag(|\mathbf{B}_{2}|\mathbf{1}),I)\), \(\mathbf{D}_{1}\) is the diagonal matrix of weighted degree of nodes \(\mathbf{D}_{1}=2\cdot diag(|\mathbf{B}_{1}|\mathbf{D}_{2}\mathbf{1})\), and \(\mathbf{D}_{3}=\frac{1}{3}\mathbf{I}\).
To interpret this normalized Hodge \(1\)-Laplacian \(\mathbf{\tilde{L}_{1}}\), Schaub et al. [41] introduce a lifted space of edges, where the original \(m=|S_{1}|\) directed edges are lifted to \(2m\) directed edges. For example, if \((i,j)\in S_{1}\), then we add \((j,i)\) to the lifted space. Consequently, the edge flow \(\mathbf{f}\in\mathcal{C}^{1}\) expands to a larger space \(\mathcal{D}^{1}\) where there are two orientations for each edge, \(|\mathcal{D}^{1}|=2|\mathcal{C}^{1}|\). The matrix representation for this lifting procedure is \(\mathbf{V}=[+\mathbf{I}_{m}\quad-\mathbf{I}_{m}]^{T}\in\mathbb{R}^{2m\times m}\). The probability transition matrix \(\hat{\mathbf{P}}\) for this lifted random walk corresponding to \(\mathbf{\tilde{L}_{1}}\) is then defined through \(-\frac{1}{2}\mathbf{\tilde{L}_{1}}\mathbf{V}^{T}=\mathbf{V}^{T}\hat{\mathbf{P}}\). In practice, we also perform a simpler row-wise normalization over \(\mathbf{L}_{1}\) to obtain another form of probability transition matrix.
Using \(\mathbf{\hat{P}}\), we can construct an edge-level random walk-based PE method to enrich edge data by encoding structure information, analogous to node-level RWSE. We will also discuss some variations and simplified versions of the aforementioned random walk on \(1\)-simplices and theoretically analyze their expressivity.
EdgeRWSE.Similar to node-level random walk, a well-defined edge-level random walk contains structure information and can be used to enrich edge data, namely as an edge-level positional encoding. While node-level positional encodings have been widely studied, edge-level positional encoding is a nearly blank field.
Inspired by (node-level) RWSE, EdgeRWSE is based on edge-level random walk. The full version of EdgeRWSE is based on the full edge-level random walk as stated above and in [41]. For undirected graphs, two edges with opposite directions \((i,j)\) and \((j,i)\) are again merged by summing
the two probabilities, that is, the lifted space \(\mathcal{D}^{1}\) is mapped back to \(\mathcal{C}^{1}\). Generally speaking, PE can be based on any injection functions \(\psi\) in \(\hat{\mathbf{P}}\) and its powers.
\[\mathrm{EdgeRWSE}(\hat{\mathbf{P}})_{i}=\psi([\hat{\mathbf{P}}^{k}]),\quad k=1,2,\ldots,K \tag{3}\]
where \(K\) is the maximum number of steps we consider. One possible example is to encode the return probability of each edge, which is written \(\mathrm{EdgeRWSE}_{\mathrm{ret}}(\hat{\mathbf{P}})_{i}=\psi([(\hat{\mathbf{P}}^{k})_{ii}]),k=1,2,\ldots,K\). If \(\psi\) is well defined, the theoretical expressivity of the full EdgeRWSE above is able to break the \(2\)-FWL bottleneck of node-level RWSE. In practice, we can apply neural networks like MLPs or Transformers to encode \(\hat{\mathbf{P}}^{k}\) and concatenate the result with the original edge features. Then any standard GNN is applicable for downstream tasks. If the GNN is at least as powerful as \(1\)-FWL, then the GNN with EdgeRWSE is strictly more powerful than \(1\)-FWL and can distinguish some non-isomorphic graph pairs on which \(2\)-FWL fails.
In addition to the edge-level random walk in the lifted space of \(1\)-simplices in [41], we further define two simplified versions of the edge-level random walk that use only lower adjacency. We neglect the \(2\)-simplices, or triangles, in our simplified random walk, i.e., we only consider the \(1\)-down neighbors that share a \(0\)-simplex (node). In this way, \(\hat{\mathbf{P}}\) becomes \(\mathbf{P}_{down}\). This simplification leads to a theoretically weaker expressivity than using the full \(\hat{\mathbf{P}}\), which will be bounded by \(2\)-FWL. However, it is appropriate and beneficial for real-world data that contain a small number of triangles. We illustrate these two variations on undirected connected graphs without multiple edges and self-loops for simplicity.
The two variations of edge-level random walk via down-neighbors differ in whether two lower adjacent nodes of the edge have the same status. Concretely, the first type of edge-level random walk based on \(\mathbf{P}_{down}\), which we define as _directed \(1\)-down random walk_ follows a two-stage procedure at every step. The walk first selects one of the two lower-adjacent nodes with equal probability \(0.5\) each, then moves towards the neighboring edges connected with the selected node with equal probabilities. If there are no other edges connected to the selected node, the walk returns to the original edge. On the other hand, the second type, which we denote as _undirected \(1\)-down random walk_, chooses the two nodes \(u,v\) with probabilities proportional to their degrees minus one (since we want to exclude the case of returning to \(e\) itself). Consequently, the walk transits to all \(1\)-down neighbors of the source edge with equal probabilities.
In a similar way to the full EdgeRWSE, we propose two simplified versions of EdgeRWSE based on directed \(1\)-down and undirected \(1\)-down random walk, both of which can be implemented in a rather flexible way. As a special case, the return probabilities of each edge after \(k=1,\ldots,K\) steps are encoded, but note again that this is not the only implementation choice; a minimal sketch of this variant follows.
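The sketch below implements the directed \(1\)-down variant on a hypothetical toy edge list: \(\mathbf{P}_{down}\) follows the two-stage rule above (pick an endpoint with probability \(1/2\), then move to a co-incident edge uniformly, returning to the edge itself at a dead end), and the diagonals of its powers yield the return-probability features.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # hypothetical toy graph
m = len(edges)
incident = {}                               # incident[v] = edges touching node v
for idx, (u, v) in enumerate(edges):
    incident.setdefault(u, []).append(idx)
    incident.setdefault(v, []).append(idx)

P_down = np.zeros((m, m))
for idx, (u, v) in enumerate(edges):
    for w in (u, v):                        # choose an endpoint w.p. 1/2
        nbrs = [j for j in incident[w] if j != idx]
        if nbrs:
            for j in nbrs:                  # uniform move to a co-incident edge
                P_down[idx, j] += 0.5 / len(nbrs)
        else:
            P_down[idx, idx] += 0.5         # dead end: stay on the edge

K, Pk, feats = 4, np.eye(m), []
for _ in range(K):
    Pk = Pk @ P_down
    feats.append(np.diag(Pk).copy())        # return probabilities after k steps
edge_rwse = np.stack(feats, axis=1)         # shape (m, K), one row per edge
```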
We conclude by summarizing the expressivity of EdgeRWSE.
**Theorem 5.1**.: _Full EdgeRWSE can distinguish some non-isomorphic graphs that are indistinguishable by \(2\)-FWL. EdgeRWSE based on directed and undirected \(1\)-down random walk is not more powerful than \(2\)-FWL._
### Sign and basis invariant edge-level positional encoding
Theoretical analysis of Hodge 1-Laplacian spectrum.Recall that the unnormalized Hodge 1-Laplacian is \(\mathbf{L}_{1}=\mathbf{B}_{1}^{T}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{T}=\mathbf{L}_{1,down}+\mathbf{L}_{1,up}\). Here, we analyze the theoretical properties of the Hodge 1-Laplacian, including its spectrum, which provides solid insights into our following designs.
Note that previous simplicial networks [12; 47; 8; 7] are orientation equivariant and permutation equivariant; thus, they can only be applied to simplicial complexes where all edges are directed. This is frustrating if we want to boost general learning on graphs rather than simplicial complexes alone. However, the spectral analysis of Hodge \(1\)-Laplacian is applicable to undirected graphs. An important property of Hodge Laplacians is that their eigenvalues are invariant to permutation and orientation (if the simplices are oriented), thus they could be directly applied to analyze undirected graphs. Hence in this section, we temporarily omit discussion on permutation and orientation invariance since they naturally hold. Instead, we care more about the sign and basis invariance in the field of spectral analysis [32].
We can show that the nonzero eigenvalues of \(\mathbf{L}_{1,down}\) are the same as those of \(\mathbf{L}_{0,up}\) and hence \(\mathbf{L}_{0}\). This implies that if there are no \(2\)-simplices (triangles), the Hodge \(1\)-Laplacian has the same nonzero
eigenvalues as Hodge \(0\)-Laplacian. However, the corresponding eigenvectors still provide different information about the nodes and edges, respectively.
**Theorem 5.2**.: _The number of non-zero eigenvalues of Hodge \(1\)-Laplacian \(L_{1}\) is not less than the number of non-zero eigenvalues of Hodge \(0\)-Laplacian \(L_{0}\)._
One direct conclusion is that a graph isomorphism test based on Hodge 1-Laplacian isospectrality is strictly more powerful than one based on the Hodge 0-Laplacian. Here we draw a conclusion on the theoretical expressivity of \(L_{1}\) isospectrality:
**Theorem 5.3**.: \(L_{1}\) _isospectrality is incomparable with \(1\)-FWL and \(2\)-FWL._
Rattan and Seppelt [39] show that \(L_{0}\) isospectrality is strictly bounded by \(2\)-FWL. \(L_{1}\) isospectrality, through the introduction of \(2\)-simplices (triangles), can distinguish some non-isomorphic graph pairs that are indistinguishable by \(2\)-FWL. See Appendix C for detailed examples.
The zero eigenvalues of \(L_{1}\) have some more important properties. Their multiplicity is the first Betti number \(\beta_{1}\), which is exactly the number of cycles (except triangles) in the graph. We further consider the eigenvectors of \(L_{1}\): each eigenvector \(\mathbf{u}_{i}\) of eigenvalue \(\lambda_{i}\) has length \(m\), and each element \(\mathbf{u}_{ij}\) reflects the weight of the corresponding edge \(e_{j}\) at the frequency \(\lambda_{i}\). The absolute values of the elements corresponding to edges in cycles are non-zero, while edges not in cycles have zero weights in the eigenvectors. In other words, the eigenvectors of the zero eigenvalues efficiently mark the edges that lie on a cycle. A more intuitive illustration and theoretical proofs are given in Appendix C.2.2.
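The following sketch illustrates this cycle-marking property on a hypothetical square cycle with a pendant edge: the kernel eigenvector of \(L_{1}\) is supported exactly on the four cycle edges.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)]  # square cycle + pendant edge
B1 = np.zeros((5, len(edges)))
for j, (u, v) in enumerate(edges):
    B1[u, j], B1[v, j] = -1.0, 1.0
L1 = B1.T @ B1                  # no triangles here, so L_1 = B_1^T B_1
lams, U = np.linalg.eigh(L1)    # ascending eigenvalues, L_1 is PSD
u0 = U[:, 0]                    # eigenvector of the single zero eigenvalue
print(np.round(np.abs(u0), 3))  # non-zero on the 4 cycle edges, 0 on (3,4)
```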
Hodge1Lap: sign and basis invariant edge PE.In this section, we propose Hodge1Lap, a novel edge-level positional encoding method based on the spectral analysis of Hodge 1-Laplacian. To the best of our knowledge, this is the first sign and basis invariant edge-level PE based on Hodge \(1\)-Laplacian \(L_{1}\).
Recall the geometric meaning of the Hodge \(1\)-Laplacian spectrum in Section 5.2. Zero eigenvalues and their eigenvectors reflect the cycles in the graph. These insights into the Hodge \(1\)-Laplacian spectrum shed light on our design for edge-level positional encoding. Denote the eigenvalue \(\lambda_{i}\) with multiplicity \(m_{i}\) as \(\lambda_{i(1)},\lambda_{i(2)},\ldots,\lambda_{i(m_{i})}\), respectively. The corresponding eigenvectors are \(\mathbf{u}_{i(1)},\ldots,\mathbf{u}_{i(m_{i})}\), but note that these eigenvectors are: (i) not sign invariant, since if \(L_{1}\mathbf{u}_{i(j)}=0,j=1,...,m_{i}\), then \(L_{1}(-\mathbf{u}_{i(j)})=0\); (ii) not basis invariant if \(m_{i}>1\), since any \(m_{i}\) linearly independent basis vectors of the kernel space are also eigenvectors, and the subspace they span is identical to the kernel space. This is analogous to the \(L_{0}\) eigenvectors: they are not sign and basis invariant, which makes it difficult to design sign and basis invariant positional encodings. Therefore, we propose a novel projection-based method to build Hodge1Lap, a sign and basis invariant edge-level positional encoding.
Formally, Hodge1Lap processes the eigenvalues \(\lambda_{i}\) with multiplicity \(m_{i}\) and relevant eigenvectors as follows. Recall the projection matrix
\[P_{proj,i}=\mathbf{U}\mathbf{U}^{T}=\sum_{j=1}^{m_{i}}\mathbf{u}_{i(j)} \mathbf{u}_{i(j)}^{T} \tag{4}\]
where the subscript \({}_{proj}\) is used to distinguish the projection matrix from probability transition matrix \(P\), and \(\mathbf{U}=[\mathbf{u}_{i(1)},\ldots,\mathbf{u}_{i(m_{i})}]\). For any vector \(\mathbf{v}\in\mathbb{R}^{m}\), \(P_{proj,i}\mathbf{v}\) projects it into the subspace spanned by the eigenvectors \(u_{i(j)},j=1,\ldots,m_{i}\). It is straightforward to verify that the projection in the subspace is independent of the choice of basis \(u_{i(j)}\) as long as they are linearly independent and hence is both sign and basis invariant. As long as the preimage \(\mathbf{v}\) is well defined (e.g., permutation equivariant to edge index), the projection can satisfy permutation equivariance as well as sign and basis invariance. In Hodge1Lap, we propose to use two different forms of preimages: a unit vector \(\mathbf{e}\in\mathbb{R}^{m}\) with each element \(\mathbf{e}_{j}=\frac{1}{\sqrt{m}}\), and the original edge feature \(\mathbf{X}(E)\in\mathbb{R}^{m\times d}\). The first variant considers pure structure information, while the second variant jointly encodes structure and feature information. Taking the first variant as an example, Hodge1Lap implemented by projection can be formulated as
\[\mathrm{Hodge1Lap_{proj}}(E)=\sum_{i}\phi_{i}(P_{proj,i}\mathbf{e}) \tag{5}\]
where \(\phi_{i}\) are injective functions and can be replaced by MLP layers, and the summation is performed over the interested eigen-subspaces.
In addition to the projection-based implementation of Hodge1Lap, we also implement other variants (analogously to the implementation of LapPE [30]): (i) We use a shared MLP \(\phi\) to directly embed the \(n_{eigen}\) eigenvectors corresponding to the smallest \(n_{eigen}\) eigenvalues, where \(n_{eigen}\) is a hyper-parameter shared for all graphs. We refer this implementation as \(\mathrm{Hodge1Lap_{sim}}(E)=\sum_{i=1}^{n_{eigen}}\phi(\mathbf{u}_{i})\). (ii) We take the absolute value of each element in eigenvectors before passing them to the MLP, which we denote as \(\mathrm{Hodge1Lap_{abs}}(E)=\sum_{i=1}^{n_{eigen}}\phi(|\mathbf{u}_{i}|)\), where \(|\cdot|\) means taking element-wise absolute value. It is remarkable that, while \(\mathrm{Hodge1Lap_{proj}}\) is sign-invariant and basis-invariant, \(\mathrm{Hodge1Lap_{sim}}\) is not invariant to both sign and basis, and \(\mathrm{Hodge1Lap_{abs}}\) is sign-invariant yet not basis-invariant. We also allow combination of the above implementations; see Appendix E for more implementation details.
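A minimal sketch of the projection-based variant \(\mathrm{Hodge1Lap_{proj}}\) is given below; eigen-subspaces are grouped by a numerical tolerance, the preimage is the unit vector \(\mathbf{e}\), and the injective maps \(\phi_{i}\) are left as the identity (in practice they would be MLPs).

```python
import numpy as np

def hodge1lap_proj(L1, tol=1e-8):
    """Project e = 1/sqrt(m) onto each eigen-subspace of L_1 (Eqs. (4)-(5) sketch)."""
    m = L1.shape[0]
    lams, U = np.linalg.eigh(L1)       # ascending eigenvalues, orthonormal columns
    e = np.full(m, 1.0 / np.sqrt(m))   # sign/basis-free preimage vector
    groups = [[0]]
    for i in range(1, m):              # cluster (near-)equal eigenvalues
        if abs(lams[i] - lams[groups[-1][0]]) < tol:
            groups[-1].append(i)
        else:
            groups.append([i])
    feats = []
    for g in groups:
        Ug = U[:, g]
        feats.append(Ug @ (Ug.T @ e))  # basis-invariant projection of e
    return np.stack(feats, axis=1)     # (m, #subspaces); fed to MLPs downstream
```

Since each column is a projection onto a whole eigen-subspace, it is unchanged by sign flips or any change of basis within that subspace, which is precisely the invariance argued above.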
Our Hodge1Lap has elegant geometric meanings thanks to the spectral properties of \(L_{1}\). For example, the kernel space of \(L_{1}\) related to the zero eigenvalues is fully capable of **detecting cycles and rings** in graphs [23], which can play a significant role in many domains. In molecular graphs, for example, cycle structures such as benzene rings have crucial effects on molecular properties. Hodge1Lap is able to extract such rings in a natural way rather than manually listing them, and \(\mathrm{Hodge1Lap_{abs}}\) is able to differentiate edges from distinct cycles. Intuitively, according to the Hodge decomposition theorem, any vector field defined on edges \(\mathcal{C}^{1}\) can be decomposed into three orthogonal components: a solenoidal component, a gradient component and a harmonic (both divergence-free and curl-free) component; see Appendix A. \(\ker(\mathbf{L}_{1})\) is the harmonic component, and since divergence-free and curl-free edge flows can only appear on cycles, the eigenvectors corresponding to \(\ker(\mathbf{L}_{1})\) therefore mark out the cycles in the graph; see Appendix C.2.2 for more technical details and illustrative examples. Moreover, taking into account more subspaces other than the kernel space of \(L_{1}\), Hodge1Lap contains other structure information since the eigenvectors are real and continuous vectors. Ideally, one can apply any sign and basis invariant functions to obtain a universal approximator [32] for functions on \(1\)-faces besides projections, see Section 6 for general conclusions.
## 6 Random walk on higher-order and inter-order simplices
In Section 4 and Section 5, we systematically analyze the random walk and Hodge Laplacian-based PE and SE on \(0\)-simplices (node level) and \(1\)-simplices (edge level), respectively. As we have shown, introducing higher-order simplices into random walk benefits the theoretical expressivity. In this section, we formally introduce random walks on higher-order simplices and analyze their expressivity. We will also investigate the spectral analysis of Hodge \(k\)-Laplacians, whose normalized forms are closely related to random walks on \(k\)-simplices. Besides random walk within same-order simplices, we define a novel inter-order random walk that is able to transition between different orders of simplices. This random walk scheme incorporates and unifies a wide range of simplicial networks [12; 8; 14].
### Higher-order Hodge Laplacians and random walk
The \(k\)-th order Hodge Laplacian is defined as \(\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k +1}^{*}=\mathbf{L}_{k,down}+\mathbf{L}_{k,up}\). Analogous to \(\mathbf{L}_{1}\), a properly normalized Hodge \(k\) Laplacian \(\mathbf{\tilde{L}_{k}}\) corresponds to a \(k\)-th order random walk on \(k\)-simplices in the lifted space. The matrix representation for the lifting is \(\mathbf{V}_{k}=\left[+\mathbf{I}_{n_{k}}\quad-\mathbf{I}_{n_{k}}\right]^{T} \in\mathbb{R}^{2n_{k}\times n_{k}}\), where \(n_{k}=|S_{k}|\) is the number of \(k\)-simplices in the simplicial complex \(\mathcal{K}\). For undirected graphs, one only needs to sum over different orientations to get the cochain group \(\mathcal{C}^{k}\) from \(\mathcal{D}^{k}\), where \(|\mathcal{D}^{k}|=2|\mathcal{C}^{k}|\) is the cochain group in the lifted space. The transition matrix \(\hat{\mathbf{P}_{k}}\) for \(k\)-th order random walk is defined through \(-\frac{1}{2}\mathbf{\tilde{L}_{k}}\mathbf{V}_{k}^{T}=\mathbf{V}_{k}^{T}\hat{ \mathbf{P}_{k}}\).
Similarly to the edge-level random walk in the lifted space, the transition matrix \(\hat{\mathbf{P}_{k}}\) describes that each step of the \(k\)-th order random walk moves towards either \(k\)-down neighbors or \(k\)-up neighbors. When going through the upper adjacent \(k+1\) faces, the walk uniformly transits to an upper adjacent \(k\)-simplex with a different orientation relative to the shared \(k+1\) face, unless it has no upper adjacent faces. If the step is taken towards a lower-adjacent \(k-1\) face, the walk transits along or against the original direction to one of its \(k\)-down neighbors.
Based on \(\hat{\mathbf{P}_{k}}\), we can design \(k\)-th order RWSE for \(k\)-simplicial data according to the \(k\)-th order random walk, \(k-\mathrm{RWSE}=\psi_{k}(\hat{\mathbf{P}_{k}})\), where \(\psi_{k}\) is an injective function that acts on either \(\hat{\mathbf{P}_{k}}\) or its polynomials. If we maintain all \(k\)-RWSE for \(k=0,1,\ldots,K\) in a simplicial complex \(\mathcal{K}\) with
dimension larger than \(K\), then we can get a more powerful algorithm by adding \(K+1\)-RWSE to the \(K+1\)-simplices in \(\mathcal{K}\).
In addition to directly making use of the random walk on \(k\)-simplices, spectral analysis of \(\mathbf{L}_{k}\) also sheds light on PE designs for higher-order simplicial data. Based on the eigenvalues and eigenvectors of \(\mathbf{L}_{k}\), we can build permutation equivariant and basis invariant functions defined on \(\mathcal{K}_{k+1}\) that can simulate an arbitrary \(k\)-cochain or \(k\)-form. Concretely, if we use the normalized version of the \(k\)-th Hodge Laplacian \(\Delta_{k}\) as in [24], the eigenvalues of \(\Delta_{k}\) lie in the compact interval \(0\leq\lambda\leq k+2\). Then, applying a permutation equivariant and basis-invariant function such as _Unconstrained BasisNet_ [32] to the eigenvalues and eigenvectors, we are able to approximate any \(k\)-form, which is basis-invariant. We refer interested readers to Appendix C.3 for more details.
### Inter-order random walk
The concept of random walk can even be generalized to a more universal version, which we denote as the inter-order random walk. In each step, the inter-order random walk at a \(k\)-simplex can transit not only to \(k\)-down and \(k\)-up neighbors (which are all \(k\)-simplices as well), but also to lower adjacent \(k-1\)-simplices and upper adjacent \(k+1\)-simplices. Here we denote the (unnormalized) adjacency matrix for the inter-order random walk on a \(K\)-order simplicial complex \(\mathcal{K}\) as \(\mathcal{A}_{K}(\mathcal{K})\), which is defined as
\[\mathcal{A}_{K}(\mathcal{K})=\begin{bmatrix}\mathbf{L}_{0}&\mathbf{B}_{1}&&&\\ \mathbf{B}_{1}^{T}&\mathbf{L}_{1}&\mathbf{B}_{2}&&\\ &\ddots&\ddots&\ddots&\\ &&\mathbf{B}_{K-1}^{T}&\mathbf{L}_{K-1}&\mathbf{B}_{K}\\ &&&\mathbf{B}_{K}^{T}&\mathbf{L}_{K}\end{bmatrix} \tag{6}\]
which is a block matrix with \(\mathbf{L}_{k}\) in the \(k\)-th diagonal block, \(\mathbf{B}_{k}^{T}\) and \(\mathbf{B}_{k+1}\) in the offset \(\pm 1\) diagonal blocks, while all other blocks are zero. Although Chen et al. [14] also mention a similar block matrix, they do not give a concrete form of the off-diagonal blocks. The inter-order adjacency matrix we define has a clear physical interpretation: a walker can only move to simplices of different orders that are boundaries or co-boundaries of the current simplex. A properly normalized version \(\tilde{\mathcal{A}}_{K}\) can describe the inter-order random walk with a certain rule. Here, we give a property of the powers of \(\mathcal{A}_{K}\) which still holds in normalized versions.
\[\mathcal{A}_{K}^{r}=\begin{bmatrix}p_{r}(\mathbf{L}_{0})&q_{r-1}(\mathbf{L}_{0,up})\mathbf{B}_{1}&&\\ q_{r-1}(\mathbf{L}_{1,down})\mathbf{B}_{1}^{T}&p_{r}(\mathbf{L}_{1})&q_{r-1}(\mathbf{L}_{1,up})\mathbf{B}_{2}&\\ &\ddots&\ddots&\ddots\\ &&q_{r-1}(\mathbf{L}_{K,down})\mathbf{B}_{K}^{T}&p_{r}(\mathbf{L}_{K})\end{bmatrix} \tag{7}\]
where \(p_{r}(\cdot)\) and \(q_{r}(\cdot)\) are polynomials with maximum order \(r\). The above equation states that simplices whose orders differ by more than one cannot directly exchange information even after arbitrarily many steps, because \(\mathbf{B}_{k+1}\mathbf{B}_{k+2}=\mathbf{0}\) (the boundary of a boundary vanishes), but they can affect each other through the coefficients in \(p_{r}\) and \(q_{r-1}\) in the blocks on the offset \(\pm 1\)-diagonals.
Several previous works such as [8] can be unified by \(\mathcal{A}_{K}\). Additionally, we can make use of \(\mathcal{A}_{K}^{r}\) to build a random walk-based positional encoding, rich in information, for all simplices in the \(K\)-dimensional simplicial complex.
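A sketch of how the unnormalized block matrix in Eq. (6) can be assembled from the incidence matrices \(\mathbf{B}_{1},\ldots,\mathbf{B}_{K}\) follows; it is a direct transcription of the block structure, not an optimized implementation.

```python
import numpy as np

def inter_order_adjacency(Bs):
    """Bs = [B_1, ..., B_K]; returns the block matrix A_K of Eq. (6)."""
    K = len(Bs)
    sizes = [Bs[0].shape[0]] + [B.shape[1] for B in Bs]   # |S_0|, ..., |S_K|
    offs = np.cumsum([0] + sizes)
    A = np.zeros((offs[-1], offs[-1]))
    for k in range(K + 1):
        down = Bs[k - 1].T @ Bs[k - 1] if k > 0 else 0.0
        up = Bs[k] @ Bs[k].T if k < K else 0.0
        A[offs[k]:offs[k + 1], offs[k]:offs[k + 1]] = down + up   # L_k block
        if k < K:                                                 # off-diagonals
            A[offs[k]:offs[k + 1], offs[k + 1]:offs[k + 2]] = Bs[k]
            A[offs[k + 1]:offs[k + 2], offs[k]:offs[k + 1]] = Bs[k].T
    return A
```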
## 7 Experiments
In this section, we present a comprehensive ablation study on Zinc-12k to investigate the effectiveness of our proposed methods. We also verify the performance on graph-level OGB benchmarks. Due to limited space, experiments on synthetic datasets and more real-world datasets, as well as experimental details, are presented in Appendix E.
**Ablation study on Zinc-12k.** Zinc-12k [17] is a popular real-world dataset containing 12k molecules. The task is graph-level molecular property (constrained solubility) regression. In our ablation study, we use GINE [25], GAT [45], PNA [15], SSWL+ [50], GPS [38] and GRIT [33] as our base models, where the first three are message-passing based GNNs, SSWL+ is an instance of subgraph GNN, while GPS and GRIT are recent SOTA graph transformers. Four different factors are studied: (1) the node-level PE or SE, where RWSE refers to [18], LapPE refers to [30] and
"-" suggests no node-level PE/SE; (2) EdgeRWSE, the edge-level SE based on spatial domain of \(1\)-down random walk, where "directed" and "undirected" are used to distinguish the two types of simplified version of \(1\)-down random walk; (3) Hodge1Lap, the edge-level PE based on spectra of \(\mathbf{L}_{1}\), where "abs" refers to the sign-invariant method (summing over absolute values of eigenvectors, or \(\mathrm{Hodge1Lap_{abs}}\)), and "project" refers to the sign and basis invariant method (project the unit vector into interested subspace, or \(\mathrm{Hodge1Lap_{proj}}\)); (4) RWMP, a novel Random Walk Message Passing scheme we propose, which performs message passing based on probability calculated by a distance metric; see Appendix D for details of RWMP.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline model & Node PE/SE & EdgeRWSE & Hodge1Lap & RWMP & Test MAE \\ \hline GIN [46] & - & - & - & - & \(0.526\pm 0.051\) \\ GSN [9] & - & - & - & - & \(0.101\pm 0.010\) \\ Graphormer [48] & - & - & - & - & \(0.122\pm 0.006\) \\ SAN [30] & - & - & - & - & \(0.139\pm 0.006\) \\ GIN-AK+ [53] & - & - & - & - & \(0.080\pm 0.001\) \\ CIN [7] & - & - & - & - & \(0.079\pm 0.006\) \\ Specformer [6] & - & - & - & - & \(0.066\pm 0.003\) \\ \hline GINE [25] & - & - & - & - & \(0.133\pm 0.002\) \\ GINE & - & directed & - & - & \(0.110\pm 0.003\) \\ GINE & - & undirected & - & - & \(0.104\pm 0.008\) \\ GINE & - & - & abs & - & \(0.102\pm 0.004\) \\ GINE & - & - & project & - & \(0.091\pm 0.004\) \\ GINE & LapPE & - & - & - & \(0.120\pm 0.005\) \\ GINE & RWSE & - & - & - & \(0.074\pm 0.003\) \\ GINE & RWSE & directed & - & - & \(0.070\pm 0.003\) \\ GINE & RWSE & undirected & - & - & \(0.069\pm 0.002\) \\ GINE & RWSE & - & abs & - & \(0.068\pm 0.003\) \\ GINE & RWSE & - & project & - & \(0.068\pm 0.004\) \\ GINE & RWSE & - & - & True & \(0.068\pm 0.003\) \\ GINE & RWSE & - & project & True & \(0.066\pm 0.003\) \\ \hline GINE & RWSE & Full-EdgeRWSE & - & - & \(0.069\pm 0.003\) \\ GINE & Inter-RWSE & Inter-RWSE & - & - & \(0.083\pm 0.006\) \\ GINE & RWSE & Cellular & - & - & \(0.068\pm 0.003\) \\ \hline GAT [45] & - & - & - & - & \(0.384\pm 0.007\) \\ GAT & - & undirected & - & - & \(0.163\pm 0.008\) \\ GAT & - & - & project & - & \(0.130\pm 0.005\) \\ \hline PNA [15] & - & - & - & - & \(0.188\pm 0.004\) \\ PNA & - & undirected & - & - & \(0.104\pm 0.004\) \\ PNA & - & - & project & - & \(0.074\pm 0.005\) \\ \hline SSWL+ [50] & - & - & - & - & \(0.070\pm 0.005\) \\ SSWL+ & - & undirected & - & - & \(0.067\pm 0.005\) \\ SSWL+ & - & - & project & - & \(0.066\pm 0.003\) \\ \hline GPS [38] & - & - & - & - & \(0.113\pm 0.005\) \\ GPS & RWSE & - & - & - & \(0.070\pm 0.004\) \\ GPS & RWSE & undirected & - & - & \(0.068\pm 0.004\) \\ GPS & RWSE & - & project & - & \(0.064\pm 0.003\) \\ \hline GRIT [33] & - & - & - & - & \(0.149\pm 0.008\) \\ GRIT & RWSE & - & - & - & \(0.081\pm 0.010\) \\ GRIT & SPDPE & - & - & - & \(0.067\pm 0.002\) \\ GRIT & RDPE & - & - & - & \(0.059\pm 0.003\) \\ GRIT & RRWP & - & - & - & \(0.059\pm 0.002\) \\ GRIT & - & undirected & - & - & \(0.103\pm 0.006\) \\ GRIT & - & - & project & - & \(0.086\pm 0.005\) \\ GRIT & RRWP & undirected & - & - & \(0.058\pm 0.002\) \\ GRIT & RRWP & - & project & - & \(0.057\pm 0.003\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation on Zinc-12k dataset [17] (MAE \(\downarrow\)). Highlighted are the first, second results.
The full results on the Zinc dataset are reported in Table 1. Note that all our base models are improved when augmented with our EdgeRWSE or Hodge1Lap: both GAT and PNA reduce MAE by over \(50\%\). In particular, a simple GINE without any transformer or subgraph GNN variations is able to surpass GPS with our PE/SE, verifying the impressive effectiveness of our proposed methods. Applying EdgeRWSE and Hodge1Lap to GRIT results in new **State-of-the-Art** performance. Regarding the ablation, all variants of our EdgeRWSE and Hodge1Lap improve the performance of the base models; see Appendix E for more implementation details of these variants. One may observe that RWSE is significantly beneficial in this task, and combining node-level RWSE with our edge-level PE/SE methods leads to a further performance gain. In general, Hodge1Lap shows better performance than EdgeRWSE, indicating the effectiveness of embedding structures such as rings through spectral analysis. The effect of whether EdgeRWSE is directed, or of the implementation method in Hodge1Lap, is rather small. We also observe that Full-EdgeRWSE, Inter-RWSE, and CellularRWSE are beneficial; see Appendix E for more details. Additionally, the RWMP mechanism is also capable of improving performance, which we analyze in Appendix D.
Experiments on OGB benchmarks.We also verify the performance of EdgeRWSE and Hodge1Lap on graph-level OGB benchmarks, including the ogbg-molhiv and ogbg-molpcba datasets. The results are shown in Table 2. We apply our Hodge1Lap and EdgeRWSE to both GatedGCN and GPS (which combines GatedGCN with a Transformer) and show that our methods can improve both architectures. In general, both edge-level PE/SE achieve performance comparable to the SOTA models, though EdgeRWSE suffers from overfitting on ogbg-molhiv. It should be noted that SOTA results on ogbg-molhiv typically involve manually crafted structures, as in GSN [9] and CIN [7]. Natural methods and complex models usually suffer from overfitting and cannot generalize well on the test set.
## 8 Conclusions
In this paper, we propose to facilitate graph neural networks through the lens of random walk on simplicial complexes. The random walk on \(k\)-th order simplices is closely related to Hodge \(k\) Laplacian \(\mathbf{L}_{k}\), and we emphasize that both spatial analysis of random walk and spectra of \(\mathbf{L}_{k}\) can improve the theoretical expressive power and performance of GNNs. For \(0\)-simplices, we connect a number of existing PE and SE methods (such as RWSE) via node-level random walk, and further provide a theoretical expressivity bound. For \(1\)-simplices, we propose two novel edge-level PE and SE methods, namely EdgeRWSE and Hodge1Lap. EdgeRWSE directly encodes information based on edge-level random walk, while Hodge1Lap is the first sign and basis invariant edge-level PE based on Hodge-\(1\) Laplacian spectra. We also generalize our theory to arbitrary-order simplices, showing how \(k\)-order and inter-order random walk as well as spectral analysis of Hodge Laplacians can facilitate graph and simplicial learning. Besides analyzing theoretical expressive power and physical meanings of these random walk-based methods, we also verify the effectiveness of our methods, which achieve SOTA or highly competitive performance on several datasets.
\begin{table}
\begin{tabular}{l|c c} \hline \hline model & ogbg-molhiv (AUROC \(\uparrow\)) & ogbg-molpcba (Avg. Precision \(\uparrow\)) \\ \hline GIN+virtual node & \(0.7707\pm 0.0149\) & \(0.2703\pm 0.0023\) \\ GSN (directional) & \(0.8039\pm 0.0090\) & - \\ PNA & \(0.7905\pm 0.0132\) & \(0.2838\pm 0.0035\) \\ SAN & \(0.7785\pm 0.2470\) & \(0.2765\pm 0.0042\) \\ GIN-AK+ & \(0.7961\pm 0.0110\) & \(0.2930\pm 0.0044\) \\ CIN & \(0.8094\pm 0.0057\) & - \\ GPS & \(0.7880\pm 0.0101\) & \(0.2907\pm 0.0028\) \\ Specformer & \(0.7889\pm 0.0124\) & \(0.2972\pm 0.0023\) \\ \hline GPS+EdgeRWSE & \(0.7891\pm 0.0118\) & \(\mathbf{0.2934\pm 0.0025}\) \\ GPS+Hodge1Lap & \(\mathbf{0.8021\pm 0.0154}\) & \(0.2937\pm 0.0023\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiments on graph-level OGB benchmarks [26]. Highlighted are the first, second, **third** test results.
## Acknowledgments and Disclosure of Funding
Muhan Zhang is partially supported by the National Natural Science Foundation of China (62276003) and Alibaba Innovative Research Program.
|
2308.06689 | Estimator Meets Equilibrium Perspective: A Rectified Straight Through
Estimator for Binary Neural Networks Training | Binarization of neural networks is a dominant paradigm in neural networks
compression. The pioneering work BinaryConnect uses Straight Through Estimator
(STE) to mimic the gradients of the sign function, but it also causes the
crucial inconsistency problem. Most of the previous methods design different
estimators instead of STE to mitigate it. However, they ignore the fact that
when reducing the estimating error, the gradient stability will decrease
concomitantly. These highly divergent gradients will harm the model training
and increase the risk of gradient vanishing and gradient exploding. To fully
take the gradient stability into consideration, we present a new perspective on
BNNs training, regarding it as the equilibrium between the estimating error
and the gradient stability. In this view, we firstly design two indicators to
quantitatively demonstrate the equilibrium phenomenon. In addition, in order to
balance the estimating error and the gradient stability well, we revise the
original straight through estimator and propose a power function based
estimator, Rectified Straight Through Estimator (ReSTE for short). Compared to
other estimators, ReSTE is rational and capable of flexibly balancing the
estimating error with the gradient stability. Extensive experiments on CIFAR-10
and ImageNet datasets show that ReSTE has excellent performance and surpasses
the state-of-the-art methods without any auxiliary modules or losses. | Xiao-Ming Wu, Dian Zheng, Zuhao Liu, Wei-Shi Zheng | 2023-08-13T05:38:47Z | http://arxiv.org/abs/2308.06689v2 | # Estimator Meets Equilibrium Perspective: A Rectified Straight Through Estimator for Binary Neural Networks Training
###### Abstract
Binarization of neural networks is a dominant paradigm in neural networks compression. The pioneering work BinaryConnect uses Straight Through Estimator (STE) to mimic the gradients of the sign function, but it also causes the crucial inconsistency problem. Most of the previous methods design different estimators instead of STE to mitigate it. However, they ignore the fact that when reducing the estimating error, the gradient stability will decrease concomitantly. These highly divergent gradients will harm the model training and increase the risk of gradient vanishing and gradient exploding. To fully take the gradient stability into consideration, we present a new perspective on BNNs training, regarding it as the equilibrium between the estimating error and the gradient stability. In this view, we firstly design two indicators to quantitatively demonstrate the equilibrium phenomenon. In addition, in order to balance the estimating error and the gradient stability well, we revise the original straight through estimator and propose a power function based estimator, **R**ectified **S**traight **T**hrough **E**stimator (**ReSTE** for short). Compared to other estimators, ReSTE is rational and capable of flexibly balancing the estimating error with the gradient stability. Extensive experiments on CIFAR-10 and ImageNet datasets show that ReSTE has excellent performance and surpasses the state-of-the-art methods without any auxiliary modules or losses.
## 1 Introduction
Deep neural networks have seen revolutionary development in recent years thanks to their admirable ability to learn discriminative features [31, 20, 15, 38, 45]. However, they tend to require massive computational and memory cost, which makes them unsuitable for deployment on resource-limited devices. To this end, many network compression methods have been proposed [25, 19, 23], such as pruning [39, 60, 59, 13], tiny model design [56, 25, 49, 29], distillation [40, 50, 57] and tensor decomposition [43]. Among them, network quantization [8, 44, 32, 21, 1] is an excellent family of methods with a high compression ratio and little performance degradation. Binary Neural Networks (BNNs) [8, 9, 27], an extreme case of network quantization which aims to quantize 32-bit inputs into 1 bit, have attracted great research enthusiasm in recent years due to their extremely high compression ratio and great performance in neural network compression.
In BNNs research, the pioneering work BinaryConnect [8] proposes to apply the sign function to binarize the full-precision inputs in the forward process, and to use the straight through estimator (STE) to mimic the gradients of the sign function during backpropagation, which achieves great performance. However, the difference between the forward and the backward processes causes the crucial inconsistency problem in BNNs training. To reduce the degree of inconsistency, many previous works design different estimators instead of STE, attempting to narrow the estimating error. Nevertheless, they neglect the fact that when reducing the estimating error, the gradient stability will decrease concomitantly. This makes the gradients highly divergent, harming the model training and increasing the risk of gradient vanishing and gradient exploding.
Figure 1: An intuitive illustration of the equilibrium perspective on BNNs training, i.e., the equilibrium between the estimating error and the gradient stability. When the estimating error is reduced, the gradients become highly divergent, which harms model training and increases the risk of gradient vanishing and gradient exploding. Blue and orange lines represent the estimators and the sign function, respectively.
To fully take the gradient stability into consideration, we present a new perspective on BNNs training, regarding it as the equilibrium between the estimating error and the gradient stability, as shown in Fig. 1. In this view, we first design two indicators to measure the degree of the equilibrium between the estimating error and the gradient instability. With these indicators, we can quantitatively demonstrate the equilibrium phenomenon. In addition, to balance the estimating error with the gradient stability well, we revise the original straight through estimator (STE) and propose a power function based estimator, the **R**ectified **S**traight **T**hrough **E**stimator, **ReSTE** for short. The insight comes from the fact that STE is a special case of the power function. With this design, ReSTE is always rational, i.e., it has no more estimating error than STE, and is capable of flexibly balancing the estimating error and the gradient stability, which are the two main advantages of ReSTE compared to other estimators.
Extensive experiments on the CIFAR-10 [30] and large-scale ImageNet ILSVRC-2012 [11] datasets show that our method has good performance and surpasses the state-of-the-art methods without any auxiliaries, e.g., additional modules or losses. Moreover, using the two carefully-designed indicators, we demonstrate the equilibrium phenomenon and further show that ReSTE can flexibly balance the estimating error and the gradient stability. Our source code is available at [https://github.com/DravenALG/ReSTE](https://github.com/DravenALG/ReSTE), hoping to help the development of the BNNs community.
## 2 Revisiting Binary Neural Networks
Binary Neural Networks (BNNs) [8, 9, 44, 6] aim to binarize full-precision inputs, weights or features (also called activations in BNNs literature) in each layers into 1-bits, which is an extreme case of network quantization. Essentially, the optimization of BNNs is a constraint optimization problem. Naively using brute-force algorithms to solve this problem is intractable due to the huge combinatorial probabilities when the dimensions of input are large.
The exploration of tractable solutions to binary neural networks training can be traced back to many pioneering works [28, 7, 48]. Among them, BinaryConnect [8] forms the main optimization paradigm in this domain due to its great performance. BinaryConnect connects a sign function between the full-precision inputs and the following calculation modules in forward process. Since the gradients of the sign function are zero almost everywhere, BinaryConnect uses an identity function to substitute for the sign function when calculating the gradients in backward process, which is also known as straight through estimator (STE) [22, 4]. For convenience, we respectively donate \(\mathbf{z}\) and \(\mathbf{z}_{b}\) as the full-precision inputs and the binarized outputs. The forward and backward processes of the binary procedure in BinaryConnect are as follows:
\[\text{Forward: }\mathbf{z}_{b}=\mathbf{sign}(\mathbf{z}),\qquad\text{Backward: }\frac{\partial\mathcal{L}}{\partial\mathbf{z}}=\frac{\partial\mathcal{L}}{\partial\mathbf{z}_{b}}, \tag{1}\]
where \(\mathcal{L}\) is the loss function and \(\mathbf{sign}\) represents the element-wise sign function. This means that the gradients with respect to the full-precision inputs directly equal the gradients with respect to the binarized outputs, which is also the origin of the name straight through estimator.
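A minimal PyTorch sketch of this forward/backward pair, implemented as a custom autograd function, is given below:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    # Eq. (1): sign in the forward pass, identity gradient in the backward pass
    @staticmethod
    def forward(ctx, z):
        return torch.sign(z)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output               # dL/dz = dL/dz_b: straight through

z = torch.randn(4, requires_grad=True)
z_b = BinarizeSTE.apply(z)
z_b.sum().backward()
print(z.grad)                            # all ones: gradients pass unchanged
```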
To improve the performance of binary neural networks, many improvement strategies have been proposed. Some works modify the backbone architecture, which heightens the expressive ability of binary neural networks [37, 36]. Despite the performance improvement, these works revise the backbone architecture, which is not universal across architectures and adds computational and memory cost at inference. In addition, other works focus on improving the forward process with additional assistance, e.g., modules [54, 33, 53, 51], losses [3, 17, 18, 35, 46, 52] and even distillation [50]. This type of method significantly increases the number of parameters and the computational cost during training.
Besides, many works focus on the essential and vital component of binary neural networks training, i.e., the estimator used to mimic the gradients of the sign function. BNN+ [10] designs a SignSwish function, Bi-Real-Net [37] models a piece-wise polynomial function, DSQ [16] proposes a tanh-based function, RQ [55] proposes a root-based function similar to our ReSTE but more complex and not focused on balancing the equilibrium, IR-Net [42] gives the EDE function and FDA [53] applies Fourier series to simulate the gradients. Although they have achieved excellent performance, they ignore the fact that when reducing the estimating error, the gradient stability will decrease concomitantly, which means that the gradients will become highly divergent, harming the model training and increasing the risk of gradient vanishing and gradient exploding. To fully consider the gradient stability in BNNs training, we present a new perspective, viewing it as the equilibrium between the estimating error and the gradient stability. From this perspective, we revise the original STE and propose a power function based estimator, the Rectified Straight Through Estimator (ReSTE for short). Compared to the estimators above, ReSTE is rational, i.e., it has less or equal estimating error than STE, and is capable of flexibly balancing the estimating error and gradient stability. Extensive experiments show that our method surpasses the state-of-the-art methods without any auxiliaries, e.g., additional modules or losses.
## 3 Estimator Meets Equilibrium Perspective
### Equilibrium Perspective
The **inconsistency problem** is inevitable but crucial in BNNs training, since we use estimators to mimic the gradients of the sign function in backpropagation. To mitigate the degree of inconsistency, many follow-up works design different estimators instead of STE, aiming to reduce the estimating error. Although they improve the performance of BNNs, they only care about reducing the estimating error and ignore the concomitant gradient instability. The gradients become highly divergent, which increases the risk of gradient vanishing and gradient exploding, as shown in Fig. 1. To be more persuasive, we visualize the gradient distributions of STE [44] and the influential work IR-Net [42] in Fig. 2. Although IR-Net attempts to reduce the estimating error by approximating the sign function as it claims, it suffers from the problem of highly divergent gradients, which will harm the model training.
To fully take the gradient stability into consideration, we present a new perspective, considering BNNs training as the equilibrium between the estimating error and the gradient stability. For a clear description, we first give the definitions of the estimating error and the gradient stability. We define the **estimating error** as the difference between the sign function and the estimator, which reflects how close the estimator is to the sign function. We define the **gradient stability** as the divergence of the gradients of all parameters in each iteration. The insight is that when we use an estimator close to the sign function, the gradients of all parameters in one iteration inevitably become divergent, as intuitively shown in Fig. 1. This may lead to a wrong updating direction and harm the model training.
With these definitions, we now formally state our **equilibrium perspective**. Since BNN training is an equilibrium between the estimating error and the gradient stability, we should not reduce the estimating error without limit. Instead, we should design an estimator that can easily adjust the degree of equilibrium to obtain better performance.
### Indicators of Estimating Error and Gradient Instability
To quantitatively and clearly demonstrate the equilibrium phenomenon, we first design two indicators to measure the degree of the estimating error and of the gradient instability.
Since the estimating error is the difference between the sign function and the estimator, we evaluate it as the distance, in each iteration, between the outputs of the element-wise sign function and those of the estimator. We define \(\mathbf{f}(\cdot)\) as the estimator and \(D\) as the distance metric. The degree of estimating error can be formally described as:
\[e=D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z})), \tag{2}\]
where \(D(\cdot)\) is the L2-norm in our method. We call \(e\) the **estimating error indicator**.
In addition, to measure the degree of the gradient stability, we design a **gradient instability indicator**. Since the gradient stability is the divergence of the gradients of all parameters in each iteration, we use the variance of gradients of all the parameters in each iteration to evaluate it. We design the indicator as follows:
\[s=\mathrm{var}(|\mathbf{g}|), \tag{3}\]
where \(\mathbf{g}\) denotes the gradients, \(|\cdot|\) is the element-wise absolute operation and \(\mathrm{var}(\cdot)\) stands for the variance operator. We use the absolute operation since we only care about the gradient magnitudes (the updating directions are not relevant to the gradient stability). Note that \(s\) is an instability indicator: the magnitude of \(s\) reflects the degree of instability.
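For concreteness, both indicators admit a direct implementation; the following is a minimal PyTorch sketch (tensor names are illustrative, with \(\mathbf{g}\) the flattened concatenation of all parameter gradients in one iteration):

```python
import torch

def estimating_error(z: torch.Tensor, f_z: torch.Tensor) -> torch.Tensor:
    # e = D(sign(z), f(z)) with D the L2-norm, as in Eq. (2)
    return torch.norm(torch.sign(z) - f_z, p=2)

def gradient_instability(g: torch.Tensor) -> torch.Tensor:
    # s = var(|g|) over all parameter gradients in one iteration, as in Eq. (3)
    return torch.var(torch.abs(g))
```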
### Rectified Straight Through Estimator
To balance the estimating error and the gradient stability, we should design an estimator that can easily adjust the degree of equilibrium. Before that, we first note that the sign function and STE are two extremes in gradient stability. The sign function has zero gradient almost everywhere and an infinite gradient at the origin, so its gradients are completely vanishing or exploding; it therefore has the highest gradient instability. In contrast, STE uses a linear function to estimate the gradients of the sign function, which does not change the backward gradients at all in the estimating process, so STE has the lowest instability. We want to design an estimator close to the sign function to obtain a smaller estimating error,
Figure 2: Illustrations of the gradient distributions of STE (left) and IR-Net (right). X-axes represent the values of the gradients, y-axes are the frequency.
but not so unstable as to hinder training. Based on this intuition, we require two properties of an estimator: **1) Rational property:** It should always have less or equal estimating error than the straight through estimator (the identity function), formally \(D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z}))-D(\mathbf{sign}(\mathbf{z}), \mathbf{z})\leq 0\). This property is reasonable because if an estimator has a larger estimating error than STE in some range, directly applying STE in that range is preferable: it is both more stable and has less estimating error.
**2) Flexible property:** It should be capable of flexibly adjusting the trade-off between the estimating error and the gradient stability, i.e., the degree of equilibrium. This property has two aspects. First, the estimator can change from STE to the sign function. Second, the change should be gradual. Gradual change means that each point moves a small step closer to the sign function when we adjust the estimator, which is the key to finding a suitable degree of equilibrium.
To achieve these goals, we revise STE and propose a power-function-based estimator, the **R**ectified **S**traight **T**hrough **E**stimator, **ReSTE** for short. The inspiration for ReSTE is that the STE strategy (identity function) is a special case of the power function, specifically when the power is one. When the power function is close to STE, the gradient is stable but the estimating error is large. When the power increases, the power function approaches the sign function and has a smaller estimating error, while the gradient instability increases. In a word, the power function can easily change from STE to the sign function, demonstrating its ability to adjust the degree of equilibrium.
Under such observation, we propose to use power function as the estimator in backward process to balance the estimating error and the gradient stability. Our ReSTE function has the following form:
\[\begin{split}&\mathbf{f}(\mathbf{z})=\mathbf{sign}(\mathbf{z})| \mathbf{z}|^{\frac{1}{o}},\\ & s.t.\quad o\geq 1,\end{split} \tag{4}\]
where \(o\) is a hyper-parameter controlling the power, i.e., the degree of the equilibrium. In detail, \(o\) decides the rectified degree of ReSTE to balance the estimating error and gradient stability. Note that when \(o=1\), ReSTE reduces to the basic STE. As \(o\) increases, ReSTE approaches the sign function and gradually attains a smaller estimating error. With simple derivation, the gradient of ReSTE is:
\[\begin{split}\mathbf{f}^{\prime}(\mathbf{z})=\frac{1}{o}| \mathbf{z}|^{\frac{1-o}{o}}.\end{split} \tag{5}\]
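The forward and backward passes can be realized as a custom autograd function; the following is a minimal PyTorch sketch (class and variable names are ours; the gradient-truncation tricks described below are omitted here):

```python
import torch

class ReSTEFunction(torch.autograd.Function):
    """Forward: binarize with sign(z). Backward: use the gradient of the
    estimator f(z) = sign(z)|z|^(1/o), i.e. f'(z) = (1/o)|z|^((1-o)/o) (Eq. 5)."""

    @staticmethod
    def forward(ctx, z, o):
        ctx.save_for_backward(z)
        ctx.o = o
        return torch.sign(z)

    @staticmethod
    def backward(ctx, grad_output):
        (z,) = ctx.saved_tensors
        o = ctx.o
        # Gradient of the power-function estimator; unbounded near z = 0 for
        # o > 1, which the truncation tricks described below take care of.
        grad_est = (1.0 / o) * torch.abs(z).pow((1.0 - o) / o)
        return grad_output * grad_est, None  # no gradient w.r.t. o

# Usage: binary = ReSTEFunction.apply(weights, 3.0)
```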
Compared to other estimators, ReSTE satisfies the properties proposed above, i.e., it is rational and capable of flexibly balancing the estimating error and the gradient stability, which are the two main advantages of our method. To prove this, we first give the following lemma.
**Lemma 3.1**.: _If \(o_{1}\geq o_{2}\), \(D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z},o_{1}))-D(\mathbf{sign}( \mathbf{z}),\mathbf{f}(\mathbf{z},o_{2}))\leq 0\) holds. The proof is as follows:_
\[\begin{split}& D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z},o_{1})) \\ &=\sqrt{\sum_{i=1}^{d}(\operatorname{sign}(z_{i})-\operatorname{ f}(z_{i},o_{1}))^{2}}\\ &=\sqrt{\sum_{i=1}^{d}(\operatorname{sign}(z_{i})-\operatorname{ sign}(z_{i})|z_{i}|^{\frac{1}{o_{1}}})^{2}}\\ &=\sqrt{\sum_{i=1}^{d}|1-|z_{i}|^{\frac{1}{o_{1}}}|^{2}},\end{split} \tag{6}\]
where \(|\cdot|\) is the absolute operation. Since \(o_{1}\geq o_{2}\), by the nature of the power function it follows that when \(|z_{i}|\leq 1\), \(|1-|z_{i}|^{\frac{1}{o_{1}}}|=1-|z_{i}|^{\frac{1}{o_{1}}}\leq 1-|z_{i}|^{\frac{1}{o_{2}}}=|1-|z_{i}|^{\frac{1}{o_{2}}}|\), and when \(|z_{i}|\geq 1\), \(|1-|z_{i}|^{\frac{1}{o_{1}}}|=|z_{i}|^{\frac{1}{o_{1}}}-1\leq|z_{i}|^{\frac{1}{o_{2}}}-1=|1-|z_{i}|^{\frac{1}{o_{2}}}|\). Thus, \(|1-|z_{i}|^{\frac{1}{o_{1}}}|\leq|1-|z_{i}|^{\frac{1}{o_{2}}}|\) always holds. Then, we can write:
\[\begin{split} D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z},o_{1}))&=\sqrt{\sum_{i=1}^{d}|1-|z_{i}|^{\frac{1}{o_{1}}}|^{2}}\\ &\leq\sqrt{\sum_{i=1}^{d}|1-|z_{i}|^{\frac{1}{o_{2}}}|^{2}}\\ &=D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z},o_{2})). \end{split} \tag{7}\]
Under this lemma, we prove the two properties. For the rational property, since STE equals \(\mathbf{f}(\mathbf{z},1)\) and ReSTE requires \(o\geq 1\), Lemma 3.1 directly gives that \(D(\mathbf{sign}(\mathbf{z}),\mathbf{f}(\mathbf{z}))-D(\mathbf{sign}(\mathbf{z}),\mathbf{z})\leq 0\) always holds. For the flexible property, STE equals \(\mathbf{f}(\mathbf{z},1)\) and, as \(o\rightarrow\infty\), \(\mathbf{f}(\mathbf{z})\rightarrow\mathbf{sign}(\mathbf{z})\); thus ReSTE can change from STE to the sign function. Moreover, from the proof of Lemma 3.1 we can observe that if \(o_{1}\geq o_{2}\),
Figure 3: Illustrations of the forward (left) and backward (right) processes of ReSTE.
then \(|1-|z_{i}|^{\frac{1}{o_{1}}}|\leq|1-|z_{i}|^{\frac{1}{o_{2}}}|\) holds for any \(z_{i}\); thus \(|\mathrm{sign}(z_{i})-\mathrm{f}(z_{i},o_{1})|\leq|\mathrm{sign}(z_{i})-\mathrm{f}(z_{i},o_{2})|\) holds for any \(z_{i}\). So the change of ReSTE is gradual: every \(z_{i}\) moves a small step closer to the sign function as \(o\) increases. Therefore, ReSTE satisfies the flexible property. The rational and flexible properties are designed based on the equilibrium perspective and constitute the main advantages of ReSTE over the estimators used in previous methods.
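Lemma 3.1 can also be checked numerically; the following is a hypothetical sanity check (grid and values chosen purely for illustration):

```python
import torch

z = torch.linspace(-2.0, 2.0, steps=1001)

def dist(o: float) -> torch.Tensor:
    # D(sign(z), f(z, o)) with the L2 metric, as in Eq. (6)
    return torch.norm(torch.sign(z) - torch.sign(z) * torch.abs(z).pow(1.0 / o))

assert dist(3.0) <= dist(2.0) <= dist(1.0)  # larger o, smaller estimating error
```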
In addition, for more stable gradients, we apply two gradient-truncation tricks to our estimator. First, we clip to zero the gradients whose corresponding full-precision inputs have absolute value larger than a threshold \(t\), which accounts for saturation in BNN training [9]. Second, since the gradient of ReSTE may be large when the input is sufficiently small, we set a threshold \(m\) and, within the intervals \((0,m)\) and \((-m,0)\), approximate the gradients numerically by \((f(m)-f(0))/m\) and \((f(0)-f(-m))/m\), respectively.
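A sketch of how these two tricks compose with the ReSTE gradient, using the thresholds \(t=1.5\) and \(m=0.1\) reported in Sec. 4.1 (the function name is illustrative; note that \((f(m)-f(0))/m=m^{1/o-1}\) for \(f(z)=\mathrm{sign}(z)|z|^{1/o}\)):

```python
import torch

def truncated_reste_grad(z: torch.Tensor, o: float,
                         t: float = 1.5, m: float = 0.1) -> torch.Tensor:
    """ReSTE gradient with both truncation tricks applied."""
    grad = (1.0 / o) * torch.abs(z).pow((1.0 - o) / o)
    # Trick 1: zero the gradient where |z| > t (saturation).
    grad = torch.where(torch.abs(z) > t, torch.zeros_like(grad), grad)
    # Trick 2: inside (-m, m), replace the analytic gradient by the finite
    # difference (f(m) - f(0)) / m = m^(1/o - 1), avoiding the blow-up at 0.
    fd = m ** (1.0 / o - 1.0)
    grad = torch.where(torch.abs(z) < m, torch.full_like(grad, fd), grad)
    return grad
```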
For clear illustration, we demonstrate the forward and backward processes of ReSTE in Fig. 3.
### Overall Binarization Method
We summarize the overall binarization procedure of our method. For the forward binarization we employ DoReFa-Net [58], as most previous methods do [42, 33, 53, 46]: the sign function binarizes the inputs, and a layer-level scalar \(\beta=\left\|\mathbf{z}\right\|_{1}/n\) (\(n\) is the dimension of \(\mathbf{z}\)) is applied to enhance the representative ability. In backpropagation, we apply ReSTE as the estimator to simulate the gradients of the sign function. For the hyper-parameter \(o\) that adjusts the degree of equilibrium, we use the progressive adjusting strategy, proposed in [42] and widely used in recent works [33, 53]: \(o\) changes from 1 to \(o_{\text{end}}\) during training, with \(o_{\text{end}}=3\) in our experiments; a sketch is given below. Compared to a fixed strategy, the progressive adjusting strategy ensures sufficient updating at the beginning of training and accurate gradients at the end. Experiments on the tuning strategies for \(o\) are shown in the supplementary materials.
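A minimal sketch of the forward scaling and a progressive schedule for \(o\) (the linear shape of the schedule is our assumption; the text only specifies that \(o\) grows from 1 to \(o_{\text{end}}\)):

```python
import torch

def progressive_o(epoch: int, max_epoch: int, o_end: float = 3.0) -> float:
    # Grow o from 1 (pure STE) to o_end over training; the linear shape
    # is our assumption, the paper only requires a progressive change.
    return 1.0 + (o_end - 1.0) * epoch / max_epoch

def dorefa_binarize(w: torch.Tensor) -> torch.Tensor:
    # Layer-level scalar beta = ||w||_1 / n applied to sign(w); during
    # training, the sign would be routed through ReSTE for its backward pass.
    beta = w.abs().mean()
    return beta * torch.sign(w)
```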
In the BNN literature, there are two options for binarizing a neural network: in the first, only the weights are binarized; in the second, both weights and activations are binarized, which significantly improves the
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Backbone & Method & W/A & Auxiliary & Acc(\%) \\ \hline \multirow{6}{*}{ResNet-18} & FP & 32/32 & - & 94.84 \\ & RAD [12] & 1/1 & Loss & 90.50 \\ & IR-Net [42] & 1/1 & Module & 91.50 \\ & LCR-BNN [46] & 1/1 & Loss & 91.80 \\ & RBNN [33] & 1/1 & Module & 92.20 \\ & ReSTE (ours) & 1/1 & - & **92.63** \\ \hline \multirow{6}{*}{ResNet-20} & FP & 32/32 & - & 91.70 \\ & DSQ [16] & 1/1 & - & 84.11 \\ & DoReFa-Net [58] & 1/1 & - & 84.44 \\ & IR-Net [42] & 1/1 & Module & 85.40 \\ & LCR-BNN [46] & 1/1 & Loss & 86.00 \\ & FDA & 1/1 & Module & 86.20 \\ & RBNN [33] & 1/1 & Module & 86.50 \\ & ReSTE (ours) & 1/1 & - & **86.75** \\ \cline{2-4} & IR-Net [42] & 1/1 & Module & 86.30 \\ & LCR-BNN [46] & 1/1 & Loss & 87.20 \\ & RBNN [33] & 1/1 & Module & 87.50 \\ & ReSTE * (ours) & 1/1 & - & **87.92** \\ \cline{2-4} & FP & 32/32 & - & 91.70 \\ & DoReFa-Net [58] & 1/32 & - & 90.00 \\ & LQ-Net [54] & 1/32 & - & 90.10 \\ & DSQ [16] & 1/32 & - & 90.20 \\ & IR-Net [42] & 1/32 & Module & 90.80 \\ & LCR-BNN [46] & 1/32 & - & **91.20** \\ & ReSTE (ours) & 1/32 & - & **91.32** \\ \hline \multirow{6}{*}{VGG-small} & FP & 32/32 & - & 93.33 \\ & LBA [24] & 1/1 & - & 87.70 \\ & Xnor-Net [44] & 1/1 & - & 89.80 \\ \cline{1-1} & BNN [9] & 1/1 & - & 89.90 \\ \cline{1-1} & RAD [12] & 1/1 & Loss & 90.00 \\ \cline{1-1} & IR-Net [42] & 1/1 & Module & 90.40 \\ \cline{1-1} & RBNN [33] & 1/1 & Module & 91.30 \\ \cline{1-1} & ReSTE (ours) & 1/1 & - & **92.55** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance comparison with SOTA methods on the CIFAR-10 dataset. Auxiliary refers to whether additional assistance (module or loss) is used. FP is the full-precision version of the backbone. * denotes the method with the Bi-Real structure. W/A is the bit width of weights/activations. Best results are shown in bold.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Backbone & Method & W/A & Auxiliary & Top-1(\%) & Top-5(\%) \\ \hline \multirow{6}{*}{ResNet-18} & FP & 32/32 & - & 69.60 & 89.20 \\ & ABC-Net [34] & 1/1 & - & 42.70 & 67.60 \\ & Xnor-Net [44] & 1/1 & - & 51.20 & 73.20 \\ & BNN [5] & 1/1 & Loss & 53.00 & 72.60 \\ & DoReFa-Net [58] & 1/2 & - & 53.40 \\ & Bi-Real [37] & 1/1 & - & 56.40 & 79.50 \\ & Xnor-Net [45] & 1/1 & - & 57.10 & 79.90 \\ & IR-Net [42] & 1/1 & Module & 58.10 & 80.00 \\ & LCR-BNN [46] & 1/1 & Loss & 59.60 & 81.60 \\ & RBNN [33] & 1/1 & Module & 59.90 & 81.90 \\ & FDA & 1/1 & Module & 60.20 & 82.30 \\ & ReSTE (ours) & 1/1 & - & **60.88** & **82.59** \\ \cline{2-4} & FP & 32/32 & - & 69.60 & 89.20 \\ & SQ-BNN [14] & 1/32 & - & 58.40 & 81.60 \\ & BWN [44] & 1/32 & - & 60.80 & 83.00 \\ & HWGQ [6] & 1/32 & - & 61.30 & 83.20 \\ & TWN [2] & 2/32 & - & 61.80 & 84.20 \\ & SQ-TNN [14] & 2/32 & - & 63.80 & 85.70 \\ & BWIN [26] & 1/32 & - & 64.30 & 85.90 \\ & IR-Net [42] & 1/32 & Module & 66.50 & 86.80 \\ & LCR-BNN [46] & 1/32 & Loss & 66.90 & 86.40 \\ & ReSTE (ours) & 1/32 & - & **67.40** & **87.20** \\ \hline \multirow{6}{*}{ResNet-34} & FP & 32/32 & - & 73.30 & 91.30 \\ & ABC-Net [34] & 1/1 & - & 52.40 & 76.50 \\ & Bi-Real [37] & 1/1 & - & 62.20 & 83.90 \\ & IR-Net [42] & 1/1 & Module & 62.90 & 84.10 \\ & RBNN [33] & 1/1 & Module & 63.10 & 84.40 \\ & LCR-BNN [46] & 1/1 & Loss & 63.50 & 84.60 \\ & ReSTE(ours) & 1/1 & - & **65.05** & **85.78** \\ \cline{2-4} & FP & 32/32 & - & 73.30 & 91.30 \\ & ReSTE(ours) & 1/32 & - & **70.40** & **89.50** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison with SOTA methods on the ImageNet dataset. Auxiliary refers to whether additional assistance (module or loss) is used. FP is the full-precision version of the backbone. W/A is the bit width of weights/activations. Best results are in bold.
inference speed via XNOR and Bitcount operations [9, 42]. After binarization, the model size decreases by 32x compared to the original full-precision model, and the inference process is accelerated.
## 4 Experiments
### Datasets and Settings
**Datasets.** In this work, we choose two datasets, CIFAR-10 [30] and ImageNet ILSVRC-2012 [11], which are widely used in the binary-neural-network literature [42, 33, 53]. CIFAR-10 is a common image-classification dataset containing 50k training images and 10k testing images across 10 categories; each image is of size 32x32 with RGB color channels. ImageNet ILSVRC-2012 is a large-scale dataset with over 1.2 million training images and 50k validation images across 1000 categories; each image has a resolution of 224x224 with RGB color channels.
**Implementation Details.** We follow the same settings as other binary methods [42, 33] for fair comparison. Specifically, we apply RandomCrop, RandomHorizontalFlip and Normalize for both CIFAR-10 and ImageNet pre-processing. We use SGD with an initial learning rate of 0.1 and adopt a cosine learning-rate decay schedule during training. We use only cross entropy as the classification loss. For the hyper-parameter \(o_{\text{end}}\), we set \(o_{\text{end}}=3\) in all experiments; we find this value suitable and robust for balancing the estimating error and the gradient stability. For the gradient-truncation hyper-parameters, we simply set \(t=1.5\) and \(m=0.1\). All models are implemented in PyTorch [41] on NVIDIA RTX 3090 or NVIDIA RTX A6000 GPUs. For more experimental details, please refer to our published code and the README file on GitHub.
### Performance Study
To evaluate the performance of our method, we conduct a performance study in comparison with other binary methods. Note that our method only modifies the estimator in the backward process, without other auxiliaries, e.g., additional modules or losses. To highlight the superiority of our approach, the result tables include a column noting the auxiliaries used by other methods.
We first test the performance of ReSTE on CIFAR-10 [30] against the SOTA methods. In detail, we binarize three backbones: ResNet-18, ResNet-20 [20] and VGG-small [47]. We compare against a list of SOTA methods, including LBA [24], RAD [12], DSQ [16], Xnor-Net [44], DoReFa-Net [58], LQ-Net [54], IR-Net [42], LCR-BNN [46], RBNN [33] and FDA [53]. For ResNet-20, we evaluate our method in both the basic ResNet architecture and the Bi-Real architecture [37]. Experimental results are shown in Table 1. From the table we find that ReSTE shows excellent performance, outperforming all SOTA methods at both the 1W/1A and 1W/32A settings without any assistance, e.g., modules or losses. For example, with ResNet-20 as the backbone, ReSTE obtains 0.25% and 0.45% improvements over the SOTA method RBNN [33] in the basic ResNet architecture and in the Bi-Real architecture [37], respectively, at the 1W/1A setting, even though RBNN additionally adds a rotation module to the training. At the 1W/32A setting, ReSTE has a 0.12% improvement over the SOTA method LCR-BNN [46], which additionally uses a Lipschitz loss to improve the training.
Moreover, we employ ReSTE on ResNet-18 and ResNet-34 [20] and validate the performance on the large-scale ImageNet ILSVRC-2012 [11]. In this setting, we compare ReSTE with ABC-Net [34], BWN [44], TWN [2], SQ-BWN and SQ-TWN [14], Xnor-Net [44], HWGQ [6], BWHN [26], BNN+ [10], DoReFa-Net [58], Bi-Real [37],
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Estimators & Formula & Type & Rational & Flexible & Acc(\%) \\ \hline DSQ [16] & \(\mathbf{f}(\mathbf{z})=l+\Delta\left(1+(s\mathbf{tanh}\left(k\left(\mathbf{z }-\mathbf{m}\right)\right)+1)/2\right)\) & Tanh-alike & Not rational & Little flexible & 84.11 \\ \hline STE [58] & \(\mathbf{f}(\mathbf{z})=\mathbf{z}\) & Identity function & Rational & Not flexible & 84.44 \\ \hline EDE [42] & \(\mathbf{f}(\mathbf{z})=k\mathbf{tanh}(t\mathbf{z})\) & Tanh-alike & Not rational & Little flexible & 85.20 \\ \hline FDA \(\dagger\)[53] & \(\mathbf{f}(\mathbf{z})=\frac{4}{\pi}\sum_{i=0}^{k}\mathbf{sin}((2i+1)\omega \mathbf{z})/(2i+1)\) & Fourier series & Not rational & Little flexible & 85.80 \\ \hline RBNN \(\dagger\)[33] & \(\mathbf{f}(\mathbf{z})=k\cdot\left(-\mathbf{sign}(\mathbf{z})\frac{2^{2}}{2} +\sqrt{2}t\mathbf{z}\right)\) & Polynomial function & Not rational & Little flexible & 85.87 \\ \hline ReSTE (ours) & \(\mathbf{f}(\mathbf{z})=\mathbf{sign}(\mathbf{z})|\mathbf{z}|^{\frac{1}{5}}\) & Power function & Rational & Flexible & **86.75** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the estimator comparison. \(\dagger\) means we use only the estimators for fair comparison (without additional modules; the overall comparison is in Sec. 4.2). "Rational" means the estimator satisfies the rational property proposed in Sec. 3.3, while "Not rational" indicates it does not. "Flexible" means the estimator satisfies the flexible property proposed in Sec. 3.3. "Not flexible" implies that the estimator cannot reduce the estimating error, and "Little flexible" indicates that the estimator can reduce the estimating error to some extent but does not fully satisfy the flexible property. Best results are shown in bold.
Xnor-Net++ [5], IR-Net [42], LCR-BNN [46], RBNN [33] and FDA [53]. At the 1W/1A setting, we use the Bi-Real architecture as most previous methods do [42, 33, 53, 50] for fair comparison. The results are shown in Table 2. As in the analysis on CIFAR-10, ReSTE displays excellent performance and outperforms all the SOTA methods without any assistance, e.g., modules or losses. For example, with ResNet-18 as the backbone, ReSTE improves by 0.68% over the SOTA method FDA [53] at the 1W/1A setting, even though FDA [53] adds a noise-adaptation module to help the training. At the 1W/32A setting, ReSTE also has a 0.50% improvement over the SOTA method LCR-BNN [46], which uses an additional loss to assist the training.
To sum up, we conclude that ReSTE has excellent performance and outperforms the SOTA methods on both CIFAR-10 and the large-scale ImageNet ILSVRC-2012 datasets. The reason is that ReSTE is always rational, with less estimating error than STE, and that it finds a desirable degree of equilibrium by flexibly balancing the estimating error and the gradient stability. Moreover, ReSTE surpasses other binary methods without any assistance from additional modules or losses, showing the importance of fully considering the gradient stability and finding a suitable degree of equilibrium for BNN training.
### Estimators Comparison
To further evaluate the effectiveness of our approach, we compare ReSTE with other estimators in the same fair setting, without other auxiliaries, e.g., modules or additional losses.
Specifically, we use ResNet-20 as the backbone and compare ReSTE with STE [9], DSQ [16], EDE [42], FDA [53] and RBNN [33] on CIFAR-10 [30] at the 1W/1A setting. Note that FDA here does not contain the noise-adaptation module [53] and RBNN does not use the rotation procedure, since for fair comparison we use only the sign function with a scalar in the forward process. Experimental results are shown in Table 3. From the table we observe that although ReSTE is concise, it significantly surpasses all the estimators of SOTA binary methods in this fully fair setting, with about 0.88% and 0.95% improvements over the estimators in RBNN and FDA, respectively. There are two reasons. The first is that ReSTE always guarantees the rational property, with less estimating error than STE. The second is that we find a desirable degree of equilibrium with the assistance of ReSTE, which is capable of flexibly balancing the estimating error and the gradient stability.
### Analysis of the Equilibrium Perspective
To quantitatively demonstrate the equilibrium phenomenon and the balancing ability of ReSTE, we vary \(o_{\text{end}}\) at different scales and measure the estimating error, the gradient stability and the model performance. To make the results more convincing, we conduct the experiments with three widely-used backbones: ResNet-20, ResNet-18 and VGG-small. All experiments are conducted on CIFAR-10 at the 1W/1A setting. We evaluate the estimating error and gradient stability layer by layer with the indicators proposed in Sec. 3.2 and use the average over all binarized layers. We collect the results across training epochs to obtain the final indicators for the overall training, as shown in Fig. 4.
From the figures we observe that as \(o_{\text{end}}\) increases, the estimating error becomes smaller and smaller, while the gradient instability grows larger and larger. This shows that although the estimating error can be reduced by moving the estimator closer to the sign function, the gradient stability declines accordingly. In addition,
Figure 4: Illustrations of the estimating error indicators (above), gradient instability indicators (above) and the Top-1 accuracy (below) at different scales of \(o_{\text{end}}\) on CIFAR-10 dataset.
the model performance first increases and then decreases as \(o_{\text{end}}\) changes, which implies that large gradient instability harms model performance. These changes clearly reflect the equilibrium phenomenon and validate our claim that highly divergent gradients harm BNN training.
In addition, the figures show that ReSTE can adjust the degree of equilibrium simply by changing the hyper-parameter \(o_{\text{end}}\). Moreover, the desirable degree of equilibrium, i.e., the \(o_{\text{end}}\) producing high performance, is the same across all backbones, showing the robustness and universality of ReSTE. When applying ReSTE to different backbones for different applications, we can simply tune \(o_{\text{end}}\) to find a suitable degree of equilibrium and obtain good performance. More experiments on the equilibrium analysis are shown in the supplementary materials.
To obtain intuitive visualizations of the equilibrium phenomenon, we additionally visualize the distributions of the estimating error and of the gradients at different scales of \(o_{\text{end}}\). We use ResNet-18 as the backbone on CIFAR-10 at the 1W/1A setting. The results are shown in Fig. 5. From the figure we observe that as \(o_{\text{end}}\) increases, the peak values of the estimating-error distribution become smaller, but the gradients become more divergent, which harms model training and increases the risk of gradient vanishing or exploding. This visualization further demonstrates the equilibrium phenomenon and highlights the importance of finding a suitable degree of equilibrium.
To further validate our claim that highly divergent gradients harm model training, we show an example in Fig. 6. In this example, we use \(o_{\text{end}}=10\) with ResNet-20 as the backbone on CIFAR-10 at the 1W/1A setting. We observe that the training loss fluctuates sharply at around epochs 600 to 700 due to the divergent gradients, causing the final accuracy to decrease from 86.75% to 82.86%. When \(o_{\text{end}}\) increases further, the training fails irreversibly. This phenomenon verifies the harm of highly divergent gradients to model training and further demonstrates the importance of the equilibrium perspective.
## 5 Conclusion
In this work, we view BNN training as an equilibrium between the estimating error and the gradient stability. From this view, we first design two indicators to quantitatively and clearly demonstrate the equilibrium phenomenon. In addition, to balance the estimating error and the gradient stability well, we revisit the original STE and revise it into a new power-function-based estimator, the rectified straight through estimator (ReSTE). Compared to other estimators, ReSTE is rational and capable of flexibly balancing the estimating error and the gradient stability. Extensive performance studies on two datasets demonstrate the effectiveness of ReSTE, which surpasses state-of-the-art methods. With the two carefully-designed indicators, we demonstrate the equilibrium phenomenon and show the ability of ReSTE to adjust the degree of equilibrium.
## 6 Acknowledgments
This work was supported partially by the NSFC (U21A20471, U1911401, U1811461), Guangdong NSF Project (No. 2023B1515040025, 2020B1515120085).
Figure 5: Illustrations of distributions of the estimating error (left) and the gradients (right) at different scales of \(o_{\text{end}}\). X-axes represent the values of the estimating error and the gradients, y-axes are the frequency.
Figure 6: Illustrations of an example that divergent gradients (\(o_{\text{end}}=10\)) will harm the BNNs training. |
2306.02816 | MultiAdam: Parameter-wise Scale-invariant Optimizer for Multiscale
Training of Physics-informed Neural Networks | Physics-informed Neural Networks (PINNs) have recently achieved remarkable
progress in solving Partial Differential Equations (PDEs) in various fields by
minimizing a weighted sum of PDE loss and boundary loss. However, there are
several critical challenges in the training of PINNs, including the lack of
theoretical frameworks and the imbalance between PDE loss and boundary loss. In
this paper, we present an analysis of second-order non-homogeneous PDEs, which
are classified into three categories and applicable to various common problems.
We also characterize the connections between the training loss and actual
error, guaranteeing convergence under mild conditions. The theoretical analysis
inspires us to further propose MultiAdam, a scale-invariant optimizer that
leverages gradient momentum to parameter-wisely balance the loss terms.
Extensive experiment results on multiple problems from different physical
domains demonstrate that our MultiAdam solver can improve the predictive
accuracy by 1-2 orders of magnitude compared with strong baselines. | Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu | 2023-06-05T12:12:59Z | http://arxiv.org/abs/2306.02816v1 | # MultiAdam: Parameter-wise Scale-invariant Optimizer for
###### Abstract
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss. However, there are several critical challenges in the training of PINNs, including the lack of theoretical frameworks and the imbalance between PDE loss and boundary loss. In this paper, we present an analysis of second-order non-homogeneous PDEs, which are classified into three categories and applicable to various common problems. We also characterize the connections between the training loss and actual error, guaranteeing convergence under mild conditions. The theoretical analysis inspires us to further propose MultiAdam, a scale-invariant optimizer that leverages gradient momentum to parameter-wisely balance the loss terms. Extensive experiment results on multiple problems from different physical domains demonstrate that our MultiAdam solver can improve the predictive accuracy by 1-2 orders of magnitude compared with strong baselines.
A common remedy for this imbalance is loss reweighting. Some works (Wight and Zhao, 2020; Elhamod et al., 2022) use manual hyper-parameters to adjust the weights. However, these non-adaptive methods depend on empirical conclusions, which can lead to sub-optimal results. More research focuses on adaptively balancing PINN losses. For example, (Wang et al., 2021) designed a learning rate annealing algorithm using statistics of back-propagated gradients. (Wang et al., 2022) proposed another method to adjust weights from the perspective of the Neural Tangent Kernel (NTK). In (Bai et al., 2022), the loss function is modified using the Least Squares Weighted Residual (LSWR) method. Nevertheless, these methodologies primarily concentrate on modifying loss functions, implying that they consider the effect on parameters as a whole. As such, they might overlook the impact of domain scaling on individual parameters of the model.
In this paper, we aim to address the above issues to effectively train PINNs. Specifically, we first present a theoretical error analysis of loss functions for different types of PDEs under mild conditions. This analysis connects the loss function with the actual performance of the model by bounding the \(L^{\infty}\) error with the PDE loss and boundary loss. This error bound not only ensures convergence towards the ground truth under a sufficiently low loss but also serves as an optimization objective, since minimizing it enables the neural network to approach the true solution more effectively. The work of (Wang et al., 2022) supports that \(L^{\infty}\) loss is a better choice than \(L^{2}\) loss.
Building on the error upper bound, we propose a scale-invariant optimizer, MultiAdam. MultiAdam leverages the observation that the second momentum of Adam acts as an excellent indicator of the gradient scale. We categorize losses of different scales into separate groups, maintaining the second momentum individually for each group. This momentum is subsequently utilized to re-scale the gradients, aligning them to a nearly identical scale. Extensive experiments demonstrate that the MultiAdam optimizer is robust against unbalanced losses and is effective for various complex PDEs across different domain scales. Moreover, MultiAdam exhibits remarkable stability and a high convergence rate under these conditions.
The rest of the paper is organized as follows. In section 2, we briefly review existing variants of PINNs, especially reweighting techniques. In section 3, we go over the original PINN model and Adam optimizer. Section 4 introduces the effect of domain scaling on PINN losses using an example of 2D Poisson's equation, followed by an introduction to our new optimizer MultiAdam. Then we provide a theoretical analysis on error bounds for PINNs and show the connection between the existing problem and MultiAdam. Section 5 presents numerical experiments and evaluates MultiAdam using a range of representative benchmark examples. Finally, Section 6 encapsulates our findings and contributions.
## 2 Related Work
**Physics-informed Neural Networks** (PINNs) (Raissi et al., 2019) are capable of learning to represent the nonlinear relationships in dynamic systems and providing fast predictions (Karniadakis et al., 2021). However, theoretical analysis of PINNs is typically insufficient. For some special equations such as Kolmogorov equations and Navier-Stokes equations, the total error can be estimated with regard to the training loss and network settings (De Ryck and Mishra, 2022; De Ryck et al., 2022). A more general result is attained for second-order elliptic equations, where the convergence of PINNs is proved (Shin et al., 2020) and the \(L^{\infty}\) error bound is given (Peng et al., 2020) under mild constraints. Yet the picture remains unclear for many other PDE problems. Moreover, analyzing the convergence and accuracy of PINNs is tremendously challenging, especially for systems with multi-scale characteristics (Li and Feng, 2022). PINNs are commonly optimized by Adam (Kingma and Ba, 2014) and L-BFGS (Liu and Nocedal, 1989), which, however, often reach ill-conditioned situations when the scale and convergence rate of the loss terms vary significantly (Hao et al., 2022).
**Reweighting techniques for PINNs** To correct the imbalance, a standard approach is the introduction of weights in the loss functions (McClenny and Braga-Neto, 2020). Currently, several adaptive reweighting methods have been proposed. (Wang et al., 2021) designed a learning rate annealing algorithm using statistics of back-propagated gradients to mitigate the pathology. Neural Tangent Kernel also provides a novel perspective to adaptively adjust the weights (Wang et al., 2022). In (Bai et al., 2022), the loss function is modified using the LSWR method to alleviate the biased training issue.
**Multitask learning methods** The PINN optimization can be regarded as a multitask learning problem since each equation and boundary condition is an individual objective. Therefore, it is also worthwhile to learn from multitask learning (MTL). GradNorm (Chen et al., 2018) and PCGrad (Yu et al., 2020) are two promising approaches along this line. GradNorm tunes gradient magnitudes based on the average gradient norm and the relative training rate of each task, while PCGrad projects the conflicting gradients onto the normal plane.
## 3 Preliminaries
### Physics-Informed Neural Networks
The main objective of Physics-Informed Neural Networks (PINNs) is to solve a physical system using known physical laws and available data. Assume the system can be described by the following PDEs:
\[\begin{split}& f(x;\frac{\partial u}{\partial x_{1}},\cdots,\frac{ \partial u}{\partial x_{d}};\frac{\partial^{2}u}{\partial x_{1}^{2}},\frac{ \partial^{2}u}{\partial x_{1}\partial x_{2}},\cdots;\lambda)=0\\ & B(u,x)=0,\forall x\in\partial\Omega\end{split} \tag{1}\]
where \(f\) is the differential equation, \(u\) is the solution to that equation, \(\Omega\) is the domain and \(\partial\Omega\) is the boundary of it. Moreover, \(\lambda\) is an additional parameter and \(B\) is the boundary condition.
To solve the physical system, PINNs use neural networks to approximate the solution of PDEs. In order to train a neural network meeting all the constraints in Eq. (1), PINNs transform the equations into loss functions defined as follows:
\[\begin{split}& L_{f}(\theta,\lambda;T_{f})=\frac{1}{|T_{f}|}\sum_{x \in T_{f}}\|f(x,\frac{\partial\hat{u}_{\theta}}{\partial x_{1}},\ldots;\frac{ \partial^{2}\hat{u}_{\theta}}{\partial x_{1}^{2}},\cdots;\lambda)\|_{2}^{2} \\ & L_{b}(\theta,\lambda;T_{b})=\frac{1}{|T_{b}|}\sum_{x\in T_{b}}\| B(\hat{u}_{\theta},x)\|_{2}^{2}\end{split} \tag{2}\]
where \(L_{f}\) is the residual loss for the PDE, and \(L_{b}\) is the loss for the boundary condition. \(\hat{u}_{\theta}\) is the prediction of the neural network with parameters \(\theta\), and \(T_{f},T_{b}\) are sets of sampling points.
The overall training objective of PINN is then defined as a weighted sum of the two losses:
\[L(\theta,\lambda;T)=w_{f}L_{f}(\theta,\lambda;T_{f})+w_{b}L_{b}(\theta, \lambda;T_{b}) \tag{3}\]
where \(w_{f},w_{b}\) are the non-negative weights for different losses. To effectively train a PINN, we have to optimize the two loss terms at the same time and make every loss as low as possible. Therefore, it is natural to treat it as a multitask learning problem.
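As a minimal sketch of how Eqs. (2)-(3) are assembled with automatic differentiation, consider a 1D Poisson problem \(u''=f(x)\) with Dirichlet data; all names here are illustrative:

```python
import torch

def pinn_losses(model, x_f, x_b, g_b, w_f=1.0, w_b=1.0):
    """Weighted PINN objective for a 1D Poisson problem u'' = f(x) with
    Dirichlet data g_b on boundary points x_b (all names illustrative)."""
    x_f = x_f.clone().requires_grad_(True)
    u = model(x_f)
    du = torch.autograd.grad(u.sum(), x_f, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_f, create_graph=True)[0]
    f = torch.zeros_like(d2u)               # example source term f(x) = 0
    L_f = ((d2u - f) ** 2).mean()           # PDE residual loss, Eq. (2)
    L_b = ((model(x_b) - g_b) ** 2).mean()  # boundary loss, Eq. (2)
    return w_f * L_f + w_b * L_b, L_f, L_b  # total objective, Eq. (3)
```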
### Adam Optimizer
Adaptive Moment Estimation (Adam), proposed by (Kingma & Ba, 2014), is a commonly adopted optimization method for PINNs. It maintains a moving average of the squared gradient, known as the second momentum, to adjust the learning rate for each parameter. The algorithm is given in Algorithm 1. Despite Adam's robust capability to minimize a single loss function for neural networks, it may struggle with multiple optimization objectives. Consequently, the network may fail to converge if the weights in Eq. (3) are not appropriately configured. A detailed discussion on this matter is provided in Section 4.1.
```
0: learning rate \(\gamma\), betas \(\beta_{1},\beta_{2}\), max epoch \(M\), objective function \(f(\theta)\)
1:for all\(t=1\) to \(M\)do
2:\(g_{t}\leftarrow\nabla_{\theta}f(\theta_{t-1})\)
3:\(m_{t}\leftarrow\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\)
4:\(v_{t}\leftarrow\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2}\)
5:\(\hat{m}_{t}\gets m_{t}/(1-\beta_{1}^{t})\)
6:\(\hat{v}_{t}\gets v_{t}/(1-\beta_{2}^{t})\)
7:\(\theta_{t}\leftarrow\theta_{t-1}-\gamma\hat{m}_{t}/(\sqrt{\hat{v}_{t}}+\varepsilon)\)
8:endfor
9:return\(\theta_{t}\)
```
**Algorithm 1** Adam
## 4 Method
We now present our method in detail, starting with an analysis of the imbalance between the terms in the loss objective.
### The effect of domain scaling on loss balancing
We first observe that the PDE loss and boundary loss may be several orders of magnitude apart in real cases, leading to a failure of the standard Adam optimizer to approach the correct solution. One of the main reasons for this issue is improper scaling of the domain. Most PDEs are not scaling-invariant, so a change of domain rescales the PDE loss. The influence is characterized by the following theorem:
**Theorem 4.1** (Effect of scaling for homogeneous PDEs, Proof in Appendix B.1).: _Suppose \(\Omega\) is the domain of a homogeneous PDE of \(k\) order and \(L^{2}\) loss is used for PINNs. Then, if we narrow the domain by \(t\) times, the boundary loss will stay fixed while the PDE loss will be multiplied by \(t^{2k}\)._
We illustrate this with an example of Poisson's equation in a complex domain. The reference solution is depicted in Figure 1, with the detailed setup available in Appendix A. In this case, we condense the original domain, which spans an \(8\times 8\) square, by a factor of \(8\), resulting in a \(1\times 1\) square. As shown in Figure 2, when training on the \(8\times 8\) domain, the PDE loss and boundary loss do not significantly differ. However, when training on the \(1\times 1\) domain, the PDE loss is nearly \(8^{4}\) times larger than the boundary loss. This substantial discrepancy poses considerable challenges in training PINNs, as demonstrated in Figure 1.
This example further exposes the gap between the loss function that PINN optimizes and its actual performance. In Figure 3, we train PINN on the \(1\times 1\) domain under two settings, one incorporating manual reweighting of the loss and the other not. In the absence of manual reweighting, PINN fails to approach the ground truth. Yet its loss is lower than that of the reweighted scenario for the first \(10000\) epochs, during which its \(L^{2}\) relative error with respect to the ground truth is significantly higher than in the reweighted scenario. This suggests that the loss optimized in PINN does not reliably represent the actual performance in this case.
### Error Analysis
Considering the observed inconsistency between total loss and actual performance, we find it crucial to revisit the well-posedness of our objective function, i.e., we question whether optimization based on the loss indeed leads to improved solutions. We offer a theoretical examination of the relationship between loss and error. Given that the majority of PDEs employed across various disciplines do not exceed second order, and that linear ones are relatively prevalent and simpler to analyze, our study primarily concentrates on elliptic, parabolic, and select hyperbolic equations. These represent the majority of second-order linear PDEs (Strauss, 2007). Based on the error bounds, we establish links between the losses in PINNs and the absolute error of the PINN output.
Specifically, we provide error bounds for the three types of PDEs separately in the following theorems. Thanks to Theorem 2.1 and Corollary 2.2 in (Peng et al., 2020), we directly obtain the error bounds of PINNs on elliptic PDEs as follows:
**Theorem 4.2** (Error bounds of PINNs on elliptic PDEs).: _Suppose \(\Omega\subset\mathbb{R}^{d}\) is a bounded domain, \(\mathcal{L}\) is an elliptic operator and \(\tilde{u}\in C^{0}(\overline{\Omega})\cap C^{2}(\Omega)\) is a solution to the following PDE:_
\[\begin{split}\mathcal{L}[u](x)&=f(x),\ \forall x\in\Omega\\ u(x)&=g(x),\ \forall x\in\partial\Omega\end{split} \tag{4}\]
_If the output \(u_{\theta}\) of the PINN with parameter \(\theta\) satisfies:_
\[\begin{split}& u_{\theta}\in C^{0}(\overline{\Omega})\cap C^{2}( \Omega)\\ &\sup_{x\in\partial\Omega}|u_{\theta}-\tilde{u}|<\delta_{1}\\ &\sup_{x\in\Omega}|\mathcal{L}[u_{\theta}]-f|<\delta_{2},\end{split} \tag{5}\]
_then the absolute error over \(\Omega\) is upper-bounded:_
\[\sup_{x\in\Omega}|u_{\theta}-\tilde{u}|\leq\delta_{1}+C\delta_{2}. \tag{6}\]
_Here, \(C\) is a constant depending only on the operator \(\mathcal{L}\) and the domain \(\Omega\). If \(\operatorname{diam}\Omega=d\), then \(C\) is proportional to \(e^{d}-1\) as \(\operatorname{diam}\Omega\) changes._
We further provide the error bounds of PINNs on parabolic and hyperbolic PDEs in Theorems 4.3 and 4.4, respectively. The detailed proofs are included in the Appendix.
**Theorem 4.3** (Error bounds of PINNs on Parabolic PDEs, proof in Appendix B.2).: _Suppose \(\Omega\subset\mathbb{R}^{d}_{x}\times\mathbb{R}_{t}\) is a bounded domain, \(\mathcal{L}\) is a parabolic operator and \(\tilde{u}\in C^{0}(\overline{\Omega})\cap C^{2}(\Omega)\) is a solution to the PDE in Eq. (4). If the output \(u_{\theta}\) of the PINN with parameter \(\theta\) satisfies:_
\[\begin{split}& u_{\theta}\in C^{0}(\overline{\Omega})\cap C^{2}( \Omega)\\ &\sup_{x\in\partial\Omega}|u_{\theta}-\tilde{u}|<\delta_{1}\\ &\sup_{x\in\Omega}|\mathcal{L}[u_{\theta}]-f|<\delta_{2},\end{split} \tag{7}\]
_then the absolute error over \(\Omega\) is upper-bounded:_
\[\sup_{x\in\Omega}|u_{\theta}-\tilde{u}|\leq C_{1}(\delta_{1}+C\delta_{2}), \tag{8}\]
_where \(C,C_{1}\) are constants depending only on \(\Omega\) and \(\mathcal{L}\). If \(\operatorname{diam}\Omega=d\), then \(C\) is proportional to \(e^{\alpha d}-1\) as \(\operatorname{diam}\Omega\) changes._
**Theorem 4.4** (Error Bounds for PINNs on Hyperbolic PDEs, proof in Appendix B.3).: _Suppose \(\Omega\subset\mathbb{R}_{x}\times\mathbb{R}^{+}_{t}\) is an admissible domain (defined in Appendix B.3) and \(\mathcal{L}\) is a hyperbolic operator satisfying the requirements in Appendix B.3. If the PINN with parameter \(\theta\) satisfies:_
\[\begin{split}& u_{\theta}\in C^{1}(\overline{\Omega})\cap C^{2}( \Omega)\\ &\sup_{x\in\partial\Omega}|u_{\theta}-\tilde{u}|<\delta_{1}\\ &\sup_{x\in\Omega}|\mathcal{L}[u_{\theta}]-f|<\delta_{2}\end{split} \tag{9}\]
_Then, we have:_
\[\sup_{x\in\Omega}|u_{\theta}-\tilde{u}|\leq\delta_{1}+C\delta_{2}\]
_where \(C\) is a constant depending only on \(\Omega\) and \(\mathcal{L}\). If \(\operatorname{diam}\Omega=d\), then \(C\) is proportional to \(e^{\alpha d}-1\) as \(\operatorname{diam}\Omega\) changes._
We finally show how to control the absolute error using PINNs' \(L^{2}\) loss, as stated below.
**Theorem 4.5** (Control Absolute Error using PINNs' \(L^{2}\) Loss, proof in Appendix B.4).: _Suppose the second-order PDE operator \(\mathcal{L}\) and the PINN with parameter \(\theta\) satisfy that:_
\[\sup_{x\in\Omega}|u_{\theta}-\tilde{u}|\leq C_{1}(\sup_{x\in\partial\Omega}|u _{\theta}-\tilde{u}|+C\sup_{x\in\Omega}|\mathcal{L}[u_{\theta}]-f|) \tag{10}\]
_where \(C,C_{1}\) are constants. Then, the error can be bounded by \(L^{2}\) loss of the PINN:_
\[\|u_{\theta}-\tilde{u}\|_{L_{\infty}}\leq C_{2}(\sqrt{L_{b}}+C\sqrt{L_{f}}) \tag{11}\]
_where \(C_{2}\) is a constant depending on \(C_{1}\) and the selection of sampling points and basis functions (used in the proof). The detailed definition is in Appendix B.4._
Theorem 4.5 delineates the relationship between the loss of PINNs and the actual error. Although the unweighted sum of losses does not directly reflect the performance of PINNs, the introduction of appropriate weights to the losses can ensure a more accurate correspondence to error. This underlines the necessity of reweighting techniques for PINNs. Broadly, the more precise the estimate of \(C\), the narrower the gap between the optimization objective and the actual error.
The theorem also illuminates the role of domain scaling. For all three types of PDEs, scaling the domain influences the constants \(C\), changing proportionally to \(e^{\alpha d}-1\), where \(d=\operatorname{diam}\Omega\) serves as an indicator of scale. This modification in \(C\) subsequently affects the optimal weight of the two losses. Therefore, it is imperative for the model to account for the scale of the domain to properly adjust the loss weights.
Motivated by this understanding, we propose our MultiAdam optimizer. It maintains the second momentum of gradients for each group of losses, which is then used to adjust the scale of the update, effectively reweighting all loss terms. We found that gradient-based estimation can approximate the factor \(C\), leading to enhanced accuracy.
### Algorithm
Inspired by the analysis above, we introduce MultiAdam, a novel optimizer designed to better estimate the relative importance of losses.
Our motivation stems from two key observations. First, the Adam optimizer maintains estimates of the first and second momentum, and these estimates tend to be relatively stable. Second, the second momentum effectively reflects the inherent difference in scale between the PDE loss and the boundary loss. Using the second momentum as weights allows the PDE loss and boundary loss to be normalized to a comparable scale.
The crux of MultiAdam lies in partitioning the PINN loss into several groups. Specifically, we segregate each PDE loss into a separate group, while all boundary losses are
Figure 1: The left image presents the reference solution for the case. The central image depicts the training result of the baseline PINN on an \(8\times 8\) domain, while the right image showcases the same on a \(1\times 1\) domain. It is evident that the model encounters difficulties in fitting the boundary condition when trained on the \(1\times 1\) domain.
Figure 3: The left figure shows the sum of unweighted loss \(L_{f}+L_{b}\) during training. The right figure shows the \(L^{2}\) relative error between PINN’s prediction and the ground truth. While the loss is lower in the unreweighted case, the prediction is worse off.
Figure 2: The loss curve of PINNs when solving Poisson equation on the \(8\times 8\) domain and the \(1\times 1\) domain using Adam optimizer (manually reweighted on \(1\times 1\) case). \(L_{f},L_{b}\) are defined in equation (2). While the losses are almost the same on \(8\times 8\) case, they differ by several orders of magnitude on the \(1\times 1\) case.
grouped together. We maintain the first and second momentum independently for each group, determining the update for every group in a manner akin to Adam. Lastly, we average the updates for each group and apply this as the final update to the network parameters.
The specific algorithm is outlined in Algorithm 2. We recommend the hyper-parameter settings as \(\gamma=0.001,\beta_{1}=0.99,\beta_{2}=0.99\). The rationale behind these choices can be found in Appendix D.
```
Require: learning rate \(\gamma\), betas \(\beta_{1},\beta_{2}\), max epoch \(M\), objective functions \(f_{1}(\theta),f_{2}(\theta),\cdots,f_{n}(\theta)\)
1: for all \(t=1\) to \(M\) do
2:   for all \(i=1\) to \(n\) do
3:     \(g_{t,i}\leftarrow\nabla_{\theta}f_{i}(\theta_{t-1})\)
4:     \(m_{t,i}\leftarrow\beta_{1}m_{t-1,i}+(1-\beta_{1})g_{t,i}\)
5:     \(v_{t,i}\leftarrow\beta_{2}v_{t-1,i}+(1-\beta_{2})g_{t,i}^{2}\)
6:     \(\hat{m}_{t,i}\leftarrow m_{t,i}/(1-\beta_{1}^{t})\)   (bias correction)
7:     \(\hat{v}_{t,i}\leftarrow v_{t,i}/(1-\beta_{2}^{t})\)
8:   end for
9:   \(\theta_{t}\leftarrow\theta_{t-1}-\frac{\gamma}{n}\sum_{i=1}^{n}\hat{m}_{t,i}/(\sqrt{\hat{v}_{t,i}}+\varepsilon)\)
10: end for
11: return \(\theta_{t}\)
```
**Algorithm 2** MultiAdam
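A minimal NumPy transcription of Algorithm 2 may clarify the per-group bookkeeping; the function name and the callable interface for the per-group gradients are our own illustration, not the released implementation:

```python
import numpy as np

def multiadam(params, loss_grads, lr=0.001, beta1=0.99, beta2=0.99,
              eps=1e-8, max_epoch=10000):
    """Sketch of MultiAdam: one Adam-style momentum pair per loss group.

    `loss_grads` is a list of callables; loss_grads[i](params) returns the
    gradient of loss group i (e.g. one PDE residual group, or the pooled
    boundary losses) at the current parameters.
    """
    n = len(loss_grads)
    m = [np.zeros_like(params) for _ in range(n)]  # first momentum per group
    v = [np.zeros_like(params) for _ in range(n)]  # second momentum per group
    for t in range(1, max_epoch + 1):
        update = np.zeros_like(params)
        for i in range(n):
            g = loss_grads[i](params)
            m[i] = beta1 * m[i] + (1 - beta1) * g
            v[i] = beta2 * v[i] + (1 - beta2) * g ** 2
            m_hat = m[i] / (1 - beta1 ** t)  # bias correction
            v_hat = v[i] / (1 - beta2 ** t)
            update += m_hat / (np.sqrt(v_hat) + eps)
        params = params - (lr / n) * update  # average of per-group Adam steps
    return params
```

Because each group is normalized by its own \(\sqrt{\hat{v}}\), a group whose gradients are orders of magnitude smaller still contributes an update of comparable size, which is the reweighting effect discussed above.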
The reason why we divide every PDE loss into a separate group is that different PDEs have different intrinsic scaling factors, which would lead to an imbalance within a single group. Conversely, all Dirichlet boundary losses are grouped together, as they are calculated by measuring the \(L^{2}\) error on sampling points, which remains invariant to the scaling of the domain.
## 5 Experiments
In this section, we deploy our proposed MultiAdam optimizer on various benchmarks to evaluate its convergence and accuracy. Initially, we consider Poisson's equation, a two-dimensional second-order linear PDE. This serves to examine MultiAdam's efficacy in mitigating the imbalance of weights and achieving convergence. We also compare its weight estimation to the theoretically suggested weight, demonstrating its consistency across diverse domain scales. Subsequently, we apply the method to the Helmholtz equation, an elliptic PDE, underscoring the efficiency of MultiAdam. Lastly, we assess the performance of our method against other techniques in solving time-dependent PDEs, such as Burgers' equation. An ablation study on the selection of hyper-parameters is relegated to Appendix D.
We compare our method with a few strong baselines: 1) The Adam optimizer utilized by the original PINNs (Raissi et al., 2019) 2) The learning rate annealing (LRA) algorithm for PINNs (Wang et al., 2021) and 3) The adaptive weighting from the NTK perspective (Wang et al., 2022). Since PINNs involve the interplay of multiple loss terms from PDE and boundary conditions, some multi-task learning methods may be applied to PINNs. Here, we choose two well-known methods, i.e., 4) GradNorm (Chen et al., 2018) and 5) PCGrad (Yu et al., 2020), to compare with.
### Poisson's equation
Poisson's equation is a useful elliptic partial differential equation in theoretical physics for calculating electric or gravitational fields (Wikipedia, 2023), taking the form:
\[\Delta u=f \tag{12}\]
To demonstrate the scale-invariance of MultiAdam, we consider two Poisson systems, Poisson-8 and Poisson-1, which are the examples presented in Section 4.1. The Poisson-8 case is given by Equation 18 in Appendix A, while the Poisson-1 case simply rescales the domain from \([-4,4]^{2}\) to \([-0.5,0.5]^{2}\).
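For concreteness, the losses \(L_f\) and \(L_b\) for Poisson's equation can be assembled with automatic differentiation roughly as follows (a sketch assuming PyTorch and mean-squared losses; the helper names are ours):

```python
import torch

def poisson_losses(model, x_interior, x_boundary, f, g):
    """L_f and L_b for Delta u = f on Omega, u = g on its boundary."""
    x = x_interior.clone().requires_grad_(True)
    u = model(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for d in range(x.shape[1]):  # sum of second derivatives = Laplacian
        lap = lap + torch.autograd.grad(grad_u[:, d].sum(), x,
                                        create_graph=True)[0][:, d]
    L_f = ((lap - f(x)) ** 2).mean()                               # PDE residual loss
    L_b = ((model(x_boundary).squeeze(-1) - g(x_boundary)) ** 2).mean()
    return L_f, L_b
```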
As shown in Table 1, MultiAdam is nearly invariant to the domain scaling and maintains an accurate estimate. For Poisson-8, NTK has the highest precision. In the Poisson-1 case, however, the picture changes: most optimizers, other than MultiAdam and NTK, fail to find the solution. MultiAdam performs best, while a significant degradation (by 4.17 percentage points in relative error) is observed for NTK. Overall, MultiAdam easily handles the domain-scaling effect and maintains good performance on both tests while the other methods do not.
#### 5.1.1 Comparison of weight estimation
To better understand why MultiAdam outperforms other methods when the domain is rescaled, we compare the weights given by different reweighting algorithms with the theoretically suggested weight summarized in the following theorem.
**Theorem 5.1** (Error bound of Poisson's equation, Proof in Appendix B.5).: _Let \(\Omega\) be the domain described in section 5.1, and \(G:\Omega\times\Omega\rightarrow\mathbb{R}\) be the Green function of Poisson's equation. Denote \(\hat{u}_{\theta}\) as the PINN output and \(\tilde{u}\) the reference
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Poisson-8} & \multicolumn{2}{c}{Poisson-1} \\ & Absolute & Relative & Absolute & Relative \\ \hline Adam & 7.49E-03 & 2.63\% & 2.98E-01 & 70.78\% \\ LRA & 1.06E-02 & 4.67\% & 6.48E-02 & 16.88\% \\ NTK & **6.58E-03** & **1.94\%** & 2.21E-02 & 6.11\% \\ GradNorm & 8.74E-03 & 2.34\% & 2.94E-01 & 69.10\% \\ PCGrad & N/A & N/A & 3.40E-01 & 77.84\% \\ MultiAdam & 1.10E-02 & 2.94\% & **1.44E-02** & **4.49\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean absolute error and relative \(L^{2}\) error of different optimization methods on Poisson’s equation. PCGrad runs into NaN due to numerical instability.
solution, then we have:_
\[\|\hat{u}_{\theta}-\tilde{u}\|_{L^{1}}\leq C_{1}\sqrt{L_{f}}+C_{2}\sqrt{L_{b}}, \tag{13}\]
_where \(L_{f},L_{b}\) are losses of PINN and \(C_{1},C_{2}\) are constants by the Green function \(G(x,\xi)\) as follows:_
\[\begin{split} C_{1}&=\int_{\Omega}\sqrt{|\Omega|\int_{\Omega}G^{2}(x,\xi)\,d\xi}\,dx\\ C_{2}&=\int_{\Omega}\sqrt{|\partial\Omega|\int_{\partial\Omega}(\nabla_{\xi}G(x,\xi)\cdot\mathbf{n})^{2}\,dS(\xi)}\,dx.\end{split} \tag{14}\]
According to the above theorem, the best strategy to minimize the \(L^{1}\) error \(\|\hat{u}_{\theta}-\tilde{u}\|_{L^{1}}\) is to minimize \(\sqrt{C_{1}^{2}L_{f}}+\sqrt{C_{2}^{2}L_{b}}\). This implies assigning weight \(C_{1}^{2}\) to the PDE loss and \(C_{2}^{2}\) to the boundary loss.
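When the Green function is known, the constants in equation (14) can be approximated numerically; below is a rough Monte-Carlo sketch (ours) for \(C_1\), assuming a uniform sampler over \(\Omega\):

```python
import numpy as np

def estimate_c1(G, sample_omega, area, n_outer=1000, n_inner=1000):
    """Monte-Carlo estimate of C1 = int_Omega sqrt(|Omega| int_Omega G(x,xi)^2 dxi) dx.

    `G(x, xi)` evaluates the Green function, `sample_omega(n)` draws n uniform
    points from Omega, and `area` is |Omega|.
    """
    xs = sample_omega(n_outer)
    vals = []
    for x in xs:
        xis = sample_omega(n_inner)
        inner = area * np.mean([G(x, xi) ** 2 for xi in xis])  # ~ int G^2 dxi
        vals.append(np.sqrt(area * inner))
    return area * np.mean(vals)  # outer integral ~ |Omega| * mean
```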
We then run MultiAdam multiple times and record the norm of the second momentum for the PDE loss group and the boundary loss group separately. Since we use the second momentum to rescale the gradients, its norm reflects how the gradient is scaled as a whole. Therefore, in the following comparison, the norm of the second momentum is used as our estimated weight for the different losses.
For comparison purposes, we also incorporate two other reweighting techniques, LRA and NTK. By normalizing the weight on the boundary loss to 1, we can directly compare the normalized weight on the PDE loss and discern how the algorithms balance between different losses. We run the different methods three times, with the results displayed in Figure 4.
We observe that the weight assigned by MultiAdam closely aligns with the theoretical prediction. This implies that MultiAdam accurately discerns the relative importance of different tasks, enabling it to balance the gradients of various groups and approximate the ground truth closely. It's worth noting that the slightly higher PDE weight, compared to the theoretical estimation, is attributed to the difficulty PINNs face in optimizing the PDE loss.
More crucially, MultiAdam successfully mirrors the growth trend of the PDE weight under different scales. As depicted in Figure 5, MultiAdam exhibits superior estimation at most scales compared to other methods. These results provide support for MultiAdam's ability to handle problems at different scales.
#### 5.1.2 Gradient pathology
To further investigate the pathology of imbalanced gradients, we study the distribution of the gradients of the PDE residual and the boundary loss. The results are shown in Figure 6. We can see that MultiAdam mitigates the gradient-vanishing problem in PINNs and effectively updates parameters. The PDE gradients of the original PINNs are heavily concentrated around zero, so parameters can barely be optimized, leading to stagnation. This observation is in line with the findings of Wang et al. (2021). By contrast, the PDE gradients of MultiAdam PINNs are more spread out, so more parameters receive useful updates, accelerating the overall optimization.
### Helmholtz equation
The Helmholtz equation is a linear elliptic PDE representing a time-independent form of the wave
Figure 4: The comparison of normalized weight for PDE loss between MultiAdam, LRA, NTK and theoretical suggestion during training. The domain \(\Omega\) lies in \([-0.5,0.5]^{2}\). The estimation given by MultiAdam is closest to the theoretical suggestion.
Figure 5: The comparison of normalized weight for PDE loss between MultiAdam, LRA, NTK and theoretical suggestion under different domain scales. When domain scale is \(x\), we indicates that the domain \(\Omega\) lies in \([-x/2,x/2]^{2}\)
equation. It appears in various fields of physics, including electromagnetic radiation, seismology, and acoustics (Wikipedia, 2023a). The Helmholtz equation is a good testbed for demonstrating the ability to cope with challenging, highly oscillatory problems. Specifically, the equation takes the following form:
\[\begin{split}& u_{xx}+u_{yy}+k^{2}u-f=0,\ \forall x\in\Omega\\ & u(x)=0,\ \forall x\in\partial\Omega\\ &\Omega=[-\frac{b}{2},\frac{b}{2}]^{2},\end{split} \tag{15}\]
where \(k\) is a parameter. The boundary value problem has the exact solution \(u(x,y)=\sin(a_{1}\pi x)\sin(a_{2}\pi y)\) when
\[f(x,y)=(k^{2}-a_{1}^{2}\pi^{2}-a_{2}^{2}\pi^{2})\sin(a_{1}\pi x)\sin(a_{2}\pi y) \tag{16}\]
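The forcing term in equation 16 follows from substituting the manufactured solution into the PDE; this can be checked symbolically (a small sympy sketch, ours):

```python
import sympy as sp

x, y, k, a1, a2 = sp.symbols('x y k a1 a2')
u = sp.sin(a1 * sp.pi * x) * sp.sin(a2 * sp.pi * y)
f = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) + k**2 * u)
print(f)  # expect (k**2 - pi**2*a1**2 - pi**2*a2**2)*sin(pi*a1*x)*sin(pi*a2*y)
```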
We consider two cases, \((k=1,a_{1}=a_{2}=1,b=1)\) and \((k=1,a_{1}=a_{2}=10,b=0.2)\), denoted as Helmholtz-1 and Helmholtz-0.2 respectively. Figure 9 in Appendix C.3 presents the reference solution.
In terms of both absolute and relative error in Table 2, MultiAdam achieves the highest accuracy among these techniques, improving the relative \(L^{2}\) error by roughly two orders of magnitude. After resizing the domain, MultiAdam does not suffer while the competitors do, which again demonstrates the robustness of our method against rescaling.
#### 5.2.1 Rate of convergence
We choose three representative algorithms, namely Adam, MultiAdam and PCGrad, to compare their convergence speeds via their \(L^{2}\) loss curves. As shown in Figure 7, it is interesting to see that MultiAdam moves slowly in the beginning phase (e.g., \(<5000\) epochs) but then quickly converges to better solutions. The reason for this phenomenon is that MultiAdam is estimating the momentum of the PDE and boundary objectives, and once it obtains a good estimate, very fast convergence is observed. In contrast, the other methods converge slowly and far less stably. These results demonstrate the high efficiency and stability of MultiAdam.
### Burgers' equation
The Burgers' equation is a fundamental PDE that describes the evolution of a velocity field in one spatial dimension, represented as follows:
\[\begin{split} u_{t}+uu_{x}-\nu u_{xx}&=0,\ \forall x\in[-1,1],t\in[0,1]\\ u(0,x)&=-\sin(\pi x)\\ u(t,-1)&=u(t,1)=0,\end{split} \tag{17}\]
where \(\nu=\frac{0.01}{\pi}\). It can display parabolic or hyperbolic behaviour depending on the relative importance of the diffusive and convective terms.
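A sketch of the corresponding PDE residual with automatic differentiation (assuming PyTorch; the function and argument names are ours):

```python
import math
import torch

def burgers_residual(model, xt, nu=0.01 / math.pi):
    """Residual u_t + u*u_x - nu*u_xx at collocation points.

    `xt` is an (N, 2) tensor of (x, t) pairs; `model` maps (N, 2) -> (N, 1).
    """
    xt = xt.clone().requires_grad_(True)
    u = model(xt).squeeze(-1)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
    return u_t + u * u_x - nu * u_xx
```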
Table 3 shows the results. We can see that our method achieves a 2.92% lower error than the baseline PINNs, yet NTK reweighting is even lower in this case. Compared with NTK reweighting, MultiAdam is more stable, as illustrated in Figure 8, where we present the curves of relative \(L^{2}\) error when using the Adam, MultiAdam, and NTK methods. We
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Helmholtz-1} & \multicolumn{2}{c}{Helmholtz-0.2} \\ & Absolute & Relative & Absolute & Relative \\ \hline Adam & 8.50E-02 & 22.46\% & 3.45E-01 & 93.46\% \\ LRA & 4.00E-03 & 1.11\% & 1.65E-01 & 45.87\% \\ NTK & 8.32E-02 & 21.76\% & 5.05E-01 & \(>\)100\% \\ GradNorm & 6.15E-02 & 16.06\% & 3.97E-01 & \(>\)100\% \\ PCGrad & 1.79E-02 & 4.80\% & 8.67E-02 & 22.92\% \\ MultiAdam & **1.56E-03** & **0.43\%** & **3.23E-03** & **0.87\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean absolute error and relative \(L^{2}\) error of different optimization methods on Helmholtz equation.
Figure 6: The distribution of the back-propagated gradients over different loss groups (PDE, boundary) at epoch 4000.
Figure 7: \(L^{2}\) loss curves in the Helmholtz-1 case trained with Adam, MultiAdam or PCGrad.
can see that for MultiAdam the error stays relatively low from the middle of training onward, while Adam's error periodically rises to as much as 30%. The spike phenomenon is less pronounced for NTK reweighting, but it is still markedly worse than MultiAdam's.
## 6 Conclusion
This study primarily aimed to develop a scale-invariant approach for training Physics-Informed Neural Networks (PINNs). We highlighted the impact of domain scaling on PDE loss terms, which significantly contributes to unbalanced losses, and discussed its negative effect on PINN training. To address this issue, we introduced MultiAdam, a parameter-wise scale-invariant optimizer specifically designed for training PINNs. Our numerical experiments demonstrated that this optimizer is capable of handling a variety of cases across different scales, offering a relatively stable training process. At the same time, we provided a theoretical analysis of the error bounds of PINNs, which characterize the relationship between the PINN loss terms and the actual performance.
## Acknowledgements
This work was supported by the NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222, 62106120, 62076145); a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University. J.Z. was also supported by the New Cornerstone Science Foundation through the XPLORER PRIZE.
|
2302.11351 | Abrupt and spontaneous strategy switches emerge in simple regularised
neural networks | Humans sometimes have an insight that leads to a sudden and drastic
performance improvement on the task they are working on. Sudden strategy
adaptations are often linked to insights, considered to be a unique aspect of
human cognition tied to complex processes such as creativity or meta-cognitive
reasoning. Here, we take a learning perspective and ask whether insight-like
behaviour can occur in simple artificial neural networks, even when the models
only learn to form input-output associations through gradual gradient descent.
We compared learning dynamics in humans and regularised neural networks in a
perceptual decision task that included a hidden regularity to solve the task
more efficiently. Our results show that only some humans discover this
regularity, whose behaviour was marked by a sudden and abrupt strategy switch
that reflects an aha-moment. Notably, we find that simple neural networks with
a gradual learning rule and a constant learning rate closely mimicked
behavioural characteristics of human insight-like switches, exhibiting delay of
insight, suddenness and selective occurrence in only some networks. Analyses of
network architectures and learning dynamics revealed that insight-like
behaviour crucially depended on a regularised gating mechanism and noise added
to gradient updates, which allowed the networks to accumulate "silent
knowledge" that is initially suppressed by regularised (attentional) gating.
This suggests that insight-like behaviour can arise naturally from gradual
learning in simple neural networks, where it reflects the combined influences
of noise, gating and regularisation. | Anika T. Löwe, Léo Touzo, Paul S. Muhle-Karbe, Andrew M. Saxe, Christopher Summerfield, Nicolas W. Schuck | 2023-02-22T12:48:45Z | http://arxiv.org/abs/2302.11351v4 | # Regularised Neural Networks mimic Human Insight
###### Abstract
Humans sometimes show sudden improvements in task performance that have been linked to moments of insight. Such insight-related performance improvements appear special because they are preceded by an extended period of impasse, are unusually abrupt, and occur only in some, but not all, learners. Here, we ask whether insight-like behaviour also occurs in artificial neural networks trained with gradient descent algorithms. We compared learning dynamics in humans and regularised neural networks in a perceptual decision task that provided a hidden opportunity which allowed to solve the task more efficiently. We show that humans tend to discover this regularity through insight, rather than gradually. Notably, neural networks with regularised gate modulation closely mimicked behavioural characteristics of human insights, exhibiting delay of insight, suddenness and selective occurrence. Analyses of network learning dynamics revealed that insight-like behaviour crucially depended on noise added to gradient updates, and was preceded by "silent knowledge" that is initially suppressed by regularised (attentional) gating. This suggests that insights can arise naturally from gradual learning, where they reflect the combined influences of noise, attentional gating and regularisation.
Insights · Neural Networks · Learning
1 Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
2 Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, 14195 Berlin, Germany
3 Department of Physics, Ecole Normale Superieure, Paris, France 75005
4 Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
5 School of Psychology, University of Birmingham, Birmingham B15 2SA, UK
6 Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
7 Gatsby Computational Neuroscience Unit, University College London, London, UK W1T 4JG
8 Sainsbury Wellcome Centre, University College London, London, UK W1T 4JG
9 CIFAR Azrieli Global Scholar, CIFAR, Toronto, Canada
10 Institute of Psychology, Universitat Hamburg, Von-Melle-Park 5, 20254 Hamburg, Germany
*equal contribution
E-mail: [email protected]
## 1 Introduction
The ability to learn from experience is common to all animals and some artificial agents. Neural networks trained with stochastic gradient descent (SGD) are a current theory of human learning that can account for a wide range of learning phenomena, but while standard networks seem to imply that all learning is gradual, humans may sometimes learn in an abrupt manner.
Such non-linear improvements in task performance or problem solving have been described as insights or aha-moments Kohler (1925); Durstewitz et al. (2010), and are often thought to reflect a qualitatively different, discrete learning mechanism Stuyck et al. (2021); Weisberg (2015). One prominent idea, dating back to Gestalt psychology Kohler (1925), is that an insight occurs when an agent has found a novel problem solution by restructuring an existing task representation Kounios and Beeman (2014). It has also been noted that humans often lack the ability to trace back the cognitive process leading up to an insight Jung-Beeman et al. (2004), suggesting that insights involve unconscious processes becoming conscious. Moreover, so-called "aha-moments" can sometimes even be accompanied by a feeling of relief or pleasure in humans Kounios and Beeman (2014); Danek et al. (2014); Kounios and Beeman (2015). Such putative uniqueness of the insight phenomenon would also be in line with work that has related insights to brain regions distinct from those associated with gradual learning Shen et al. (2018); Jung-Beeman et al. (2004). These include, for instance, the anterior temporal gyrus Jung-Beeman et al. (2004); Tik et al. (2018), as well as subcortical areas such as the left amygdala or right hippocampal gyrus Shen et al. (2018). Altogether, these findings have led psychologists and neuroscientists to propose that insights are governed by a distinct learning process Jung-Beeman et al. (2004) that cannot be accounted for by current common theories of learning.
Here, we show that insight-like phenomena can occur without dedicated mechanisms for re-representation or a division of labour between conscious and unconscious processes. Our argument does not concern the subjective experiences related to insights, but focuses on showing how insight-like behaviour can emerge from gradual learning algorithms. Specifically, we aim to explain the following three main observations Schuck et al. (2015, 2022); Gaschler et al. (2019, 2013, 2015): First, insights trigger abrupt behavioural changes, accompanied by meta-cognitive suddenness (a "sudden and unexpected flash") Bowden et al. (2005); Gaschler et al. (2013); Metcalfe and Wiebe (1987); Weisberg (2015). These abrupt behavioural changes are often accompanied by fast neural transitions, which have been observed in humans as well as animals Durstewitz et al. (2010); Karlsson et al. (2012); Miller and Katz (2010); Schuck et al. (2015); Allegra et al. (2020). Second, insights occur selectively in some subjects, while for others improvement in task performance arises only gradually, or never Schuck et al. (2015). Finally, insights occur "spontaneously", i.e. without the help of external cues Friston et al. (2017), and are therefore observed after a seemingly random duration of impasse Ohlsson (1992) or delay after a change in environmental contingencies for different participants. In other words, participants seem to be "blind" to the new solution for an extended period of time, before it suddenly occurs to them. Insights are thus characterised by suddenness, selectivity, and delay.
The idea that insight-like behaviour can arise naturally from gradual learning is supported by previous work on neural networks trained with gradient descent Power et al. (2022). Saxe and colleagues Saxe et al. (2014), for instance, have shown that non-linear learning dynamics, i.e. suddenness in the form of saddle points and stage-like transitions, can result from gradient descent even in linear neural networks, which could explain sudden behavioural improvements. Other work has shown a delayed or stage-like mode of learning in neural networks that is reminiscent of the period of impasse observed in humans, and reflected for instance in the structure of the input data Saxe et al. (2019); Schapiro and McClelland (2009); McClelland and Rogers (2003), or information compression of features that at some point seemed task-irrelevant Flesch et al. (2022); Saxe et al. (2019). Finally, previous work has also found substantial individual differences between neural network instances that are induced by random differences in weight initialisation, noise, or the order of training examples Bengio et al. (2009); Flesch et al. (2018), which can become larger with training Mehrer et al. (2020).
Two factors that influence discontinuities in learning in neural networks are regularisation and gating. Regularisation plays a key role in the suppression of input features. While this avoids overfitting and can help a network to escape a local minimum Liu et al. (2020), it might also cause above mentioned "blindness" to a solution that involves inputs which were once erroneously deemed irrelevant. Gating, on the other hand, is known to cause exponential transitions in learning that are widely seen in multiplicative dynamical systems like the logistic growth model. Both techniques are widely used in artificial neural networks Bishop (2006); Krishnamurthy et al. (2022); Jozefowicz et al. (2015), and are inspired by biological brains Groschner et al. (2022); Poggio et al. (1985); Costa et al. (2017). Regularisation and gating could therefore be important aspects of network structure and training that are related to the temporary impasse followed by a sudden performance change, akin to insight-like behaviour.
Based on these findings, we hypothesised that insight-like behaviour - as characterised by suddenness, selectivity, and delay - can occur in simple neural networks trained with gradient descent. As indicated above, a simple neural network architecture with multiplicative gates and regularisation served as our candidate model. We predicted that due to the multiplicative nature of gating, regularising gates during training could lead to blindness of some relevant features that are key to a solution. We focused specifically on L1-regularisation because it forces gates of irrelevant inputs most strongly towards 0, compared to the less aggressive L2-regularisation. We reason that applying L1-regularisation, besides creating non-linear learning dynamics due to the multiplicative nature of the weights and gates, will lead to a sustained suppression period before the fast transition, similar to the delay observed in humans.
## Results
To study insight-like learning dynamics, 99 participants and 99 neural networks, matched in their behavioural performance to their human counterparts (see below for details), performed a decision task requiring a binary choice: humans judged circular arrays of moving dots Rajananda et al. (2018), while networks performed a symbolic version in which the inputs were two scalars. Dots were characterised by two features with different degrees of noise, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a colour (orange or purple) (Fig.1A). Participants and networks had to learn the correct choice in response to each stimulus from trial-wise binary feedback, and were not instructed which features of the stimulus to pay attention to.
Importantly, the task provided a hidden opportunity to improve one's decision strategy that could be discovered through insight, similar to the spontaneous strategy switch task developed earlier Schuck et al. (2015). Participants first underwent an initial training phase (4 blocks, 100 trials each in humans, 8 blocks/800 trials in networks), during which only the motion direction predicted the correct choice, while stimulus colour was random (_motion phase_, see Fig.1D). Without any announcement, stimulus colour became predictive of the correct response in a later phase, such that from then on both features could be used to determine choice (_motion and colour phase_, 5 blocks for humans and networks, Fig.1D). Such unannounced changes in feature relevance elicit insights, i.e. behaviour exhibits changes that are sudden, delayed and selective, and post-experimental verbal questionnaires indicate that these changes go hand in hand with gaining consciousness about the new regularity Gaschler et al. (2019).
To test whether and when participants employed the hidden colour insight, we assessed whether choices were sensitive to the motion direction (using the colour insight meant that stimulus motion could be ignored). Specifically, following an initial pre-training period (see Methods) the amount of motion noise varied randomly in five levels of motion coherence (5%, 10%, 20%, 30% or 45%; noise variability started in the last two blocks before the onset of the _motion and colour phase_). Behaviour in trials with the highest amount of noise in dot motion (5% coherence, 30 trials per block) was then used to test whether participants had an insight about the usefulness of the colour, as high performance in these trials could only be achieved by using the colour information Schuck et al. (2015). Colour difficulty was constant and consistently allowed participants and networks to identify colour easily. A second measure that we used to investigate insight was a post-experimental questionnaire, in which participants were asked (1) whether they had noticed a rule in the experiment, (2) how long it took them to notice the rule, and (3) whether they had paid attention to colour during their choices. The questionnaire was administered after the _motion and colour phase_, and was followed by an instruction block that served as a sanity check (see Methods).
### Human Behaviour
Data from the _training phase_, during which motion directions were highly coherent and colours changed randomly (Block 1-2, dark grey tiles in Fig 1D), showed that participants learned the response mapping for the four motion directions well (78% correct, t-test against chance: \(t(98)=30.8\), \(p<.001\)). In the following task phase, noise was added to the motion, while the colour remained uncorrelated (_motion phase_, blocks 3-4, grey tiles in Fig. 1D). This resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: \(\chi^{2}(1)\) = 726.36, \(p<.001\); RTs: \(\chi^{2}(1)\) = 365.07, \(p<.001\); N = 99, Fig.2A). Crucially, performance during this phase was heavily diminished in the conditions with the largest amounts of motion noise, i.e. the two lowest coherence conditions: the percentage of correct choices was at only 60% and 63% in the two lowest coherence conditions, and did not change over time (paired t-test block 3 vs 4: \(t(195.9)=-1.13\), \(p=0.3\), \(d=0.16\)). Hence, performance (improvements) largely beyond these low baseline levels can only be attributed to colour use, rather than heightened motion sensitivity.
The noise level continued to influence performance in the _motion and colour phase_, as evidenced by a difference between performance in high vs. low coherence trials (20, 30 & 45% vs 5 & 10 % coherent motion, respectively; \(M=93\pm 6\%\) vs \(M=77\pm 12\%\); \(t(140.9)=12.5\), \(p<.001\), \(d=1.78\), see Fig.2A-B). Notably, however, the onset of the colour correlation triggered performance improvements across all coherence levels (\(t(187.2)=-12.4\), \(p<.001\), \(d=1.8\); end of _motion phase_: \(M=78\pm 7\%\) vs. end of _motion and colour phase_: \(M=91\pm 8\%\)), contrasting the stable performance found during the motion phase and suggesting that at least some participants leveraged colour information once available.
We asked whether these improvements are related to gaining conscious insight by analysing the post-experimental questionnaire. Results show that conscious knowledge about the colour regularity arose in some, but not all, participants: 57.6% (57/99) reported having used colour, while 42.4% indicated that they had not noticed or used the colour. We then checked whether these conscious insights were related to the key behavioural characteristics of suddenness, selectivity, and variable delay. To test for suddenness, we fitted each participant's time course of accuracy on low coherence trials by either (1) a linear ramp or (2) a sigmoid function. While a linear model (free parameters: intercept \(y_{0}\) and slope \(m\)) can capture gradual improvements in behaviour that might occur without insight, a better fit
of a non-linear sigmoid function indicates sudden behavioural transitions (free parameters: slope \(m\), inflection point \(t_{s}\) and function maximum \(y_{max}\)). Performance across participants on low coherence trials was best fit by a non-linear sigmoid function, indicating at least a subset of putative insight participants (BIC sigmoid function: \(M=-6.7\), \(SD=0.7\), protected exceedance probability: 1, BIC linear function: \(M=-6.4\), \(SD=0.5\), protected exceedance probability: 0). The sigmoid function also outperformed a step function with free parameters inflection point \(t_{s}\) and function maximum \(y_{max}\) (BIC step function: \(M=-6.5\), \(SD=0.6\), protected exceedance probability: 0) (Fig.2D-E, Fig.S2).
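For illustration, this model comparison can be set up along the following lines (a sketch; the exact sigmoid parameterisation, the chance-level baseline of 0.5 and the Gaussian-residual BIC are our assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, m, t_s, y_max):
    # saturating performance curve rising from chance (0.5) to y_max
    return 0.5 + (y_max - 0.5) / (1.0 + np.exp(-m * (t - t_s)))

def bic(y, y_hat, n_params):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

def compare_models(acc):
    """acc: per-block accuracy on low-coherence trials for one subject."""
    t = np.arange(len(acc), dtype=float)
    p_sig, _ = curve_fit(sigmoid, t, acc, p0=[1.0, len(acc) / 2.0, 0.9],
                         maxfev=10000)
    p_lin = np.polyfit(t, acc, 1)
    return {'BIC_sigmoid': bic(acc, sigmoid(t, *p_sig), 3),
            'BIC_linear': bic(acc, np.polyval(p_lin, t), 2)}
```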
We next tested insight selectivity, i.e. whether all participants, or only a subset, showed abrupt behavioural transitions, as indicated by participants' self-reports. Chance level of suddenness was determined by an out-of-sample null distribution of sigmoid steepness derived from a control experiment (N = 20), in which participants performed an identical task, except that colour never started to correlate with motion, and hence no insight was possible. Fitting the same sigmoid
Figure 1: Stimuli and task design **(A)** Stimuli and stimulus-response mapping: dot clouds were either coloured in orange or purple and moved to one of the four directions NW, NE, SE, SW with varying coherence. A left response key, “X”, corresponded to the NW/SE motion directions, while a right response key “M” corresponded to NE/SW directions. **(B)** Schematic of simple neural network with regularised gate modulation with colour codes corresponding to respective colour and motion _weights_ and _gates_. Number of nodes shown is the exact number of nodes used in the neural network simulations. **(C)** Trial structure: a fixation cue is shown for a duration that is shuffled between 400, 600, 800 and 1000 ms. The random dot cloud stimulus is displayed for 2000 ms. A response can be made during these entire 2000 ms and a feedback cue will replace the fixation cue at the centre of the stimulus until the end of the stimulus display duration. **(D)** Task structure of the two-alternative forced choice task for humans and neural networks: each block consisted of 100 trials. Colour was predictive of correct choices and correlated with motion directions as well as correct response buttons in the last five blocks (_motion and colour phase_). In the last block, humans and networks were instructed to use colour inputs to respond. In the _motion phase_, colour changed randomly and was not predictive. A first training block only for humans contained only 100% motion coherence trials to familiarise subjects with the S-R mapping. The remaining training blocks contained only high coherence (0.2, 0.3, 0.45) trials.
function to this data, we derived a baseline distribution of the steepness (see Methods for details). Comparing the steepness values (at the inflection point) obtained in our experimental sample to the baseline distribution derived from the control group with no colour correlation, showed that about half of participants (48/99, 48.5%) had values larger than the 100% percentile of the control distribution. This thus suggests that truly abrupt insight occurred selectively in these "insight participants" (Fig.2F). 79.2% of the participants classified as insight subjects also self-reported to have used colour to make correct choices (Fig. S6A-B). Hence, our behavioural marker of unexpectedly sudden performance changes can serve as a valid indicator for insight.
We validated our behavioural metric of selectivity through additional analyses. Splitting behaviour into two separate insight (participants with steepness values larger than the 100% percentile of the control distribution) and no-insight groups showed that, as expected based on the dependency of accuracy and our behavioural metric, insight subjects started to perform significantly better in the lowest coherence trials once the _motion and colour phase_ (Fig.2C) started, (mean proportion correct in _motion and colour phase_: \(M=83\pm 10\%\)), compared to participants without insight (\(M=66\pm 8\%\)) (\(t(92)=9.5\), \(p<.001\), \(d=1.9\)). Unsurprisingly, a difference in behavioural accuracy between insight participants and no-insight participants also held when all coherence levels were included (\(M=91\pm 5\%\) vs. \(M=83\pm 7\%\), respectively, t-test: \(t(95.4)=6.9\), \(p<.001\), \(d=1.4\)). Interestingly, accuracy in the _motion phase_, which was not used in steepness fitting, did not differ between groups (low coherence trials: \(M=59\%\), vs. \(M=62\%\); \(t(94.4)=-1.9\), \(p=0.07\), \(d=0.38\); all noise levels: \(M=76\%\) vs \(M=76\%\),\(t(96)=0.45\), \(p=0.7\), \(d=0.09\)). Reaction times, which are independent from the choices used in model fitting and thus served as a sanity check for our behavioural metric split, reflected the same improvements upon switching to the colour strategy. Subjects that showed insight about the colour rule (\(M=748.47\pm 171.1\) ms) were significantly faster (\(t(96.9)=-4.9\), \(p<.001\), \(d=0.97\)) than subjects that did not (\(M=924.2\pm 188.9\) ms) on low coherence trials, as well as over all noise levels (\(t(97)=-3.8\), \(p<.001\), \(d=0.87\)) (\(M=675.7\pm 133\) ms and \(M=798.7\pm 150.3\) ms, respectively).
Finally, we asked whether insights occurred with random delays, as reported earlier. To quantify this key characteristic, insight moments were defined as the time points of inflection of the fitted sigmoid function, i.e. when performance exhibited abrupt increases (see Methods). We verified the precision of our switch point identification by time-locking the data to the individually fitted switch points. This showed that accuracy steeply increased between the halved task block (50 trials) immediately before vs. after the switch, as expected (\(M=62\%\) vs \(M=83\%\)\(t(89)=-11.2\), \(p<.001\), \(d=2.34\), Fig.2C, Fig. S5A). Additionally, reaction times dropped steeply from pre- to post-switch (\(M=971.63\) ms vs. \(M=818.77\) ms, \(t(87)=3.34\), \(p<.001\), \(d=0.7\)). The average delay of insight onset was 1.3 task blocks (130 trials) (\(\pm 95\) trials / \(0.95\) blocks, Fig.2G). The distribution of delays among insight participants ranged from 0 to 3 blocks after the start of the _motion and colour phase_, and statistically did not differ from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(48)=0.15\), \(p=0.69\)).
Hence, the behaviour of human subjects showed all characteristics of insight: sudden improvements in performance that occurred only in a subgroup and with variable delays.
### Neural Network Behaviour
To probe whether insight-like behaviour can arise in simple neural networks trained with gradient descent, we simulated 99 network models performing the same decision-making task. The networks had two input nodes (\(x_{c}\), \(x_{m}\), for colour and motion, respectively), two input-specific gates (\(g_{m}\), \(g_{c}\)) and weights (\(w_{m}\), \(w_{c}\)), and one output node (\(\hat{y}\), Fig.1B). Network weights and gates were initialised at 0.01. The stimulus features motion and colour were reduced to one input node each, which encoded the colour/motion direction of each trial by taking on either a positive or a negative value. More precisely, given the correct decision \(y=\pm 1\), the activities of the input nodes were sampled from i.i.d. normal distributions with means \(yM_{m}\) and \(yM_{c}\) and standard deviations \(\sigma_{m}=0.01\) and \(\sigma_{c}=0.01\) for motion and colour, respectively. Hence \(M_{m}\) and \(M_{c}\) determine the signal-to-noise ratio in each input. We fixed the colour mean shift \(M_{c}=0.22\), while the mean shifts of the motion node differed by noise level and were fitted individually such that each human participant had one matched network with comparable pre-insight task accuracy in each motion noise condition (see below).
The network multiplied each input node by two parameters, a corresponding weight, and a gate, and returned a decision based on the output node's sign \(\hat{y}\):
\[\hat{y}=\mathrm{sign}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta) \tag{1}\]
where \(\eta\sim\mathcal{N}(0,\sigma)\) is Gaussian noise, and weights and gates are the parameters learned online through gradient descent. To train L1-networks we used a simple squared loss function with L1-regularisation of gate weights:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\lambda(|g_ {m}|+|g_{c}|) \tag{2}\]
with a fixed level of regularisation \(\lambda=0.07\).
During training, Gaussian noise was added to each gradient update to mimic learning noise and induce variability between individual networks (same gradient noise level for all networks). \(\xi\sim\mathcal{N}(\mu_{\xi}=0,\sigma_{\xi}=0.05)\) was added to each gradient update, yielding the following update equations for noisy SGD of the network's weights
\[\Delta w_{m}= -\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{w_ {m}}, \tag{3}\]
and gates,
\[\Delta g_{m}=-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{m})+\xi_{g_{m}} \tag{4}\]
where we have omitted the dependence of all quantities on the trial index \(t\) for clarity; analogous equations hold for the colour weights and gates, with all noise factors \(\xi_{g_{m}},\xi_{w_{m}}\), etc. following the same distribution.
Using this setup, we studied whether L1-regularisation would lead the network to show key characteristics of insight-like behaviour. Specifically, we reasoned that L1-regularisation of the gate weights would introduce competitive dynamics between the input channels that can lead to non-linear learning dynamics. We focused on L1-regularisation because it forces gates of irrelevant inputs most strongly towards 0, compared to L2-regularisation, which is less aggressive in particular once gates are already very small. While the multiplicative nature of the weights and gates results in non-linear quadratic and cubic gradient dynamics, applying L1-regularisation will lead to a sustained suppression period before the fast transition (see Methods).
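For illustration, equations (1)–(4) can be simulated compactly as follows (a minimal sketch; the single motion coherence level, trial counts and phase boundary are simplified relative to the full curriculum described below, and the output-noise standard deviation, not given numerically in the text, is assumed to be 0.01):

```python
import numpy as np

def simulate_l1_network(n_trials=1300, switch_trial=200, alpha=0.6, lam=0.07,
                        M_m=0.2, M_c=0.22, sigma_in=0.01, sigma_out=0.01,
                        sigma_xi=0.05, seed=0):
    """Noisy SGD on the gated two-input network (eqs. 1-4).

    Colour is uncorrelated with the correct response before `switch_trial`
    (motion phase) and fully predictive afterwards (motion and colour phase).
    """
    rng = np.random.default_rng(seed)
    w_m = w_c = g_m = g_c = 0.01
    correct = np.empty(n_trials, dtype=bool)
    for t in range(n_trials):
        y = rng.choice([-1.0, 1.0])
        c = y if t >= switch_trial else rng.choice([-1.0, 1.0])
        x_m = rng.normal(y * M_m, sigma_in)
        x_c = rng.normal(c * M_c, sigma_in)
        eta = rng.normal(0.0, sigma_out)
        out = g_m * w_m * x_m + g_c * w_c * x_c + eta
        correct[t] = np.sign(out) == y
        err = out - y  # derivative of the squared-error term w.r.t. the output
        dw_m = -alpha * x_m * g_m * err + rng.normal(0.0, sigma_xi)
        dw_c = -alpha * x_c * g_c * err + rng.normal(0.0, sigma_xi)
        dg_m = -alpha * (x_m * w_m * err + lam * np.sign(g_m)) \
               + rng.normal(0.0, sigma_xi)
        dg_c = -alpha * (x_c * w_c * err + lam * np.sign(g_c)) \
               + rng.normal(0.0, sigma_xi)
        w_m, w_c, g_m, g_c = w_m + dw_m, w_c + dw_c, g_m + dg_m, g_c + dg_c
    return correct
```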
Networks received an extended pre-task training phase of 6 blocks, but then underwent a training curriculum precisely matched to the human task (2 blocks of 100 trials in the _motion phase_ and 5 blocks in the _motion and colour phase_, see Fig. 1D). We adjusted direction specificity of motion inputs (i.e. difference in distribution means from which \(x_{m}\) was drawn for left vs right trials) separately for each participant and coherence condition, such that performance in the motion phase was equated between each pair of human and network (Fig.3A, see Methods). Moreover, the colour and
Figure 2: Humans: task performance and insight-like strategy switches **(A)** Accuracy (% correct) during the _motion phase_ increases with increasing motion coherence. N = 99, error bars signify standard error of the mean (SEM). **(B)** Accuracy (% correct) over the course of the experiment for all motion coherence levels. First dashed vertical line marks the onset of the colour predictiveness (_motion and colour phase_), second dashed vertical line the “instruction” about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. **(C)** Switch point-aligned accuracy on lowest motion coherence level for insight (48/99) and no-insight (51/99) subjects. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. **(D)** Illustration of the sigmoid function for different slope steepness parameters. **(E)** Difference between BICs of the linear and sigmoid function for each human subject. N = 99. **(F)** Distributions of fitted slope steepness at inflection point parameter for control experiment and classified insight and no-insight groups. **(G)** Distribution of switch points. Dashed vertical line marks onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).
motion input sequences used for network training were sampled from the same ten input sequences that humans were exposed to. A learning rate of \(\alpha=0.6\) (the same for all networks) was selected to match average learning speed.
### L1-regularised Neural Networks
Networks learned the motion direction-response mapping well in the training phase, during which colour inputs changed randomly and output should therefore depend only on motion inputs (_motion phase_, 75% correct, t-test against chance: \(t(98)=33.1\), \(p<.001\), the accuracy of humans in this phase was \(M=76\pm 6\%\)). As in humans, adding noise to the motion inputs (_motion phase_) resulted in an accuracy gradient that depended on noise level (linear mixed effects model of accuracy: \(\chi^{2}\)(1) = 165.61, \(p<.001\); N = 99, Fig.3A), as expected given that input distributions were set such that network performance would equate to human accuracy (Fig.3A-B). Networks also exhibited low and relatively stable performance levels in the two lowest coherence conditions (58% and 60%, paired t-test to assess stability in the _motion phase_: \(t(98)=-0.7\), \(p=0.49\), \(d=0.02\)), and had a large performance difference between high vs low coherence trials (\(M=88\%\pm 6\%\) vs. \(M=74\pm 13\%\), \(t(137.3)=9.6\), \(p<.001\), \(d=1.36\) for high, i.e. \(\geq\) 20% coherence, vs. low trials). Finally, humans and networks also performed comparably well at the end of learning (last block of the _colour and motion phase_: \(M(nets)=79\%\pm 17\%\) vs. \(M(humans)=82\pm 17\%\), \(t(195.8)=1.1\), \(p=0.27\), \(d=0.16\), Fig. S8C), suggesting that at least some networks did start to use colour inputs. Hence, networks' baseline performance and learning were successfully matched to humans.
To look for characteristics of insight in network performance, we employed the same approach used for modelling human behaviour, and investigated suddenness, selectivity, and delay. To identify sudden performance improvements, we fitted each network's time course of accuracy on low coherence trials by (1) a linear model and (2) a non-linear sigmoid function, which would indicate gradual performance increases or insight-like behaviour, respectively. As in humans, network performance on low coherence trials was best fit by a non-linear sigmoid function, indicating at least a subset of putative "insight networks" (BIC sigmoid function: \(M=-10\), \(SD=1.9\), protected exceedance probability: 1, BIC linear function: \(M=-9\), \(SD=2.4\), protected exceedance probability: 0)(Fig.3D).
We then tested whether insight-like behaviour occurred only in a subset of networks (selectivity) by assessing in how many networks the steepness of the performance increase exceeded a chance level defined by a baseline distribution of the steepness. As in humans, we ran simulations of 99 control networks with the same architecture, which were trained on the same task except that during the _motion and colour phase_, the two inputs remained uncorrelated. About half of networks (48/99, 48.5%) had steepness values larger than the 100% percentile of the control distribution, matching exactly the value we observed in the human sample. The L1-networks that showed sudden performance improvements were not matched to insight humans more often than chance (\(\chi^{2}\)(47) = 27.9, \(p=0.99\)), suggesting that network variability did not originate from baseline performance levels or trial orders. Hence, a random subset of networks showed sudden performance improvements comparable to those observed during insight moments in humans (Fig.3E).
For simplicity when comparing network behaviour to humans, we will refer to the two groups as "insight" and "no-insight" networks. Analysing behaviour separately for the insight and no-insight networks showed that switches to the colour strategy improved the networks' performance on the lowest coherence trials once the _motion and colour phase_ started, as compared to networks that did not show a strategy shift (\(M=83\pm 11\%\), vs. \(M=64\pm 9\%\), respectively, \(t(89.8)=9.2\), \(p<.001\), \(d=1.9\), see Fig.3C). The same performance difference between insight and no-insight networks applied when all coherence levels of the _motion and colour phase_ were included (\(M=88\pm 7\%\) vs. \(M=77\pm 6\%\), \(t(93.4)=7.8\), \(p<.001\), \(d=1.57\)). Unexpectedly, insight networks performed slightly worse on low coherence trials in the motion phase, i.e. before the change in predictiveness of the features (\(t(97)=-3.1\), \(p=0.003\), \(d=0.62\)) (insight networks: \(M=58\pm 8\%\); no-insight networks: \(M=64\pm 9\%\)), in contrast to the lack of pre-insight differences we found in humans.
Finally, we asked whether insight-like behaviour occurred with random delays in neural networks, again scrutinising the time points of inflection of the fitted sigmoid function, i.e. when performance exhibited abrupt increases (see Methods). Time-locking the data to these individually fitted switch points verified that, as in humans, the insight-like performance increase was particularly evident around the switch points: accuracy was significantly increased between the halved task blocks preceding and following the insight-like behavioural switch for colour-switching networks (\(M=66\pm 8\%\) vs. \(M=86\pm 7\%\), \(t(91.6)=-12.7\), \(p<.001\), \(d=2.6\), see Fig.3C, Fig. S5B).
Among insight networks, the delay distribution ranged from 1 to 4 blocks after the start of the _motion and colour phase_, and did not differ from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(48)=0.13\), \(p=0.85\)). The average delay of insight-like switches was 1.75 task blocks (\(\pm 1.05\)), corresponding to 175 trials (Fig.3F). The insight networks' delay was thus slightly longer than for humans (\(M=130\pm 95\) trials vs. \(M=175\pm 105\) trials, \(t(92.7)=-2.1\), \(p=0.04\), \(d=0.42\)). The variance of insight-induced strategy switch onsets, as well as the relative variance in the abruptness of the switch onsets, thus qualitatively matched our behavioural results observed in human participants. The behaviour of L1-regularised neural networks therefore showed all characteristics
of human insight: sudden improvements in performance that occurred selectively only in a subgroup with variable random delays.
### L2-regularised Neural Networks
Following our observation that L1-regularised networks exhibited human-like insight behaviour, we investigated whether this was specific to the form of regularisation. We therefore trained otherwise identical networks with an L2-regularisation term on the gate weights. We hypothesised that L2-regularisation would also lead to competitiveness between input nodes, but to a lower extent than L1-regularisation. In particular, we reasoned that because the networks' colour gates would not shrink as close to 0 during the _motion phase_, this would lead to more frequent and earlier insight-like behavioural switches.
While L2-regularised gate weights led to switches similar in abruptness to those previously observed (Fig. S7C), such insight-like behaviours were indeed much more frequent and clustered: 96% of networks switched to a colour strategy, with a switch point distribution that was much more centred around the onset of the colour predictiveness (Fig. S7F; average delay of 1 task block, \(SD=1.1\), corresponding to 100 trials after onset of the colour correlation, i.e. the _motion and colour phase_). This was significantly shorter than for L1-regularised networks (\(M=1.05\pm 1.1\) vs. \(M=1.75\pm 1.05\), \(t(59.6)=4\), \(p<0.001\), \(d=0.9\)) and also differed from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(95)=0.26\), \(p=0.005\)). Additionally, performance on the lowest coherence level in the last block of the _colour and motion phase_ before colour instruction was centred just below ceiling and thus did not indicate a range of colour use like humans and L1-regularised networks (\(M(L2-networks)=97\%\pm 2\%\) vs. \(M(humans)=82\pm 17\%\), \(t(101.6)=-8.8\), \(p<.001\), \(d=1.25\), Fig. S8C).
While L2-regularised networks thus showed abrupt behavioural transitions, they failed to show the other two key characteristics of insight: selectivity and delay.
Figure 3: L1-regularised neural networks: task performance and insight-like strategy switches **(A)** Accuracy (% correct) during the _motion phase_ increases with increasing motion coherence. N = 99, error bars signify SEM. Grey line is human data for comparison. **(B)** Accuracy (% correct) over the course of the experiment for all motion coherence levels. First dashed vertical line marks the onset of the colour predictiveness (_motion and colour phase_), second dashed vertical line the “instruction” about colour predictiveness. Blocks shown are halved task blocks (50 trials each). N = 99, error shadows signify SEM. **(C)** Switch point-aligned accuracy on lowest motion coherence level for insight (48/99) and no-insight (51/99) networks. Blocks shown are halved task blocks (50 trials each). Error shadow signifies SEM. **(D)** Difference between BICs of the linear model and sigmoid function for each network. **(E)** Distributions of fitted slope steepness at inflection point parameter for control networks and classified insight and no-insight groups. **(F)** Distribution of switch points. Dashed vertical line marks onset of colour predictiveness. Blocks shown are halved task blocks (50 trials each).
### Non-regularised Neural Networks
In non-regularised networks, the effects observed in L2-regularised networks are enhanced. 99% of the networks started using colour inputs (Fig. S8A), but colour use emerged in a more linear, less abrupt way than for L1- or L2-regularised networks. Additionally, there was very little delay of only 0.7 task blocks (70 trials, \(\pm 0.25\) blocks) between the onset of the _motion and colour phase_ and the start of the networks making use of the colour inputs' predictiveness (Fig. S8B). As for L2-networks, this delay was significantly shorter than for L1-regularised networks (\(M=0.7\pm 0.55\) vs. \(M=1.75\pm 1.05\), \(t(49.3)=6.6\), \(p<0.001\), \(d=1.6\)) and also differed from a normal distribution taking into account the hazard rate (Exact two-sided Kolmogorov-Smirnov test: \(D(98)=0.35\), \(p<.001\)). Similarly, performance on the lowest coherence level in the last block indicated that all networks used colour inputs (\(M=100\%\pm 0.3\%\) vs. \(M=82\pm 17\%\), \(t(98)=-10.4\), \(p<.001\), \(d=1.5\), Fig. S8C). Thus, non-regularised networks also did not show the key insight characteristics of selectivity and delay.
### Origins of Insight-like Behaviour in Neural Networks
Having established the behavioural similarity between L1-networks and humans in an insight task, we asked what gave rise to insight-like switches in some networks, but not others. We therefore investigated the dynamics of gate weights and the effects of noise in insight vs. no-insight networks, and the role of regularisation strength parameter \(\lambda\).
#### Colour Gradients Increase after Colour Becomes Predictive
Our first question was how learning about stimulus colour differed between insight and no-insight L1 networks, as expressed by the dynamics of network gradients. We time-locked the time courses of gradients to each network's individual switch point. Right when the switch occurred (at t of the estimated switch), colour gate weight gradients were significantly larger in insight compared to no-insight L1-networks (\(M=0.06\pm 0.06\) vs. \(M=0.02\pm 0.03\), \(t(73.2)=5.1\), \(p<.001\), \(d=1.05\)), while this was not true for motion gate weight gradients (\(M=0.18\pm 0.16\) vs. \(M=0.16\pm 0.16\), \(t(97)=0.7\), \(p=0.5\), \(d=0.13\)).
Notably, insight networks had larger colour gate weight gradients even before any behavioural changes were apparent, right at the beginning of the _motion and colour phase_ (first 5 trials of _motion and colour phase_: \(M=0.05\pm 0.07\) vs. \(M=0.01\pm 0.01\); \(t(320)=8.7\), \(p<.001\)), whereas motion gradients did not differ (\(t(576.5)=-0.1\), \(p=0.95\)). This increase in colour gate weight gradients for insight networks happened within a few trials after correlation onset (colour gradient last trial of _motion phase_: \(M=0\pm 0\) vs. 5th trial of _motion and colour phase_: \(M=0.06\pm 0.08\); \(t(47)=-5.6\), \(p<.001\), \(d=1.13\)), and suggests that insight networks start early to silently learn more about colour inputs compared to their no-insight counterparts. A change point analysis considering the mean and variance of the gradients confirmed the onset of the _motion and colour phase_ to be the change point of the colour gradient mean, with a difference of \(0.04\) between the consecutive pre-change and change time points for insight networks vs \(0.005\) for no-insight networks (with a change point detected two trials later), indicating considerable learning about colour for insight networks.
### "Silent" Colour Knowledge Precedes Insight-like Behaviour
A core feature of our network architecture is that inputs were multiplied by two factors, a gate \(g\), and a weight \(w\), but only gates were regularised. This meant that some networks might have developed larger colour weights, but still showed no signs of colour use, because the gates were very small. This could explain the early differences in gradients reported above. To test this idea, we investigated the absolute size of colour gates and weights of insight vs no-insight L1-networks before and after insight-like switches had occurred.
Comparing gates at the start of learning (first trial of the _motion and colour phase_), there were no differences between insight and no-insight networks for either motion or colour gates (colour gates: \(M=0\pm 0.01\) vs. \(M=0\pm 0.01\); \(t(95.3)=0.8\), \(p=0.44\), motion gates: \(M=0.5\pm 0.3\) vs. \(M=0.6\pm 0.3\); \(t(93.1)=-1.7\), \(p=0.09\), see Fig.4A, Fig.4H,J). Around the individually fitted switch points, however, the gates of insight and no-insight networks differed only for colour gates (colour gates: \(0.2\pm 0.2\) vs \(0.01\pm 0.02\) for insight vs no-insight networks, \(t(48)=6.7\), \(p<0.001\), \(d=1.4\), motion gates: \(0.5\pm 0.3\) vs \(0.5\pm 0.3\) for insight vs no-insight networks, \(t(95.6)=0.2\), \(p=0.9\), \(d=0.04\)). Insight networks' increased use of colour inputs was particularly evident at the end of learning (last trial of the _motion and colour phase_) and reflected in larger colour gates (\(0.7\pm 0.3\) vs \(0.07\pm 0.2\) for insight vs no-insight networks, \(t(73.7)=13.4\), \(p<0.001\), \(d=2.7\)) while the reverse was true for motion gates (\(M=0.2\pm 0.2\) vs \(M=0.5\pm 0.3\), respectively, \(t(81)=-7.5\), \(p<0.001\), \(d=1.5\), see Fig.4B, Fig.4H,J). Hence, differences in gating between network subgroups were only present after, but not before learning, and did not explain the above reported gradient differences or which network would show insight-like behaviour.
A different pattern emerged when investigating the weights of the networks. Among insight networks, colour weights were significantly larger already at the start of learning (first trial of the _motion and colour phase_), as compared to no-insight networks (insight: \(M=1.2\pm 0.6\); no-insight: \(M=0.4\pm 0.3\), \(t(66.2)=8.1\), \(p<.001\), \(d=1.7\), see Fig.4C, Fig.4G,I). This was not true for motion weights (insight: \(M=3.4\pm 0.7\); no-insight: \(M=3.5\pm 0.5\), \(t(89.5)=-1.1\), \(p=0.3\), \(d=0.2\), see Fig.4C, Fig.4G,I). Thus, colour information appeared to be encoded in the weights of insight networks already before any insight-like switches occurred. Because the colour gates were suppressed through the L1-regularisation mechanism before learning, the networks did not differ in any observable colour sensitivity. The increase of colour gates reported above could then unlock this "silent knowledge" of colour relevance.
To experimentally test the effect of pre-learning colour weights, we ran a new sample of L1-networks (\(N=99\)), and adjusted the colour and motion weight of each respective network to the mean absolute colour and motion weight size we observed in insight networks at start of learning (first trial of _motion and colour phase_). Gates were left untouched. This increased the number of insight networks from 48.5% to 70.7%, confirming that encoding of colour information at an early stage was an important factor for later switches, but also not sufficient to cause insight-like behaviour in all networks. Note that before weights adjustments were made, the performance of the new networks did not differ from the original L1-networks (\(M=0.8\pm 0.07\) vs \(M=0.8\pm 0.07\), \(t(195)=0.2\), \(p=0.9\), \(d=0.03\)). In our new sample, networks that would later show insight-like behaviour or not also did not differ from each other (insight: \(M=0.7\pm 0.07\) vs \(M=0.7\pm 0.07\), \(t(100.9)=1.4\), \(p=0.2\), \(d=0.3\), no-insight: \(M=0.8\pm 0.05\) vs \(M=0.8\pm 0.07\), \(t(71)=0.9\), \(p=0.4\), \(d=0.2\)). Weight and gate differences between L1- and L2-networks are reported in the Supplementary Material (see also Fig.4E-F).
### Noise is Needed For Insight-like Behaviour
One possible factor that could explain the early differences between the weights of network subgroups is noise. The networks were exposed to noise at two levels: on each trial noise was added at the output stage (\(\eta\sim\mathcal{N}(0,\,\sigma_{\eta}^{2})\)), and to the gate and weight gradients during updating (\(\xi\sim\mathcal{N}(0,\,\sigma_{\xi}^{2})\)).
We probed whether varying the level of noise added during gradient updating, i.e. \(\sigma_{\xi}\), would affect the proportion of networks exhibiting insight-like behaviour. Parametrically varying the variance of noise added to colour and motion gates and weights led to increases in insight-like behaviour, from not a single insight network when no noise was added to 100% insight networks when \(\sigma_{\xi}\) reached values larger than approximately 0.05 (Fig.5A). Since gate and weight updates were coupled (see Eq. 4-7), noise during one gradient update could in principle affect other updates as well. We therefore separately manipulated the noise added to updates of colour gates and weights, motion gates and weights, all weights and all gates. This showed that adding noise to only weights during the updates was sufficient to induce insight-like behaviour (Fig.5B). In principle, adding noise to only gates was sufficient for insight-like switches as well, although noise applied to the gates had to be relatively larger to achieve the same effect as applying noise to weight gradients (Fig.5B), presumably due to the effect of regularisation. Adding noise only to the gradients of motion gates or weights, but not to the colour gradients, was not sufficient to induce insight-like switches (Fig.5B). On the other hand, noise added only to the colour parameter updates quickly led to substantial amounts of insight-like behavioural switches (Fig.5B).
An analysis of _cumulative_ noise showed that the effects reported above are mostly about momentary noise fluctuations: cumulative noise added to the output did not differ between insight and no-insight networks at either the start (first trial of the _motion and colour phase_) or end of learning (last trial of the _motion and colour phase_) (start: \(M=-0.3\pm 4.7\) vs. \(M=-0.6\pm 3.9\); \(t(91.2)=0.4\), \(p=0.7\), end: \(M=0.6\pm 7.1\) vs. \(M=0.5\pm 7.1\); \(t(96.7)=0.07\), \(p=1\)), and the same was true for cumulative noise added during the gradient updates to weights and gates (see Supplementary Material for details).
We therefore conclude that Gaussian noise added to updates of particularly colour gate weights, in combination with "silent knowledge" about colour information stored in suppressed weights, is a crucial factor for insight-like behavioural changes.
### Regularisation Parameter \(\lambda\) Affects Insight Delay and Frequency
In our previous results, the regularisation parameter \(\lambda\) was arbitrarily set to \(0.07\). We next tested the effect of \(\lambda\) on insight-like behaviour. The number of L1-regularised insight networks decreased linearly with increasing \(\lambda\) (Fig.5C). \(\lambda\) further had an effect on the delay of the insight-like switches, with smaller \(\lambda\) values leading to decreased average delays of switching to a colour strategy after the predictiveness of the inputs had changed (Fig.5D). The regularisation parameter \(\lambda\) thus affects two of the key characteristics of human insight: selectivity and delay.
Figure 4: Gate and weight size differences at the start and end of learning and dynamics. Colour and motion gates at **(A)** the first trial and **(B)** the last trial of the _motion and colour phase_. **(C)** Colour and motion weights at the first trial and **(D)** the last trial of the _motion and colour phase_. Error bars signify SEM. **(E)** Gate weight sizes for colour and motion gate weights at the first trial and **(F)** the last trial of the _motion and colour phase_ for L1- and L2-regularised networks. **(G)** Weights of insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(H)** Gates of insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(I)** Weights of no-insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM. **(J)** Gates of no-insight L1-networks. The dashed vertical line marks the onset of the _motion and colour phase_. Error shadows signify SEM.
## Discussion
We investigated insight-like learning behaviour in humans and neural networks. In a binary decision-making task with a hidden regularity that entailed an alternative way to solve the task more efficiently, a subset of regularised neural networks with multiplicative gates on their input channels (as an attention mechanism) displayed spontaneous, jump-like learning that signified the sudden discovery of the hidden regularity: mysterious insight moments boiled down to their simplest expression.
Networks exhibited all key characteristics of human insight-like behaviour in the same task (suddenness, selectivity, delay). Crucially, neural networks were trained with standard stochastic gradient descent that is often associated with gradual learning. Our results therefore suggest that the behavioural characteristics of aha-moments can arise from gradual learning mechanisms, and hence suffice to mimic human insight.
Network analyses identified the factors which caused insight-like behaviour in L1-networks: noise added during the gradient computations accumulated to non-zero weights in some networks. As long as colour information was not yet useful, i.e. prior to the onset of the hidden regularity, close-to-0 colour gates rendered these weights "silent", such that no effects on behaviour could be observed. Once the hidden colour regularity became available, the non-zero colour weights helped to trigger non-linear learning dynamics that arise during gradient updating and depend on the starting point. Hence, our results hint at important roles of "attentional" gating, noise, and regularisation as the computational origins of sudden, insight-like behavioural changes. We report several findings that are in line with this interpretation: addition of gradient noise \(\xi\) in particular to the colour weights and gates, pre-learning adjustment of colour weights, and a reduction of the regularisation parameter \(\lambda\) all increased insight-like behaviour. We note that our networks did not have a hidden layer, demonstrating that no hidden layer is needed to produce non-linear learning dynamics.
Our findings have implications for the conception of insight phenomena in humans. While present-day machines clearly do not have the capacity to have aha-moments due to their lack of meta-cognitive awareness, our results show that the remarkable behavioural signatures of insights by themselves do not necessitate a dedicated process. This raises
Figure 5: Influence of the Gaussian noise standard deviation \(\sigma_{\xi}\) and regularisation parameter \(\lambda\) on insight-like switches in L1-regularised networks. **(A)** Influence of the noise standard deviation \(\sigma_{\xi}\) applied to all gradient updates on the frequency of switches to a colour strategy (number of networks defined as having “insight”). The frequency of insight-like switches increases gradually with \(\sigma_{\xi}\) until it plateaus. Error bars are SD. We ran 10 x 99 simulations. **(B)** Effects of noise added only to all weights (\(\sigma_{\xi_{w}}\)), all gates (\(\sigma_{\xi_{g}}\)), all motion parameters (i.e. motion weights and motion gates, \(\sigma_{\xi_{g_{m}},w_{m}}\)) or all colour parameters (\(\sigma_{\xi_{g_{c}},w_{c}}\)) on the frequency of insight-like switches. The frequency of insight-like switches increases gradually with \(\sigma_{\xi_{w}}\) until it plateaus (dashed purple line), while it jumps abruptly after relatively high levels of \(\sigma_{\xi_{g}}\) (solid purple line). Noise on the motion parameters alone (\(\sigma_{\xi_{g_{m}},w_{m}}\)) is not sufficient for insight-like switches (lightest purple shade), but a small \(\sigma_{\xi_{g_{c}},w_{c}}\) is sufficient for the frequency of insight networks to plateau (darkest purple shade). Error bars are SD. We ran 10 x 99 simulations. Colour scheme as in Fig. 1B. **(C)** Influence of \(\lambda\) on the frequency of switches to a colour strategy (number of networks defined as having “insight”). The frequency of insight-like switches declines with increasing \(\lambda\) for L1-regularised networks, but is largely unaffected for L2-regularised networks. **(D)** Influence of \(\lambda\) on the averaged switch points. The averaged switch point occurs later in the task with increasing \(\lambda\) for both L1- and L2-regularised networks. Error bars signify SEM.
the possibility that sudden behavioural changes which occur even during gradual learning could in turn lead to the subjective effects that accompany insights Frensch et al. (2003); Esser et al. (2022).
Our results also highlight noise and regularisation as aspects of brain function that are involved in the generation of insights. Cellular and synaptic noise is omnipresent in brain activity Faisal et al. (2008); Waschke et al. (2021), and has a number of known benefits, such as stochastic resonance and the robustness that comes with probabilistic firing of neurons based on statistical fluctuations due to Poissonian neural spike timing Rolls et al. (2008). It has also been noted that noise plays an important role in jumps between brain states, when noise provokes transitioning between attractor states Rolls and Deco (2012). Previous studies have therefore noted that stochastic brain dynamics can be advantageous, allowing e.g. for creative problem solving (as in our case), exploratory behaviour, and accurate decision making Rolls and Deco (2012); Faisal et al. (2008); Garrett et al. (2013); Waschke et al. (2021). Our work adds to this literature a computationally precise explanation of how noise can lead to insights. Questions about whether inter-individual differences in neural variability predict insights Garrett et al. (2013), or about whether noise that occurs during synaptic updating is crucial, remain an interesting topic for future research.
Previous work has also suggested the occurrence and possible usefulness of regularisation in the brain. Regularisation has for instance been implicated in synaptic scaling, which helps to adjust synaptic weights in order to maintain a global firing homeostasis Lee et al. (2019), thereby aiding energy requirements and reducing memory interference Tononi and Cirelli (2014); De Vivo et al. (2017). It has also been proposed that regularisation modulates the threshold for induction of long-term potentiation Lee et al. (2019). These mechanisms therefore present possible synaptic factors that contribute to insight-like behaviour in humans and animals. We note that synaptic scaling has often been linked to sleep Tononi and Cirelli (2014), and regularisation during sleep has also been suggested to help avoid overfitting to experiences made during the day, and therefore to aid generalisation Hoel (2021). Since our experiments were conducted in an uninterrupted fashion during daylight, our findings cannot reflect any sleep effects. The findings above nevertheless suggest a possible link between sleep, synaptic scaling and insight Wagner et al. (2004); Lacaux et al. (2021).
On a more cognitive level, regularisation has been implicated in the context of heuristics. In this notion, regularisation has been proposed to function as an infinitely strong prior in a Bayesian inference framework Parpart et al. (2018). This infinitely strong prior would work as a sort of attention mechanism and regularise input and information in a way that is congruent with the specific prior, whereas a finite prior would under this assumption enable learning from experience Parpart et al. (2018). Another account regards cognitive control as regularised optimisation Ritz et al. (2022). According to this theory, better transfer learning is supported by effort costs regularising towards more task-general policies. It therefore seems possible that the factors that impact regularisation during learning can also lead to a neural switch between states that might be more or less likely to govern insights.
The occurrence of insight-like behaviour with the same characteristics as found in humans was specific to L1-regularised networks, while no comparable similarity occurred in L2- or non-regularised networks. Although L2-regularised neural networks learned to suppress initially irrelevant colour feature inputs and showed abrupt performance increases reminiscent of insights, only L1 networks exhibited a wide distribution of time points when the insight-like switches occur (delay) as well as a selectivity of the phenomenon to a subgroup of networks, as found in humans. We note that L2- and non-regularised networks technically performed better on the task, because they collectively improve their behavioural efficiency sooner. One important question therefore remains under which circumstances L1 would be the most beneficial form of regularisation. One possibility could be that the task is too simple for L1-regularisation to be beneficial. It is conceivable that L1-regularisation only starts being advantageous in more complex task settings when generalisation across task sets is required and a segregation of task dimensions to learn about at a given time would prove useful.
Taken together, gradual training of neural networks with gate modulation leads to insight-like behaviour as observed in humans, and points to roles of regularisation, noise and "silent knowledge" in this process. These results make an important contribution to the general understanding of learning dynamics and representation formation in environments with non-stationary feature relevance in both biological and artificial agents.
## Methods
### Task
#### Stimuli
We employed a perceptual decision task that required a binary choice about circular arrays of moving dots Rajananda et al. (2018), similar to the spontaneous strategy switch task developed earlier Schuck et al. (2015). Dots were characterised by two features, (1) a motion direction (four possible orthogonal directions: NW, NE, SW, SE) and (2) a
colour (orange or purple, Fig.1A). The noise level of the motion feature was varied in 5 steps (5%, 10%, 20%, 30% or 45% coherent motion), making motion judgement relatively harder or easier. Colour difficulty was constant, thus consistently allowing easy identification of the stimulus colour. The condition with the most noise (5% coherence) occurred slightly more frequently than the other conditions (30 trials per 100, vs. 10, 20, 20, 20 for the other conditions).
The task was coded in JavaScript and made use of the jsPsych 6.1.0 plugins. Participants were required to use desktop computers (no tablets or mobile phones) with screens of at least 13 inches in diagonal width. Subjects were further restricted to use either a Firefox or Google Chrome browser to run the experiment.
On every trial, participants were presented a cloud of 200 moving dots with a radius of 7 pixels each. In order to avoid tracking of individual dots, dots had a lifetime of 10 frames before they were replaced. Within the circle shape of 400 pixel width, a single dot moved 6 pixel lengths in a given frame. Each dot was either designated to be coherent or incoherent and remained so throughout all frames in the display, whereby each incoherent dot followed a randomly designated alternative direction of motion.
The trial duration was 2000 ms and a response could be made at any point during that time window. After a response had been made via one of the two button presses, the white fixation cross at the centre of the stimulus would turn into a binary feedback symbol (happy or sad smiley) that would be displayed until the end of the trial (Fig.1C). An inter trial interval (ITI) of either 400, 600, 800 or 1000 ms was randomly selected. If no response was made, a "TOO SLOW" feedback was displayed for 300 ms before being replaced by the fixation cross for the remaining time of the ITI.
### Task Design
For the first 400 trials, the _motion phase_, the correct binary choice was only related to stimulus motion (two directions each on a diagonal were mapped onto one choice), while the colour changed randomly from trial to trial (Fig.1D). For the binary choice, participants were given two response keys, "X" and "M". The NW and SE motion directions corresponded to a left key press ("X"), while NE and SW corresponded to a right key press ("M") (Fig.1A). Participants received trial-wise binary feedback (correct or incorrect), and therefore could learn which choice they had to make in response to which motion direction (Fig.1C).
We did not specifically instruct participants to pay attention to the motion direction. Instead, we instructed them to learn how to classify the moving dot clouds using the two response keys, so that they would maximise their number of correct choices. To ensure that participants would pick up on the motion relevance and the correct stimulus-response mapping, motion coherence was set to be at 100% in the first block (100 trials), meaning that all dots moved towards one coherent direction. Participants learned this mapping well and performed close to ceiling (87% correct, t-test against chance: \(t(98)=37.4\), \(p<.001\)). In the second task block, we introduced the lowest, and therefore easiest, three levels of motion noise (20%, 30% and 45% coherent motion), before starting to use all five noise levels in block 3. Since choices during this phase should become solely dependent on motion, they should be affected by the level of motion noise. We assessed how well participants had learned to discriminate the motion direction after the fourth block. Participants that did not reach an accuracy level of at least 85% in the three lowest motion noise levels during this last task block of the pre-training were excluded from the _motion and colour phase_. All subjects were notified before starting the experiment, that they could only advance to the second task phase (_motion and colour phase_, although this was not communicated to participants) if they performed well enough in the first phase and that they would be paid accordingly for either one or two completed task phases.
After the _motion phase_, in the _motion and colour phase_, the colour feature became predictive of the correct choice in addition to the motion feature (Fig.1D). This meant that each response key, and thus motion direction diagonal, was consistently paired with one colour, and that colour was fully predictive of the required choice. Orange henceforth corresponded to a correct "X" key press and a NW/SE motion direction, while purple was predictive of a correct "M" key press and NE/SW motion direction (Fig.1A). This change in feature relevance was not announced to participants, and the task continued for another 400 trials as before - the only change being the predictiveness of colour.
Before the last task block we asked participants whether they 1) noticed a rule in the experiment, 2) how long it took until they noticed it, 3) whether they used the colour feature to make their choices and 4) to replicate the mapping between stimulus colour and motion directions. We then instructed them about the correct colour mapping and asked them to rely on colour for the last task block. This served as a proof that subjects were in principle able to do the task based on the colour feature and to show that, based on this easier task strategy, accuracy should be near ceiling for all participants in the last instructed block.
### Human Participants
Participants between 18 and 30 years of age were recruited online through Prolific.
Participation in the study was contingent on showing learning of the stimulus classification. Hence, to assess whether participants had learned to correctly identify motion directions of the moving dots, we probed their accuracy on the three easiest, least noisy coherence levels in the last block of the uncorrelated task phase. If subjects reached an accuracy level of at least 85%, they were selected for participation in the experiment.
Ninety-six participants were excluded due to insufficient accuracy levels after the _motion phase_ as described above. 99 participants learned to classify the dots' motion direction, passed the accuracy criterion and completed both task phases. These subjects make up the final sample included in all analyses. 34 participants were excluded due to various technical problems or premature quitting of the experiment. All participants gave informed consent prior to beginning the experiment. The study protocol was approved by the local ethics committee of the Max Planck Institute for Human Development. Participants received 3\(\upvarepsilon\) for completing only the first task phase and 7\(\upvarepsilon\) for completing both task phases.
### Neural Networks
#### L1-regularised Neural Networks
We used a simple neural network model to reproduce the human behavioural observations in a simplified supervised learning regression setting: a network with two input nodes, two input gates and one output node, trained on the same decision-making task (Fig.1B).
The network received two inputs, \(x_{m}\) and \(x_{c}\), corresponding to the stimulus motion and colour, respectively, and had one output, \(\hat{y}\). Importantly, each input had one associated multiplicative gate (\(g_{m}\), \(g_{c}\)) such that output activation was defined as \(\hat{y}=\mathrm{sign}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta)\) where \(\eta\sim\mathcal{N}(0,\sigma)\) is Gaussian noise (Fig.1B).
To introduce competitive dynamics between the input channels, we added L1-regularisation on the gate weights \(g\), resulting in the following loss function:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\lambda( |g_{m}|+|g_{c}|) \tag{5}\]
The network was trained in a gradual fashion through online gradient descent with Gaussian white noise \(\xi\) added to the gradient update and a fixed learning rate \(\alpha\). Given the loss function, this yields the following update equations for noisy stochastic gradient descent (SGD):
\[\Delta w_{m}=-\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{w_{m}} \tag{6}\]
\[\Delta g_{m}=-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{m})+\xi_{g_{m}} \tag{7}\]
\[\Delta w_{c}=-\alpha x_{c}g_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)+\xi_{w_{c}} \tag{8}\]
\[\Delta g_{c}=-\alpha x_{c}w_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{c})+\xi_{g_{c}} \tag{9}\]
with \(\lambda=0.07\), \(\alpha=0.6\) and \(\sigma_{\xi}=0.05\).
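To make these update equations concrete, here is a minimal numpy sketch of a single training step (Eqs. 6-9). The output-noise standard deviation `sigma_eta`, the initial parameter values and the example inputs are assumptions for illustration, and the gate-stabilisation rules described further below are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_step(p, x_m, x_c, y, alpha=0.6, lam=0.07, sigma_eta=1.0, sigma_xi=0.05):
    """One noisy SGD update of the gated two-input network (Eqs. 6-9).
    p is a dict holding the parameters w_m, g_m, w_c, g_c."""
    eta = rng.normal(0.0, sigma_eta)                 # output noise (assumed sd)
    err = p["g_m"] * p["w_m"] * x_m + p["g_c"] * p["w_c"] * x_c + eta - y
    xi = rng.normal(0.0, sigma_xi, size=4)           # gradient noise
    dw_m = -alpha * x_m * p["g_m"] * err + xi[0]
    dg_m = -alpha * x_m * p["w_m"] * err - alpha * lam * np.sign(p["g_m"]) + xi[1]
    dw_c = -alpha * x_c * p["g_c"] * err + xi[2]
    dg_c = -alpha * x_c * p["w_c"] * err - alpha * lam * np.sign(p["g_c"]) + xi[3]
    p["w_m"] += dw_m; p["g_m"] += dg_m; p["w_c"] += dw_c; p["g_c"] += dg_c
    return p

# illustrative call with a target response y in {-1, +1}
params = {"w_m": 0.1, "g_m": 0.5, "w_c": 0.1, "g_c": 0.0}
params = sgd_step(params, x_m=0.5, x_c=0.22, y=1.0)
```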
This implies that the evolution of the colour weights and gates will exhibit non-linear quadratic and cubic dynamics, driven by the interaction of \(w_{c}\) and \(g_{c}\). Multiplying the weights \(w\) with the regularised gate weights \(g\) leads to smaller weights and therefore initially slower increases of the colour weights \(w_{c}\) and respective gate weights \(g_{c}\) after colour has become predictive of correct choices.
To understand this effect of non-linearity analytically, we used a simplified setup of the same model without gate weights:
\[\mathcal{L}=[w_{m}x_{m}+w_{c}x_{c}+\eta-y]^{2} \tag{10}\]
Using this model, we observe exponential increases of the colour weights \(w_{c}\) after the onset of the _motion and colour phase_. This confirms that the interaction of \(w_{c}\) and \(g_{c}\), as well as the regularisation applied to \(g_{c}\) are necessary for the insight-like non-linear dynamics including a distribution of insight onsets as well as variety in slope steepness of insight-like switches.
Note that because the regularisation term is non-differentiable at \(0\), we cannot take the limit \(\alpha\to 0\), but averaged over the data instead. To avoid oscillations of the coefficients around \(0\) due to the non-differentiability, we added the following rules after each update of the gates: (1) if the gate \(g^{t}\) was zero before the update, a regularisation term \(-\min(\alpha\lambda,|g^{t+1}|)\,\mathrm{sign}(g^{t+1})\) was added, and (2) if the gate changed sign during the update, its value was set to \(0\).
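A minimal sketch of these two rules as a post-update correction applied to each gate (using the \(\alpha\) and \(\lambda\) values stated above):

```python
import numpy as np

def apply_gate_rules(g_old, g_new, alpha=0.6, lam=0.07):
    """Post-update stabilisation of a gate near 0. Rule 1: if the gate was
    exactly 0 before the update, shrink the new value towards 0 by at most
    alpha*lam without letting it cross 0. Rule 2: if the update flipped the
    gate's sign, clamp it to 0."""
    if g_old == 0.0:
        return g_new - min(alpha * lam, abs(g_new)) * np.sign(g_new)
    if np.sign(g_new) != np.sign(g_old):
        return 0.0
    return g_new
```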
The accuracy is given by:
\[\mathbb{P}[\hat{y}=y\,|\,w_{m},g_{m},w_{c},g_{c}]=\frac{1}{2}\left[1+\mathrm{erf}\left(\frac{g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}}{\sqrt{2\left((g_{m}w_{m}\sigma_{m})^{2}+(g_{c}w_{c}\sigma_{c})^{2}+\sigma^{2}\right)}}\right)\right] \tag{11}\]
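As a sanity check, Eq. 11 can be evaluated directly. In this sketch, \(\sigma_m=0.1\) and \(\sigma_c=0.01\) are the input standard deviations given in the Methods, while the output-noise standard deviation `sigma` and the example parameter values are assumptions.

```python
import numpy as np
from scipy.special import erf

def p_correct(g_m, w_m, x_m, g_c, w_c, x_c,
              sigma_m=0.1, sigma_c=0.01, sigma=1.0):
    """Probability of a correct response (Eq. 11); sigma (output noise sd)
    is an assumed value."""
    signal = g_m * w_m * x_m + g_c * w_c * x_c
    sd = np.sqrt(2 * ((g_m * w_m * sigma_m) ** 2
                      + (g_c * w_c * sigma_c) ** 2 + sigma ** 2))
    return 0.5 * (1.0 + erf(signal / sd))

# illustrative evaluation with the colour channel still gated off
print(p_correct(g_m=0.5, w_m=3.5, x_m=0.3, g_c=0.0, w_c=1.0, x_c=0.22))
```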
We trained the network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels), such that baseline network performance and learning speed were carefully equated between humans and networks.
Specifically, we simulated the same number of networks as the humans included in the final analysis sample (\(N=99\)). We matched the motion-noise-based performance variance of a given simulation to that of a respective human subject using a non-linear COBYLA optimiser. While the mean of the colour input distribution (0.22) as well as the standard deviations of both input distributions were fixed (0.01 for colour and 0.1 for motion), the respective motion input distribution means were individually fitted for each simulation as described above.
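A minimal sketch of this fitting step: find the motion input mean such that the predicted accuracy (Eq. 11, with the colour channel gated off) matches a given subject's accuracy, using scipy's COBYLA optimiser. The gate and weight values and the output-noise standard deviation are illustrative assumptions, and matching mean accuracy stands in for the full variance-matching procedure.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize

def fit_motion_mean(target_acc, g_m=0.5, w_m=3.5, sigma_m=0.1, sigma=1.0):
    """Find the motion input mean x_m such that the predicted accuracy
    (Eq. 11 with the colour channel gated off) matches target_acc."""
    def loss(x):
        acc = 0.5 * (1 + erf(g_m * w_m * x[0] / np.sqrt(
            2 * ((g_m * w_m * sigma_m) ** 2 + sigma ** 2))))
        return (acc - target_acc) ** 2
    return minimize(loss, x0=[0.1], method="COBYLA").x[0]

# e.g. match a subject performing at 75% on a given coherence level
print(fit_motion_mean(0.75))
```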
The input sequences the networks received were sampled from the same ten input sequences that humans were exposed to in task phase two. This means that for the task part where colour was predictive of the correct binary choice, _motion and colour phase_ (500 trials in total), networks and humans received the same input sequences.
The networks were given a slightly longer _training phase_ of six blocks (600 trials) in comparison to the two blocks _training phase_ that human subjects were exposed to (Fig.1D). Furthermore, human participants first completed a block with 100% motion coherence before doing one block with low motion noise. The networks received six _training phase_ blocks containing the three highest motion coherence levels. Both human subjects and networks completed two blocks including all noise levels in the _motion phase_ before colour became predictive in the _motion and colour phase_.
#### L2-regularised Neural Networks
To probe the effect of the aggressiveness of the regulariser on insight-like switch behaviour in networks, we compared our L1-regularised networks with models of the same architecture, but added L2-regularisation on the gate weights \(g\). This yielded the following loss function:
\[\mathcal{L}=\frac{1}{2}(g_{m}w_{m}x_{m}+g_{c}w_{c}x_{c}+\eta-y)^{2}+\frac{\lambda}{2}\left(g_{m}^{2}+g_{c}^{2}\right) \tag{12}\]
From the loss function we can again derive the following update equations for noisy stochastic gradient descent (SGD):
\[\Delta w_{m}=-\alpha x_{m}g_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)+\xi_{w_{m}} \tag{13}\]
\[\Delta g_{m}=-\alpha x_{m}w_{m}(x_{m}g_{m}w_{m}+x_{c}g_{c}w_{c}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{m})\,|g_{m}|+\xi_{g_{m}} \tag{14}\]
\[\Delta w_{c}=-\alpha x_{c}g_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)+\xi_{w_{c}} \tag{15}\]
\[\Delta g_{c}=-\alpha x_{c}w_{c}(x_{c}g_{c}w_{c}+x_{m}g_{m}w_{m}+\eta-y)-\alpha\lambda\,\mathrm{sign}(g_{c})\,|g_{c}|+\xi_{g_{c}} \tag{16}\]
with \(\lambda=0.07\), \(\alpha=0.6\) and \(\sigma_{\xi}=0.05\).
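The only difference between the L1 and L2 variants is the gradient of the gate penalty; a one-line sketch of the two cases:

```python
import numpy as np

def gate_penalty_grad(g, lam=0.07, kind="l1"):
    """Gradient of the gate regularisation term: lam*sign(g) for L1
    (Eqs. 7/9) vs. lam*sign(g)*|g| = lam*g for L2 (Eqs. 14/16)."""
    return lam * np.sign(g) if kind == "l1" else lam * g
```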
The training is otherwise the same as for the L1-regularised networks.
### Modelling of insight-like switches
#### Models of colour use
In order to probe whether strategy switches in low coherence trials occurred abruptly, we compared three different models with different assumptions about the form of the data. First, we fitted a linear model with two free parameters:
\[y=mt+y_{0}\]
where \(m\) is the slope, \(y_{0}\) the y-intercept and \(t\) is time (here, task blocks) (Fig. S2). This model should fit well for no-insight participants, whose colour use either increases linearly over the course of the experiment or stays at a constant level.
Contrasting the assumptions of the linear model, we next tested whether colour-based responses increased abruptly by fitting a step model with three free parameters, a switch point \(t_{s}\), the step size \(s\) and a maximum value \(y_{max}\) (Fig. S2), so that
\[y=\begin{cases}y_{max}-s&\text{if }t<t_{s}\\ y_{max}&\text{if }t\geq t_{s}\end{cases}\]
We also included a sigmoid function with three free parameters as a smoother approximation of the step model:
\[y=\frac{y_{max}-y_{min}}{1+e^{-m(t-t_{s})}}+y_{min}\]
where \(y_{max}\) is the fitted maximum value of the function, \(m\) is the slope and \(t_{s}\) is the inflection point (Fig. S2). \(y_{min}\) was given by each individual's averaged accuracy on 5% motion coherence trials in blocks 3 and 4.
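As an illustration, the following sketch fits the linear and sigmoid models to a toy accuracy time course and compares them via BIC (the step model, fit analogously by grid search over \(t_{s}\), is omitted for brevity); the synthetic data and the Gaussian-likelihood BIC are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, y_max, m, t_s, y_min=0.55):
    """Logistic model; y_min is fixed per subject in the paper."""
    return (y_max - y_min) / (1.0 + np.exp(-m * (t - t_s))) + y_min

def bic(y, y_hat, k):
    """BIC under a Gaussian likelihood with the ML variance estimate."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# toy accuracy time course on the lowest coherence level (16 half-blocks)
rng = np.random.default_rng(2)
t = np.arange(16.0)
y = np.where(t < 10, 0.55, 0.95) + rng.normal(0, 0.02, 16)

m_lin, y0 = np.polyfit(t, y, 1)                  # linear model, 2 params
bic_lin = bic(y, m_lin * t + y0, k=2)

popt, _ = curve_fit(sigmoid, t, y, p0=[0.95, 1.0, 9.0], maxfev=10000)
bic_sig = bic(y, sigmoid(t, *popt), k=3)         # sigmoid model, 3 params
print(bic_lin, bic_sig)  # lower BIC wins; the sigmoid wins on step-like data
```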
Comparing the model fits across all subjects using the Bayesian Information Criterion (BIC) and protected exceedance probabilities yielded a preference for the sigmoid function over both the step and linear models, for both humans (Fig.2E) and L1-regularised neural networks (Fig.3D). First, this supports our hypothesis that insight-like strategy switches do not occur in an incremental linear fashion, but abruptly, with variance in the steepness of the switch. Second, this implies that at least a subset of subjects shows evidence for an insight-like strategy switch.
#### Human participants
To investigate these insight-like strategy adaptations, we modelled human participants' data using the individually fitted sigmoid functions (Fig. S3). The criterion we defined to assess whether a subject had switched to the colour strategy was the slope at the inflection point, expressing how steep the performance jump was after having an insight about colour. We obtained this value by taking the sigmoid function's partial derivative with respect to time
\[\frac{\partial y}{\partial t}=(y_{max}-y_{min})\frac{me^{-m(t-t_{s})}}{(1+e^{-m (t-t_{s})})^{2}}\]
and then evaluating the above equation for the fitted switch point, \(t=t_{s}\), which yields:
\[y^{\prime}(t_{s})=\frac{1}{4}m(y_{max}-y_{min})\]
Switch misclassifications can be caused by irregularities and small jumps in the data, irrespective of a colour strategy switch. We therefore corrected for a general fit of the data to the model by subtracting the individually assessed general model fit from the slope steepness at the inflection point. Insight subjects were then classified as those participants whose corrected slope steepness at the inflection point lay outside the full range (100th percentile) of a control group's (no change in predictiveness of colour) distribution of that same parameter. By definition, insights about a colour rule cannot occur in this control condition, hence our derived out-of-sample distribution evidences abrupt strategy improvements hinting at insight (Fig.3F).
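A minimal sketch of this classification criterion; the exact form of the model-fit correction term is not specified in the text and is an assumption here, as are the toy numbers.

```python
import numpy as np

def corrected_slope(m, y_max, y_min, fit_quality):
    """Slope of the sigmoid at its inflection point (equation above),
    corrected by subtracting a general goodness-of-fit term; the form of
    the correction is an assumption."""
    return 0.25 * m * (y_max - y_min) - fit_quality

def is_insight(score, control_scores):
    """Classify as insight if the corrected slope exceeds the entire
    control distribution (i.e., its maximum)."""
    return score > np.max(control_scores)

# example: one subject's corrected slope vs. a toy control distribution
control = np.random.default_rng(3).normal(0.05, 0.02, 99)
print(is_insight(corrected_slope(4.0, 0.95, 0.55, 0.1), control))
```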
Before the last task block, we asked participants whether they used the colour feature to make their choices. 57.6% of participants indicated that they used colour to respond correctly. The 48.5% insight participants we identified using our classification method overlapped by 79.2% with participants' self-reports.
#### Neural Networks
We used the same classification procedure for neural networks. All individual sigmoid function fits for L1-regularised networks can be found in the Supplementary Material (Fig. S4).
## Acknowledgements
ATL is supported by the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research (IMPRS COMPPPSYCH, www.mps.ucl-centre.mpg.de). PMK was funded by the Wellcome Trust (award: 210849/Z18/Z). AMS was supported by a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z19/Z), and the Sainsbury Wellcome Centre Core Grant from Wellcome (219627/Z/19/Z) and the Gatsby Charitable Foundation (GAT3755). AMS is a CIFAR Azrieli Global Scholar in the Learning in Machines & Brains program. CS was funded by the European Research Council (ERC Consolidator awards 725937) and Special Grant Agreement No. 945539 (Human Brain Project SGA). NWS was funded by the Federal Government of Germany and the State of Hamburg as part of the Excellence Initiative, a Starting Grant from the European Union
(ERC-StG-REPLAY-852669), and an Independent Max Planck Research Group grant awarded by the Max Planck Society (M.TN.A.BILD0004). The funding parties had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We thank Robert Gaschler for helpful comments on this manuscript.
## References
* Kohler (1925) Wolfgang Kohler. _The Mentality of Apes_. Kegan Paul, Trench, Trubner & Co. ; Harcourt, Brace & Co., 1925.
* Durstewitz et al. (2010) Daniel Durstewitz, Nicole M. Vittoz, Stan B. Floresco, and Jeremy K. Seamans. Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. _Neuron_, 66(3):438-448, 2010. ISSN 08966273. doi: 10.1016/j.neuron.2010.03.029. URL [http://dx.doi.org/10.1016/j.neuron.2010.03.029](http://dx.doi.org/10.1016/j.neuron.2010.03.029).
* Stuyck et al. (2021) Hans Stuyck, Bart Aben, Axel Cleeremans, and Eva Van den Bussche. The Aha! moment: Is insight a different form of problem solving? _Consciousness and Cognition_, 90(April 2020):103055, 2021. ISSN 10902376. doi: 10.1016/j.concog.2020.103055. URL [https://doi.org/10.1016/j.concog.2020.103055](https://doi.org/10.1016/j.concog.2020.103055).
* Weisberg (2015) Robert W. Weisberg. Toward an integrated theory of insight in problem solving. _Thinking and Reasoning_, 21(1):5-39, 2015. ISSN 14640708. doi: 10.1080/13546783.2014.886625. URL [http://dx.doi.org/10.1080/13546783.2014.886625](http://dx.doi.org/10.1080/13546783.2014.886625).
* Kounios and Beeman (2014) John Kounios and Mark Beeman. The cognitive neuroscience of insight. _Annual Review of Psychology_, 65:71-93, 2014. ISSN 15452085. doi: 10.1146/annurev-psych-010213-115154.
* Jung-Beeman et al. (2004) Mark Jung-Beeman, Edward M. Bowden, Jason Haberman, Jennifer L. Frymiare, Stella Arambel-Liu, Richard Greenblatt, Paul J. Reber, and John Kounios. Neural activity when people solve verbal problems with insight. _PLoS Biology_, 2(4):500-510, 2004. ISSN 15449173. doi: 10.1371/journal.pbio.0020097.
* Danek et al. (2014) Amory H. Danek, Thomas Fraps, Albrecht von Muller, Benedikt Grothe, and Michael Ollinger. It's a kind of magic-what self-reports can reveal about the phenomenology of insight problem solving. _Frontiers in Psychology_, 5(DEC):1-11, 2014. ISSN 16641078. doi: 10.3389/fpsyg.2014.01408.
* Kounios and Beeman (2015) John Kounios and Mark Beeman. _The eureka factor: Aha moments, creative insight, and the brain_. Random House, New York, 2015. ISBN 9781400068548.
* Shen et al. (2018) Wangbing Shen, Yu Tong, Feng Li, Yuan Yuan, Bernhard Hommel, Chang Liu, and Jing Luo. Tracking the neurodynamics of insight: A meta-analysis of neuroimaging studies. _Biological Psychology_, 138(January):189-198, 2018. ISSN 18736246. doi: 10.1016/j.biopsycho.2018.08.018. URL [https://doi.org/10.1016/j.biopsycho.2018.08.018](https://doi.org/10.1016/j.biopsycho.2018.08.018).
* Tik et al. (2018) Martin Tik, Ronald Sladky, Caroline Di Bernardi Luft, David Willinger, Andre Hoffmann, Michael J. Banissy, Joydeep Bhattacharya, and Christian Windischberger. Ultra-high-field fMRI insights on insight: Neural correlates of the Aha!-moment. _Human Brain Mapping_, 39(8):3241-3252, 2018. ISSN 10970193. doi: 10.1002/hbm.24073.
* Schuck et al. (2015) Nicolas W. Schuck, Robert Gaschler, Dorit Wenke, Jakob Heinzle, Peter A. Frensch, John-Dylan Haynes, and Carlo Reverberi. Medial prefrontal cortex predicts internally driven strategy shifts. _Neuron_, 86(1):331-340, 2015. ISSN 10974199. doi: 10.1016/j.neuron.2015.03.015. URL [http://dx.doi.org/10.1016/j.neuron.2015.03.015](http://dx.doi.org/10.1016/j.neuron.2015.03.015).
* Schuck et al. (2022) Nicolas W. Schuck, Amy X. Li, Dorit Wenke, Destina S. Ay-Bryson, Anika T. Loewe, Robert Gaschler, and Yee Lee Shing. Spontaneous discovery of novel task solutions in children. _PloS ONE_, 17(5):e0266253, 2022. doi: 10.1371/journal.pone.0266253. URL [http://dx.doi.org/10.1371/journal.pone.0266253](http://dx.doi.org/10.1371/journal.pone.0266253).
* Gaschler et al. (2019) Robert Gaschler, Nicolas W. Schuck, Carlo Reverberi, Peter A. Frensch, and Dorit Wenke. Incidental covariation learning leading to strategy change. _PLoS ONE_, 14(1):1-32, 2019. ISSN 19326203. doi: 10.1371/journal.pone.0210597.
* Gaschler et al. (2013) Robert Gaschler, Bianca Vaterrodt, Peter A. Frensch, Alexandra Eichler, and Hilde Haider. Spontaneous Usage of Different Shortcuts Based on the Commutativity Principle. _PLoS ONE_, 8(9):1-13, 2013. ISSN 19326203. doi: 10.1371/journal.pone.0074972.
* Gaschler et al. (2015) Robert Gaschler, Julian N. Marewski, and Peter A. Frensch. Once and for all--How people change strategy to ignore irrelevant information in visual tasks. _Quarterly Journal of Experimental Psychology_, 68(3):543-567, 2015. ISSN 17470226. doi: 10.1080/17470218.2014.961933. URL [http://dx.doi.org/10.1080/17470218.2014.961933](http://dx.doi.org/10.1080/17470218.2014.961933).
* Bowden et al. (2005) Edward M. Bowden, Mark Jung-Beeman, Jessica Fleck, and John Kounios. New approaches to demystifying insight. _Trends in Cognitive Sciences_, 9(7):322-328, 2005. ISSN 13646613. doi: 10.1016/j.tics.2005.05.012.
* Metcalfe and Wiebe (1987) Janet Metcalfe and David Wiebe. Intuition in insight and noninsight problem solving. _Memory & Cognition_, 15(3):238-246, 1987. ISSN 0090502X. doi: 10.3758/BF03197722.
* Karlsson et al. (2012) Mattias P. Karlsson, Dougal G.R. Tervo, and Alla Y. Karpova. Network resets in medial prefrontal cortex mark the onset of behavioral uncertainty. _Science_, 338(6103):135-139, 2012. ISSN 10959203. doi: 10.1126/science.1226518.
* Miller and Katz (2010) Paul Miller and Donald B. Katz. Stochastic transitions between neural states in taste processing and decision-making. _Journal of Neuroscience_, 30(7):2559-2570, 2010. ISSN 02706474. doi: 10.1523/JNEUROSCI.3047-09.2010.
* Allegra et al. (2020) Michele Allegra, Shima Seyed-Allaei, Nicolas W. Schuck, Daniele Amati, Alessandro Laio, and Carlo Reverberi. Brain network dynamics during spontaneous strategy shifts and incremental task optimization. _NeuroImage_, 217(January):116854, 2020. ISSN 10959572. doi: 10.1016/j.neuroimage.2020.116854. URL [https://doi.org/10.1016/j.neuroimage.2020.116854](https://doi.org/10.1016/j.neuroimage.2020.116854).
* Friston et al. (2017) Karl J Friston, Marco Lin, Christopher D Frith, Giovanni Pezzulo, J. Allan Hobson, and Sasha Ondobaka. Active Inference, Curiosity and Insight. _Neural Computation_, 29:2633-2683, 2017. doi: 10.1162/neco.
* Ohlsson (1992) Stellan Ohlsson. Information-processing explanations of insight and related phenomena. In _Advances in the Psychology of Thinking_. Harvester Wheatseaf, 1992.
* Power et al. (2022) Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets. _arXiv_, pages 1-10, 2022. URL [http://arxiv.org/abs/2201.02177](http://arxiv.org/abs/2201.02177).
* Saxe et al. (2014) Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In _Proceedings of the International Conference on Learning Representations 2014_, pages 1-22, 2014.
* Saxe et al. (2019a) Andrew M. Saxe, James L. McClelland, and Surya Ganguli. A mathematical theory of semantic development in deep neural networks. _Proceedings of the National Academy of Sciences of the United States of America_, 116(23):11537-11546, 2019a. ISSN 10916490. doi: 10.1073/pnas.1820226116.
* Schapiro and McClelland (2009) Anna C. Schapiro and James L. McClelland. A connectionist model of a continuous developmental transition in the balance scale task. _Cognition_, 110(3):395-411, 2009. ISSN 00100277. doi: 10.1016/j.cognition.2008.11.017.
* McClelland and Rogers (2003) James L. McClelland and Timothy T. Rogers. The parallel distributed processing approach to semantic cognition. _Nature Reviews Neuroscience_, 4(4):310-322, 2003. ISSN 14710048. doi: 10.1038/nrn1076.
* Flesch et al. (2022) Timo Flesch, Keno Juechems, Tsvetomira Dumbalska, Andrew Saxe, and Christopher Summerfield. Orthogonal representations for robust context-dependent task performance in brains and neural networks. _Neuron_, 110(7):1258-1270, 2022. ISSN 10974199. doi: 10.1016/j.neuron.2022.01.005. URL [https://doi.org/10.1016/j.neuron.2022.01.005](https://doi.org/10.1016/j.neuron.2022.01.005).
* Saxe et al. (2019b) Andrew M. Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey, and David D. Cox. On the information bottleneck theory of deep learning. _Journal of Statistical Mechanics: Theory and Experiment_, 2019(12), 2019b. ISSN 17425468. doi: 10.1088/1742-5468/ab3985.
* Bengio et al. (2009) Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum Learning. In _Proceedings of the International Conference on Machine Learning_, pages 41-48, 2009. URL [http://arxiv.org/abs/1611.06204](http://arxiv.org/abs/1611.06204).
* Flesch et al. (2018) Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, and Christopher Summerfield. Comparing continual task learning in minds and machines. _Proceedings of the National Academy of Sciences of the United States of America_, 115(44):E10313-E10322, 2018. ISSN 10916490. doi: 10.1073/pnas.1800755115.
* Mehrer et al. (2020) Johannes Mehrer, Courtney J Spoerer, Nikolaus Kriegeskorte, and Tim C Kietzmann. Individual differences among deep neural network models. _Nature Communications_, 11(5725):1-12, 2020. ISSN 2041-1723. doi: 10.1038/s41467-020-19632-w. URL [http://dx.doi.org/10.1038/s41467-020-19632-w](http://dx.doi.org/10.1038/s41467-020-19632-w).
* Liu et al. (2020) Shengchao Liu, Dimitris Papailiopoulos, and Dimitris Achlioptas. Bad global minima exist and SGD can reach them. _Advances in Neural Information Processing Systems_, 2020-Decem(NeurIPS), 2020. ISSN 10495258.
* Bishop (2006) Chrisopher M. Bishop. _Pattern Recognition and Machine Learning_. Springer US, 2006. ISBN 9780387310732. doi: 10.1007/978-3-030-57077-4_11.
* Krishnamurthy et al. (2022) Kamesh Krishnamurthy, Tankut Can, and David J. Schwab. Theory of Gating in Recurrent Neural Networks. _Physical Review X_, 12(1):11011, 2022. ISSN 21603308. doi: 10.1103/PhysRevX.12.011011. URL [https://doi.org/10.1103/PhysRevX.12.011011](https://doi.org/10.1103/PhysRevX.12.011011).
* Jozefowicz et al. (2015) Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of Recurrent Network architectures. _32nd International Conference on Machine Learning, ICML 2015_, 3:2332-2340, 2015.
* Groschner et al. (2022) Lukas N. Groschner, Jonatan G. Malis, Birte Zuidinga, and Alexander Borst. A biophysical account of multiplication by a single neuron. _Nature_, 603(7899):119-123, 2022. ISSN 14764687. doi: 10.1038/s41586-022-04428-3.
* Poggio et al. (1985) Tomaso Poggio, Vincent Torre, and Christof Koch. Computational vision and regularization theory. _Nature_, 317(6035):314-319, 1985. ISSN 00280836. doi: 10.1038/317314a0.
* Costa et al. (2017) Rui Ponte Costa, Yannis M. Assael, Brendan Shillingford, Nando De Freitas, and Tim P. Vogels. Cortical microcircuits as gated-recurrent neural networks. _Advances in Neural Information Processing Systems_, 2017-Decem(Nips 2017):272-283, 2017. ISSN 10495258.
* Rajananda et al. (2018) Sivananda Rajananda, Hakwan Lau, and Brian Odegaard. A random-dot kinematogram for web-based vision research. _Journal of Open Research Software_, 6(1), 2018. ISSN 20499647. doi: 10.5334/jors.194.
* Frensch et al. (2003) P. A. Frensch, H. Haider, D. Runger, U. Neugebauer, S. Voigt, and J. Werg. The route from implicit learning to verbal expression of what has been learned: Verbal report of incidentally experienced environmental regularity. In L. Jimenez, editor, _Attention and implicit learning_, pages 335-366. John Benjamins Publishing Company, 2003.
* Esser et al. (2022) Sarah Esser, Clarissa Lustig, and Hilde Haider. What triggers explicit awareness in implicit sequence learning? Implications from theories of consciousness. _Psychological Research_, 86(5):1442-1457, 2022. ISSN 14302772. doi: 10.1007/s00426-021-01594-3.
* Faisal et al. (2008) A. Aldo Faisal, Luc P.J. Selen, and Daniel M. Wolpert. Noise in the nervous system. _Nature Reviews Neuroscience_, 9(4):292-303, 2008. ISSN 1471003X. doi: 10.1038/nrn2258.
* Waschke et al. (2021) Leonhard Waschke, Niels A. Kloosterman, Jonas Obleser, and Douglas D. Garrett. Behavior needs neural variability. _Neuron_, 109(5):751-766, 2021. ISSN 10974199. doi: 10.1016/j.neuron.2021.01.023. URL [https://doi.org/10.1016/j.neuron.2021.01.023](https://doi.org/10.1016/j.neuron.2021.01.023).
* Rolls et al. (2008) Edmund T. Rolls, James M. Tromans, and Simon M. Stringer. Spatial scene representations formed by self-organizing learning in a hippocampal extension of the ventral visual system. _European Journal of Neuroscience_, 28(10):2116-2127, 2008. ISSN 0953816X. doi: 10.1111/j.1460-9568.2008.06486.x.
* Rolls and Deco (2012) Edmund T. Rolls and Gustavo Deco. _The Noisy Brain: Stochastic dynamics as a principle of brain function_. Oxford University Press, 2012. ISBN 9780191702471. doi: 10.1093/acprof:oso/9780199587865.001.0001.
* Garrett et al. (2013) Douglas D. Garrett, Gregory R. Samanez-Larkin, Stuart W. S. MacDonald, Ulman Lindenberger, Anthony R. McIntosh, and Cheryl L. Grady. Moment-to-moment brain signal variability: A next frontier in human brain mapping? _Neuroscience and Biobehavioral Reviews_, 37(4):610-624, 2013. ISSN 0149-7634. doi: 10.1016/j.neubiorev.2013.02.015. URL [http://dx.doi.org/10.1016/j.neubiorev.2013.02.015](http://dx.doi.org/10.1016/j.neubiorev.2013.02.015).
* Lee et al. (2019) Hey-Kyoung Lee and Alfredo Kirkwood. Mechanisms of homeostatic synaptic plasticity in vivo. _Frontiers in Cellular Neuroscience_, 13:520, 2019. doi: 10.3389/fncel.2019.00520.
* Tononi and Cirelli (2014) Giulio Tononi and Chiara Cirelli. Sleep and the Price of Plasticity: From Synaptic and Cellular Homeostasis to Memory Consolidation and Integration. _Neuron_, 81(1):12-34, 2014. ISSN 08966273. doi: 10.1016/j.neuron.2013.12.025. URL [http://dx.doi.org/10.1016/j.neuron.2013.12.025](http://dx.doi.org/10.1016/j.neuron.2013.12.025).
* De Vivo et al. (2017) Luisa De Vivo, Michele Bellesi, William Marshall, Eric A. Bushong, Mark H. Ellisman, Giulio Tononi, and Chiara Cirelli. Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. _Science_, 355(6324):507-510, 2017. ISSN 10959203. doi: 10.1126/science.aah5982.
* Hoel (2021) Erik Hoel. The overfitted brain : Dreams evolved to assist generalization. _Patterns_, 2(5):100244, 2021. ISSN 2666-3899. doi: 10.1016/j.patter.2021.100244. URL [https://doi.org/10.1016/j.patter.2021.100244](https://doi.org/10.1016/j.patter.2021.100244).
* Wagner et al. (2004) Ullrich Wagner, Steffen Gais, Hilde Haider, Rolf Verleger, and Jan Born. Sleep inspires insight. _Nature_, 427(6972):352-355, 2004. ISSN 00280836. doi: 10.1038/nature02223.
* Lacaux et al. (2021) Celia Lacaux, Thomas Andrillon, Celeste Bastoul, Yannis Idir, Alexandrine Fonteix-Galet, Isabelle Arnulf, and Delphine Oudiette. Sleep onset is a creative sweet spot. _Science Advances_, 7(50):eabj5866, 2021.
* Parpart et al. (2018) Paula Parpart, Matt Jones, and Bradley C Love. Heuristics as Bayesian inference under extreme priors. _Cognitive Psychology_, 102(March):127-144, 2018. ISSN 0010-0285. doi: 10.1016/j.cogpsych.2017.11.006. URL [https://doi.org/10.1016/j.cogpsych.2017.11.006](https://doi.org/10.1016/j.cogpsych.2017.11.006).
* Ritz et al. (2022) Harrison Ritz, Xianin Leng, and Amitai Shenhav. Cognitive control as a multivariate optimization problem. _Journal of Cognitive Neuroscience_, 34(4):569-591, 2022.
## Hidden layer model
In order to verify that our results were not merely an artefact of the oversimplified models we used, we tested the task on a more complex neural network model that had one additional hidden layer of fully connected linear units.
The linear neural network received two inputs, \(x_{m}\) and \(x_{c}\), corresponding to the stimulus motion and colour, respectively, and had two output nodes, \(\hat{y}\), as well as one hidden layer of 48 units. Importantly, each weight connecting the inputs with a hidden unit had one associated multiplicative gate \(g\). To introduce competitive dynamics between the input channels, we again applied L1-regularisation on the gate weights \(g\).
The network was trained on the Cross Entropy loss using stochastic gradient descent with \(\lambda\) = 0.002 and \(\alpha\) = 0.1.
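The text does not state the framework used; the following PyTorch sketch shows one way to set up this hidden-layer variant, with a multiplicative gate per input-to-hidden weight and the L1 penalty applied to the gates only. The gate initialisation and the toy data are assumptions.

```python
import torch
import torch.nn as nn

class GatedHiddenNet(nn.Module):
    """Two inputs -> 48 gated hidden units -> two outputs; each input-to-hidden
    weight has a multiplicative gate, and only the gates are L1-penalised."""
    def __init__(self, n_hidden=48):
        super().__init__()
        self.w_in = nn.Parameter(torch.randn(2, n_hidden) * 0.1)  # input weights
        self.g_in = nn.Parameter(torch.rand(2, n_hidden) * 0.5)   # gates (assumed init)
        self.out = nn.Linear(n_hidden, 2)                         # two output nodes

    def forward(self, x):                # x: (batch, 2) = (motion, colour)
        h = x @ (self.w_in * self.g_in)  # gated linear hidden layer
        return self.out(h)

net = GatedHiddenNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)            # alpha = 0.1
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 2)                                     # toy batch
y = torch.randint(0, 2, (16,))                             # toy binary targets
loss = loss_fn(net(x), y) + 0.002 * net.g_in.abs().sum()   # lambda = 0.002
opt.zero_grad(); loss.backward(); opt.step()
```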
As for the one-layer network, we trained this network on a curriculum precisely matched to the human task, and adjusted hyperparameters (noise levels), such that baseline network performance and learning speed were carefully equated between humans and networks (see Methods).
We employed the same analysis approach to detect insight-like behaviour (see Methods for details) by running simulations of a "control" network of the same architecture, but without correlated features and therefore without colour predictiveness in the _motion and colour phase_. We found that when we applied L1-regularisation with a regularisation parameter of \(\lambda=0.002\) on the gate weights, 18.2% of the networks exhibited _abrupt_ and _delayed_ learning dynamics, resembling insight-like behaviour in humans (Fig.1A) and thereby replicating the key insight characteristics suddenness and selectivity. Insight-like switches to the colour strategy thereby again improved the networks' performance significantly. Using the same parameters, experimental setup and analyses, but applying L2-regularisation on the gate weights \(g\), yielded an insight-like switch rate of 51.5% (Fig.1B).
We again observed a wider distribution of delays (the time points at which the switches in the _motion and colour phase_ occurred in insight networks) for L1-regularised networks with a hidden layer (Fig.1C-D).
Taken together, these results mirror our observations from network simulations with a simplified setup. We can thereby confirm that our results of L1-regularised neural networks' behaviour exhibiting all key characteristics of human insight behaviour (suddenness, selectivity and delay) are not an artefact of the one-layer linearity.
## Weight and Gate Differences between L1- and L2-regularised Networks
At correlation onset (first trial of _motion and colour phase_), neither motion nor colour weights differed (motion: \(M=3.5\pm 0.6\) vs \(M=3.4\pm 0.5\), \(t(192.7)=1.2\), \(p=0.2\), \(d=0.2\), colour: \(M=0.8\pm 0.6\) vs \(M=0.8\pm 0.5\), \(t(189.2)=0.4\), \(p=0.7\), \(d=0.1\)). After learning, however, i.e. at the last trial of the _motion and colour phase_, the average absolute size of the colour weights was higher in L2- compared to L1-networks (\(M=2.6\pm 2.2\) vs \(M=4.7\pm 0.7\), \(t(115.1)=-9\), \(p<.001\), \(d=1.3\)), while the reverse was true for motion weights (\(M=3.4\pm 0.7\) vs \(M=2.8\pm 0.6\), \(t(194.9)=5.6\), \(p<.001\), \(d=0.8\)). For gate weights, differences between L1- and L2-networks are already apparent at correlation onset (first trial of _motion and colour phase_), where the mean of the motion gate was 0.53 for L1-networks and 0.58 for L2-networks, and hence lower in L1 networks,
albeit not significantly (\(t(195.1)=-1\), \(p=0.3\), \(d=0.1\), see Fig. 4E). In addition, the average absolute size of the colour gate weights was higher in L2- compared to L1-networks (\(M=0.04\pm 0.05\) vs \(M=0.002\pm 0.006\), respectively, \(t(100.6)=-7.2\), \(p<0.001\), \(d=1\)). The respective distributions also reflected these effects: L1-networks had a much narrower distribution for colour gates and a slightly narrower distribution for motion gates (L1: colour gates: 0 to 0.04, motion gates: 0 to 1.3; L2: colour gates: 0 to 2, motion gates: 0 to 1.4). After learning, i.e. at the last trial of the _motion and colour phase_, the mean colour gate size was still lower in L1- compared to L2-regularised networks (\(M=0.4\pm 0.4\) vs \(M=0.8\pm 0.2\), \(t(169.1)=-9.3\), \(p<0.001\), \(d=1.3\)), while the reverse was true for motion gates (\(M=0.3\pm 0.3\) vs \(M=0.2\pm 0.2\), \(t(152.4)=3.9\), \(p<0.001\), \(d=0.6\), see Fig. 4F). This was again reflected in the respective distributions, with L1-networks having much wider distributions for motion gates and slightly narrower distributions for colour gates (L1: colour gates: 0 to 1.2, motion gates: 0 to 1.3; L2: colour gates: 0 to 1.3, motion gates: 0 to 0.7).
## Gaussian Noise Differences at Weights and Gates between Insight and No-Insight Networks
Comparing Gaussian noise \(\xi\sim\mathcal{N}(0,\,\sigma_{\xi}^{2})\) at the weights and gates around the individually fitted switch points revealed no differences between insight and no-insight networks for either motion or colour weights (colour weights: \(M=-0.08\pm 1\) vs. \(M=0.04\pm 0.8\); \(t(89.5)=-0.6\), \(p=0.5\), motion weights: \(M=0.5\pm 0.3\) vs. \(M=0.6\pm 0.3\); \(t(93.1)=-1.7\), \(p=0.09\)) or gates (colour gates: \(M=-0.1\pm 0.9\) vs. \(M=0.1\pm 0.9\); \(t(95.3)=0.8\), \(p=0.44\), motion gates: \(M=0.2\pm 0.6\) vs. \(M=-0.3\pm 0.8\); \(t(94.4)=2\), \(p=0.05\)). There were also no \(\sigma_{\xi}\) differences at either the start of learning (first trial of the _motion and colour phase_; colour weights: \(M=-0.06\pm 0.8\) vs. \(M=-0.03\pm 0.5\); \(t(78.1)=-0.2\), \(p=0.8\), motion weights: \(M=0.08\pm 0.7\) vs. \(M=0.07\pm 0.7\); \(t(96.7)=1\), \(p=0.3\), colour gates: \(M=0\pm 0.6\) vs. \(M=-0.2\pm 0.7\); \(t(97)=1.6\), \(p=0.1\), motion gates: \(M=-0.04\pm 0.6\) vs. \(M=-0.07\pm 0.7\); \(t(97)=0.2\), \(p=0.8\)) or the end of learning (last trial of the _motion and colour phase_; colour weights: \(M=0.05\pm 1.3\) vs. \(M=0.08\pm 1.1\); \(t(92.7)=-0.1\), \(p=0.9\), motion weights: \(M=0\pm 1.2\) vs. \(M=-0.02\pm 1.1\); \(t(95.6)=0.04\), \(p=1\), colour gates: \(M=0.2\pm 1.1\) vs. \(M=-0.2\pm 1.2\); \(t(97)=1.7\), \(p=0.09\), motion gates: \(M=-0.1\pm 1.3\) vs. \(M=0.05\pm 1.3\); \(t(96)=-0.7\), \(p=0.5\)).
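For reference, comparisons of this form can be reproduced with a few lines of SciPy. The sketch below assumes Welch's unequal-variance t-test (consistent with the fractional degrees of freedom reported above) and Cohen's d with the mean-of-variances pooled standard deviation, which is one common convention; it is an illustration, not the analysis code used here.

```python
import numpy as np
from scipy import stats

def welch_and_cohens_d(a, b):
    """Welch's two-sample t-test (fractional df) plus Cohen's d
    for two 1-D arrays of per-network values."""
    t, p = stats.ttest_ind(a, b, equal_var=False)
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    return t, p, d
```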
Figure 1: Switch-aligned performance and switch point distributions for L1- and L2-regularised neural networks with a 48 unit hidden layer each. Blocks shown are halved task blocks (50 trials each). Error shadows signify SEM. **(A)** Switch-aligned performance for insight (18/99) and no-insight groups (81/99) respectively for L1-regularised networks with a hidden layer. **(B)** Switch-aligned performance for insight (51/99) and no-insight (48/99) L2-regularised neural networks with a hidden layer. **(C)** Switch point distributions for L1-regularised insight networks with a hidden layer. Dashed vertical line marks onset of colour predictiveness. **(D)** Switch point distributions for L2-regularised insight neural networks. Dashed vertical line marks onset of colour predictiveness.
Fig. 2: Illustrations of models and respective parameters. **(A)** Linear function with free parameters intercept \(y_{0}\) and slope \(m\). **(B)** Step function with free parameters inflection point \(t_{s}\) and function maximum \(y_{max}\). **(C)** Generalised logistic regression function with free parameters slope \(m\), inflection point \(t_{s}\) and function maximum \(y_{max}\). |
2310.13019 | Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class
Manipulation Using DeepFool Algorithm | The susceptibility of deep neural networks (DNNs) to adversarial attacks
undermines their reliability across numerous applications, underscoring the
necessity for an in-depth exploration of these vulnerabilities and the
formulation of robust defense strategies. The DeepFool algorithm by
Moosavi-Dezfooli et al. (2016) represents a pivotal step in identifying minimal
perturbations required to induce misclassification of input images.
Nonetheless, its generic methodology falls short in scenarios necessitating
targeted interventions. Additionally, previous research studies have
predominantly concentrated on the success rate of attacks without adequately
addressing the consequential distortion of images, the maintenance of image
quality, or the confidence threshold required for misclassification. To bridge
these gaps, we introduce the Enhanced Targeted DeepFool (ET DeepFool)
algorithm, an evolution of DeepFool that not only facilitates the specification
of desired misclassification targets but also incorporates a configurable
minimum confidence score. Our empirical investigations demonstrate the
superiority of this refined approach in maintaining the integrity of images and
minimizing perturbations across a variety of DNN architectures. Unlike previous
iterations, such as the Targeted DeepFool by Gajjar et al. (2022), our method
grants unparalleled control over the perturbation process, enabling precise
manipulation of model responses. Preliminary outcomes reveal that certain
models, including AlexNet and the advanced Vision Transformer, display
commendable robustness to such manipulations. This discovery of varying levels
of model robustness, as unveiled through our confidence level adjustments,
could have far-reaching implications for the field of image recognition. Our
code will be made public upon acceptance of the paper. | S. M. Fazle Rabby Labib, Joyanta Jyoti Mondal, Meem Arafat Manab, Sarfaraz Newaz, Xi Xiao | 2023-10-18T18:50:39Z | http://arxiv.org/abs/2310.13019v4 | Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
###### Abstract
Deep neural networks (DNNs) have significantly advanced various domains, but their vulnerability to adversarial attacks poses serious concerns. Understanding these vulnerabilities and developing effective defense mechanisms is crucial. DeepFool, an algorithm proposed by Moosavi-Dezfooli et al. (2016), finds minimal perturbations to misclassify input images. However, DeepFool lacks a targeted approach, making it less effective in specific attack scenarios. Also, previous related works primarily focus on success, not considering how much an image is distorted, whether the integrity of the image quality is preserved, or the confidence level of the misclassification. So, in this paper, we propose Enhanced Targeted DeepFool, an augmented version of DeepFool that allows targeting specific classes for misclassification, and we also introduce a minimum confidence score requirement hyperparameter to enhance flexibility. Our experiments demonstrate the effectiveness and efficiency of the proposed method across different deep neural network architectures, preserving image integrity as much as possible while keeping the perturbation rate as low as possible. By using our approach, the behavior of models can be manipulated arbitrarily using the perturbed images, as we can specify both the target class and the associated confidence score, unlike other DeepFool-derivative works, such as Targeted DeepFool by Gajjar et al. (2022). Results show that one of the deep convolutional neural network architectures, AlexNet, and the state-of-the-art Vision Transformer model exhibit high robustness to getting fooled. This approach can have larger implications, as our tuning of the confidence level can expose the robustness of image recognition models. Our code will be made public upon acceptance of the paper.
## 1 Introduction
Deep neural networks (DNNs) have revolutionized many fields including but not limited to speech recognition [5, 16], computer vision [4, 14], natural language processing [31], and even game playing [26]. However, their high accuracy and robustness can be compromised by adversaries who intentionally manipulate the input data to fool the model. Such attacks can have serious consequences in real-world applications such as autonomous driving, medical diagnosis, and security systems. Therefore, understanding the vulnerabilities of DNNs to adversarial attacks and developing effective defense mechanisms has become an important research area in machine learning and computer security. DeepFool is one of the algorithms, proposed by Moosavi-Dezfooli _et al_. [20], which iteratively finds the minimum amount of perturbations required to push a given input image to a misclassified region of the feature space. They use the following equation that defines an adversarial perturbation as the minimal perturbation \(r\) that is sufficient to change the estimated label \(\hat{k}(x)\):
\[\Delta(x;\hat{k}):=\min_{r}||r||_{2}\text{ subject to }\hat{k}(x+r)\neq\hat{k}(x) \tag{1}\]
where, \(x\) is an image, and \(\hat{k}(x)\) is the estimated label. With this, an image can be misclassified with a minimal amount of perturbations. However, this approach is not focused on any specific target. Instead, the images are classified as a different class with a minimal amount of perturbation. Thus, if an image \(x\) can be misclassified as some class \(A\) with less perturbation than some other class \(B\), DeepFool
will choose to use the perturbation that misclassifies \(x\) as class \(A\).
While small perturbations by untargeted attacks can fool a deep neural network into misclassifying data, targeted attacks may be more harmful as they aim to deceive the DNN into producing a specific output. An attacker may, for example, deceive a self-driving car into misidentifying a stop sign as a green light, or fool security systems that use DNNs for face recognition. Therefore, an accurate method of targeted attack on DNNs is necessary to make the models more robust against these types of attacks. While the DeepFool algorithm is a good approach to finding minimal perturbations that misclassify data towards an unspecified target, a targeted approach is much needed.
Gajjar _et al_. [10] propose an algorithm that can misclassify an image into a specific target class. However, it has a limited success rate and, additionally, offers no option to control the different hyperparameters inside the algorithm.
To fill this gap, in this paper we propose Enhanced Targeted DeepFool, or ET DeepFool in short, a variant of the DeepFool algorithm in which we can not only target the class as which we want an image to be misclassified, but also make DeepFool more parametrized by giving the option to set a minimum confidence score requirement. We show that the algorithm is simpler than the original, in terms of time complexity, and effective in fooling different deep neural network architectures towards specific classes. Afterward, we examine the performance of the proposed method. Our experiments show that the proposed system performs very efficiently on different machines while keeping the integrity of the image almost identical to the original. It can be used to probe the robustness of a model, as this is, to the best of our knowledge, the first perturbation method where a performance metric like the confidence level can be arbitrarily specified. This effectively tells us with what amount of perturbation an existing image recognition model can be fooled into classifying one class of images as another with a strong error rate and confidence score. Previous works would stop at fooling the model, without looking at the confidence, so while those perturbations worked, the models would often misclassify the sample images with low levels of confidence.
## 2 Related Works
Adversarial attacks are done on data to perturb it to some extent so that it gets misclassified by an ML model. These attacks can be implemented in several ways in the form of black-box, white-box, and grey-box attacks. In this section, we cover existing literature related to different adversarial attacks against image classification models as well as works done on adversarial defense.
Figure 1: Comparison between original DeepFool and our proposed Enhanced Targeted DeepFool. Here, the perturbation image is scaled 20 times for visibility.
### White-Box Attacks:
In white-box attacks, the attackers have complete knowledge of the target model's architecture, weights, gradients, parameters, and training data. By having access to the model's internals, an attacker can explore its vulnerabilities more effectively and create adversarial examples that are highly effective at deceiving the model's predictions. There are several common white-box adversarial attack methods used in the field of image classification. One such method is Fast Gradient Sign Method (FGSM) [13], a type of adversarial attack for image classification that involves adding minimal noise to each pixel of an image, based on the gradient of the loss function with respect to the image. Another notable algorithm proposed by Carlini and Wagner [3] finds the smallest noise to be added to an image to misclassify it. This method goes beyond the FGSM approach by seeking the most effective perturbation for achieving misclassification. Jacobian-based Saliency Map Approach as proposed by Papernot [23] works by iteratively modifying the input features of a sample to maximize the difference between the predicted output and the true output. Additionally, the Universal Adversarial Perturbations by Moosavi-Dezfooli [21] fools a deep neural network by adding the same perturbations to multiple images, causing it to misclassify all of the affected images. Furthermore, Duan [9] proposes an attack that drops information from the image instead of perturbing it.
### Black-Box Attacks:
In black box attack scenarios, the internal workings of the models are not available. The attacker usually has the input-output behavior and the probability labels of the target models. Gao [11] presents a black-box attack method called Patch-wise Iterative Fast Gradient Sign Method that outperforms pixel-wise methods in generating transferable adversarial examples against various mainstream models. Another approach by Zhao [35] is a GAN-based method that involves training a generator network to produce perturbations that can be added to the original input to create an adversarial example. This method proposed by Li [19], integrates Poincare distance into iterative FGSM and uses a metric learning approach to regularize iterative attacks. It generates transferable targeted adversarial examples by iteratively perturbing the input image in the direction of the target class while simultaneously minimizing the Poincare distance between the original and perturbed images. The Adversarial Patch attack, as proposed by Brown [2], takes a different approach. Instead of modifying the entire image, it focuses on creating a small patch that can be strategically placed in the real world. Furthermore, Su [27] creates a method that fools a network by only altering a single pixel of an image. Wei [33] propose a very different approach by manipulating image attributes such as brightness, contrast, sharpness instead of generating any adversarial noise.
### Data Poisoning Attacks
The paper by Shafahi [25] applies a one-shot poisoning attack by injecting a single poison instance into a clean dataset, causing the model to misclassify a specific target instance without negatively impacting its performance on other examples. Huang [17] propose MetaPoison, a meta-learning approach for crafting poisons to fool neural networks using clean-label data poisoning. Di [6] presents a camouflaging approach for targeted poisoning attacks based on the gradient-matching approach of Geiping [12]. Munoz-Gonzalez [22] propose pGAN, a scheme that generates poisoning points to maximize the error of a target classifier while minimizing detectability by a discriminator.
### Adversarial Defense
To protect DNNs from adversarial attacks and improve their robustness various methods are applied. These methods include training a network with adversarial examples, detecting adversarial examples instead of classifying them, and using randomness to defend the networks. One notable improvement in performance was introduced by Ding [7] combines the cross-entropy loss with a margin maximization loss term, which is applied to correctly classified examples. In a different vein, Xu [34] propose a method called feature squeezing, which decreases the search space of an adversary by combining samples analogous with multiple different vectors into a single sample. Furthermore, Zheng [36], propose a method that requires modification of the output of the classifier by performing hypothesis testing and using Gaussian mixture models to detect adversarial examples.
## 3 Methodology
In this section, we initially discuss about the background of Vanilla DeepFool and what we introduce: Enhanced Targeted DeepFool.
### Background of Vanilla DeepFool
For a multiclass classifier, the classes are separated by hyperplanes (decision boundaries), and \(\mathbf{x}\) is classified according to the region it lies in. The original DeepFool algorithm finds the closest hyperplane and pushes \(\mathbf{x}\) towards it, misclassifying it with the minimum amount of perturbation. This is done iteratively until the image is misclassified. In the end, the algorithm returns the total perturbation \(\mathbf{\hat{r}}\). The following equations are used to calculate the closest hyperplane \(\hat{l}\), where \(\mathbf{w}_{k}^{\prime}\) is a vector that points in the direction of the decision boundary
between the predicted label and the \(k^{th}\) largest activation. This is done by subtracting the gradient of the predicted label from the gradient of the \(k^{th}\) largest activation. \(f^{\prime}_{k}\) is the difference between the corresponding activations:
\[\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{k}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_ {0})}(\mathbf{x}_{i}) \tag{2}\]
\[f^{\prime}_{k}\gets f_{k}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{3}\]
After calculating \(\mathbf{w}^{\prime}_{k}\) and \(f^{\prime}_{k}\), the following equations calculate the closest hyperplane \(\hat{l}\) and the minimum amount of perturbation for the \(i^{th}\) iteration, \(\mathbf{r}_{i}\):
\[\hat{l}\leftarrow\operatorname*{arg\,min}_{k\neq\hat{k}(\mathbf{x}_{0})}\frac{|f^ {\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}} \tag{4}\]
\[\mathbf{r}_{i}\leftarrow\frac{|f^{\prime}_{\hat{l}}|}{||\mathbf{w}^{\prime}_{\hat{l}}||_{2}^{2}}\mathbf{w}^{\prime}_{\hat{l}} \tag{5}\]
Whenever \(\hat{k}(\mathbf{x}_{i})\) changes into a different label, the loop stops, and the value of the total perturbation \(\mathbf{\hat{r}}\) is returned.
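A minimal PyTorch sketch of one such iteration, under our reading of Eqs. (2)-(5), is shown below. The restriction to the `top_n` most activated classes and all names are our assumptions for illustration, not the original implementation.

```python
import torch

def deepfool_step(x_i, model, k0, top_n=10):
    """One vanilla DeepFool iteration (Eqs. 2-5): among the top_n competing
    classes, find the closest hyperplane and the minimal step toward it.

    x_i: tensor of shape (1, C, H, W); k0: label k_hat(x0) of the clean image.
    """
    x_i = x_i.detach().requires_grad_(True)
    logits = model(x_i)[0]
    best_r, best_dist = None, float("inf")
    for k in logits.argsort(descending=True)[:top_n]:
        if k.item() == k0:
            continue
        f_k = logits[k] - logits[k0]                               # Eq. 3
        w_k = torch.autograd.grad(f_k, x_i, retain_graph=True)[0]  # Eq. 2
        dist = (f_k.abs() / w_k.norm()).item()                     # ratio in Eq. 4
        if dist < best_dist:                                       # argmin of Eq. 4
            best_r = (f_k.abs() / w_k.norm() ** 2).detach() * w_k  # Eq. 5
            best_dist = dist
    return best_r   # r_i, to be added to x_i; iterate until the label changes
```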
### Targeted DeepFool
Gajjar _et al_. [10] propose two algorithms: the first is a basic iterative approach and the other is a recursive approach. The first approach iteratively goes through the image and adds perturbations until it gets misclassified as the target class or reaches a threshold. In the recursive approach, the algorithm is applied repeatedly until the adversarial sample reaches the target hyperplane. If the adversarial sample cannot reach the target hyperplane after a certain number of iterations, the algorithm is applied again with a new target hyperplane. The recursive approach is found to be more effective than the basic algorithm on experimental grounds.
### Our Approach: Enhanced Targeted DeepFool
To turn the original DeepFool algorithm into one that misclassifies an image as a specific target class, we propose the algorithm shown in Algorithm 2 below.
Instead of running the loop until the image gets misclassified, we run it while the current label is not yet equal to the target label. We also remove the for loop that is shown in line 6 of Algorithm 1, because we are not calculating the gradients of the best \(n\) classes that have the most probability to be classified after the original class. This also decreases the time complexity by a factor of \(n\). We change equations 2 and 3 to the ones shown below, where \(\mathbf{w}^{\prime}_{k}\) now calculates the difference between the gradients for the target class and the true class, and \(f^{\prime}_{k}\) calculates the perturbations needed with respect to the target class and true class.
\[\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{t}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{6}\]
\[f^{\prime}_{k}\leftarrow f_{t}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i}) \tag{7}\]
Since we are not comparing between the best \(n\) classes anymore we change the equation 4 to the one below:
\[\hat{l}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}} \tag{8}\]
Only with these small changes, we are able to successfully misclassify an image to a specific class of our choosing.
We have also added another condition to the algorithm called minimum confidence, \(\mathbf{c}_{min}\). This lets users define a minimum confidence requirement as a hyperparameter. This results in a perturbed image which not only gets misclassified as a specific target class but also retains a high confidence score.
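Putting Eqs. (6)-(8) and the confidence condition together, a self-contained PyTorch sketch of the modified loop might look as follows. This is our illustration of the procedure described above; the per-step overshoot handling and all names are assumptions, not the authors' code.

```python
import torch

def et_deepfool(x, model, target, c_min=0.95, overshoot=0.02, max_iter=100):
    """Sketch of the ET DeepFool loop (Eqs. 6-8) for a single image.

    x: tensor of shape (1, C, H, W); model returns a (1, n_classes)
    logit tensor. Returns the accumulated perturbation r_hat.
    """
    x0 = x.clone().detach()
    k0 = model(x0).argmax(dim=1).item()            # original label k_hat(x0)
    x_i, r_hat = x0.clone(), torch.zeros_like(x0)

    for _ in range(max_iter):
        x_i = x_i.detach().requires_grad_(True)
        logits = model(x_i)[0]
        conf = torch.softmax(logits, dim=0)[target].item()
        if logits.argmax().item() == target and conf >= c_min:
            break                                  # target reached with c >= c_min
        f_diff = logits[target] - logits[k0]       # f'_k (Eq. 7)
        w = torch.autograd.grad(f_diff, x_i)[0]    # w'_k (Eq. 6)
        r_i = (f_diff.abs() / w.norm() ** 2).detach() * w   # minimal step toward t
        x_i = x_i.detach() + (1 + overshoot) * r_i
        r_hat = r_hat + (1 + overshoot) * r_i
    return r_hat
```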
## 4 Experimental Evaluation and Results
Here we apply our proposed method to multiple state-of-the-art image classification models and show our findings.
### Experiments
**Dataset.** We use the validation images of the ILSVRC2012 [24] or ImageNet dataset for our experiments. It contains 50 thousand images with a thousand different classes. It is widely regarded as a benchmark dataset in the field of computer vision and has played a crucial role in advancing the development of deep learning models. The dataset is large and diverse which makes it a comprehensive representation of real-world visual data. Due to its size and diversity, the models pre-trained on this dataset can learn rich feature representation that captures a good amount of visual information.
**Models.** We execute different pre-trained deep convolutional neural networks to experiment with the Enhanced Targeted Deepfool. For instance, ResNet50 [15], AlexNet [18], EfficientNet_v2 [30], GoogLeNet [28], and Inception_v3 [29] to work on our proposed architecture. Additionally, we also use one of the state-of-the-art architecture, Vision Transformer (ViT) [8] image classification model to test our method.
**Testbed Setups.** We use two different testbed devices to experiment with our classifiers for this targeted attack. One of them includes Intel Core i7-12700K processor, RTX 3070 Ti, and 32 GB RAM. The other one consists of an Intel Core i5 13400F, RTX 3060 Ti, and 32 GB RAM. We install PyTorch 2.0 and Torchmetrics 0.11.4 libraries in these testbed systems, keeping the version of Python on 3.10.
**Setting up Hyperparameter and Test Approach.** For the tests, we use the validation images and generate a random target class that is not its true class. These images along with the target classes that were generated are fed into our function. We use several hyper-parameters such as overshoot which is set to the default value of 0.02, this is used as a termination criterion to prevent vanishing updates and
oscillations. We set the minimum amount of confidence needed to 95% and the maximum number of iterations to 100. This is done because in most cases the confidence score of the perturbed image is otherwise lower than expected (\(\sim\) 60%); we therefore add another condition to the while loop to make the code run until the desired confidence is reached, although this leads to more perturbations. The code runs until these conditions are met or until the maximum number of iterations is reached. These hyper-parameters can be tuned to one's needs.
**Metrics.** We calculate the confidence score for the target class by passing the output tensor through the softmax function. It reflects the classifier's level of confidence in its predictions for the perturbed images. The magnitude of perturbations added to the images, referred to as "Perturbations", quantifies the level of change required to deceive the classifier. We find the change in an image by calculating the L2 distance between the perturbed and the original image and dividing it by the maximum L2 distance. We also calculate the Structural Similarity Index Measure (SSIM) [32] between the perturbed and original image and the number of iterations needed to perturb an image. The "Iterations" metric indicates the mean number of iterations required to achieve a successful misclassification. The "Success" metric shows the percentage of images being successfully misclassified as the randomly selected target class. Finally, the computational time needed to execute the attack against a single image is denoted as "Time".
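As an illustration of the two image-level metrics, a sketch is given below. The paper does not spell out the normalisation, so we assume images scaled to [0, 1], in which case the maximum L2 distance is that between an all-zeros and an all-ones image; the SSIM call uses the torchmetrics library mentioned in the setup.

```python
import torch
from torchmetrics import StructuralSimilarityIndexMeasure

def perturbation_percent(x, x_adv):
    """Relative change: L2 distance between perturbed and original image,
    divided by the maximum possible L2 distance (for [0, 1] images this is
    ||1 - 0||_2 = sqrt(number of elements))."""
    return (100.0 * (x_adv - x).norm() / torch.ones_like(x).norm()).item()

# SSIM between perturbed and original batches of shape (N, C, H, W) in [0, 1]
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
# score = ssim(x_adv, x)
```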
```
1:Input: Image \(\mathbf{x}\), classifier \(f\),
2:target class \(t\), minimum confidence \(\mathbf{c}_{min}\).
3:Output: Perturbation \(\mathbf{\hat{r}}\).
4:Initialize \(\mathbf{x}_{0}\leftarrow\mathbf{x},i\gets 0\).
5:while\(\hat{k}(\mathbf{x}_{i})\neq t\) or \(c<\mathbf{c}_{min}\)do
6:\(\mathbf{w}^{\prime}_{k}\leftarrow\nabla f_{t}(\mathbf{x}_{i})-\nabla f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i})\)
7:\(f^{\prime}_{k}\leftarrow f_{t}(\mathbf{x}_{i})-f_{\hat{k}(\mathbf{x}_{0})}(\mathbf{x}_{i})\)
8:\(\hat{l}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}}\)
9:\(\mathbf{r}_{i}\leftarrow\frac{|f^{\prime}_{k}|}{||\mathbf{w}^{\prime}_{k}||_{2}^{2}}\mathbf{w}^{\prime}_{k}\)
10:\(\mathbf{x}_{i+1}\leftarrow\mathbf{x}_{i}+\mathbf{r}_{i}\)
11:\(i\gets i+1\)
12:endwhile
13:return\(\mathbf{\hat{r}}=\sum_{i}\mathbf{r}_{i}\)
```
**Algorithm 2** Enhanced Targeted DeepFool: multi-class case
### Results
In this subsection, we describe the findings after successfully experimenting with our proposed model under different scenarios.
After running the dataset through our architecture for the different classifiers, we see some interesting results in Table 1. Our method generates perturbed images with a mean confidence score of 0.97 for almost all of the models. Moreover, some classifiers, for instance ResNet50, EfficientNet_v2, GoogLeNet, and Inception_v3, show a considerable vulnerability to our approach; here, the perturbation rates range from 2.14% to 3.37%. In contrast, ET DeepFool does not perform as well on AlexNet and ViT: these architectures require the largest amounts of perturbation to fool, with rates of 9.08% and 11.27%, respectively. When it comes to image integrity, ResNet50, EfficientNet_v2, GoogLeNet, and Inception_v3 consistently exhibit the highest SSIM scores, whereas AlexNet and ViT have the lowest mean SSIM scores, with 92% and 89%, respectively. The attack against EfficientNet_v2, GoogLeNet, and Inception_v3 performs consistently well, with an iteration count ranging from 33 to 38. However, compared to the other architectures, ET DeepFool struggles on ViT, requiring a significantly higher number of iterations to fool it, with an average of 67 iterations per image. The attack consistently succeeds against 97% of the dataset when applied to ResNet50, EfficientNet_v2, GoogLeNet and Inception_v3. Keeping up with the trend of the other metrics, the attack has the lowest success rates against AlexNet and ViT, namely 94% and 89%, respectively.
Figure 2: Comparison between the original DeepFool and our proposed Enhanced Targeted DeepFool algorithm.
Finally, the attack against EfficientNet_v2 and Inception_v3 shows faster execution times, requiring approximately 0.31 seconds and 1.14 seconds per image respectively. On the other hand, ViT requires the highest computational overhead, with an average execution time of 2.36 seconds per image.
In Table 2, we can see that, after running the method on five sample images, the successful samples require only a small amount of perturbation, while the ones that fail do so even after a large amount of perturbation is added to the image. Overall, we find our method to be effective to varying degrees against most of these classifiers. The results provide insights into the comparative strengths and weaknesses of the given image classifiers under adversarial conditions, which can aid in developing improved defense mechanisms and enhancing the overall robustness of image classifiers.
## 5 Discussion
During the development of the ET DeepFool method, we observe an interesting phenomenon. While the original method exhibits the ability to misclassify an image with a minimal amount of perturbation, we notice that the confidence score associated with the perturbed image is often very low, as shown in Figure 1.
To tackle this issue, we introduce a confidence threshold as a dedicated hyperparameter that allows us to specify a minimum confidence level to be attained. We aim to enhance the overall confidence of the perturbed images while maintaining their effectiveness in inducing misclassification to
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & Confidence\(\uparrow\) & Perturbation & SSIM\(\uparrow\) & Iterations & Success\(\uparrow\) & Time\(\downarrow\) \\ \hline ResNet50* & 0.97 & 2.14\% & 0.99 & 29 & 0.97 & 0.37 s \\ AlexNet* & 0.97 & 9.08\% & 0.92 & 25 & 0.94 & 0.52 s \\ EfficientNet\_v2** & 0.97 & 3.37\% & 0.98 & 33 & 0.97 & 0.31 s \\ GoogLeNet** & 0.97 & 3.45\% & 0.97 & 33 & 0.97 & 1.48 s \\ Inception\_v3* & 0.97 & 2.35\% & 0.99 & 38 & 0.97 & 1.14 s \\ ViT* & 0.96 & 11.27\% & 0.89 & 67 & 0.89 & 2.36 s \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of ET Deepfool on various classifiers. These are the mean values from our experiment results. Here, * means the model is run on an RTX 3070 Ti, and ** means RTX 3060 Ti is used to run the classifier. Also, time means the average time it takes to run on a single image. Moreover, \(\uparrow\) means the higher the score, the better results, and vice versa.
Figure 3: Few sample images from our experiments. The perturbed classes are as follows: Traffic light as Manhole cover, School bus as Ambilance, Acoustic guitar as Assault Rifle. Perturbations are shown in the first row, scaled 20 times for visibility.
specific classes by incorporating this hyperparameter into our Enhanced Targeted DeepFool approach, an option that neither the original DeepFool method nor other existing works provide. One consequence of introducing this hyperparameter is an increase in the amount of perturbation added to the original image; however, we find that the additional perturbations are negligible in magnitude. In Table 3, we can see that our method outperforms other attacks. FGSM shows a 0.97 confidence rate, which is easily obtainable with our method by tuning the minimum confidence threshold. Moreover, only JSMA has a higher success rate, but its experiments are done on MNIST, which has only 10 classes and contains only images of digits. C&W shows a surprising 100% success rate, but its experiments use only 1000 images and 100 classes, whereas we use 50000 images and 1000 classes. Furthermore, all of the existing works focus solely on the success rate, whereas we make sure that, wherever our attack succeeds, it does so with the highest confidence and the lowest perturbation possible.
The Targeted DeepFool algorithm by Gajjar _et al_. [10] uses the MNIST dataset, a \(28\times 28\)-pixel grayscale dataset of digits, and the CIFAR10 dataset, which contains only 10 classes. The images in these datasets are not suitable for comparison against real-life images. They report a 77% success rate on CIFAR10, whereas our approach achieves a 95% success rate on ImageNet with 1000 classes. Additionally, their approach results in a higher average distortion on the MNIST dataset compared to our approach on the ImageNet dataset. They also claim that the adversarial images are visually indistinguishable from the original ones; however, this claim is questionable, as the images are too small to clearly assess the level of distortion. Furthermore, their most efficient approach, the recursive algorithm, is more time-consuming.
Furthermore, in contrast to the majority of untargeted attacks in the literature, our study does not seek to elucidate the degradation in model performance when retrained with perturbed images; consequently, we have chosen not to incorporate this aspect in our analysis.
In our results, we observe that ViT performs better against our attack. Benz _et al_. [1] also support this claim, as their results show that ViT holds up better against vanilla DeepFool. One possible reason behind this is the architecture of the model: since it splits an image into multiple patches in the first place, every patch is processed individually and perturbations are added to every patch, which is clearly visible in Figure 3. This is one of the likeliest core reasons for the comparatively higher mean perturbation, as seen in Table 1.
Our attack has far-ranging implications, as this can tell us what kind of image inputs are most likely to fail for a given model. Our current implementation is using only one target class, but this attack can be extended to include multiple classes with individual confidence levels, which can then expose which classes of images are more likely to be mistaken in an image recognition model.
## 6 Conclusion
In this paper, we propose an algorithm, Enhanced Targeted DeepFool, which improves on the original DeepFool algorithm and makes it able not only to misclassify an image as a specific target class but also to achieve the desired amount of confidence needed to fool a classifier. We show that our algorithm is simple and has lower time complexity. We demonstrate that the algorithm performs well against various state-of-the-art image classifiers. We also provide evidence
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{Perturbation Amount} \\ \cline{2-7} Image No. & ResNet50 & AlexNet & EfficientNet & GoogLeNet & Inception\_v3 & ViT \\ \hline
3.jpg & 3.31\% & 4.90\% & 2.91\% & 5.59\% & 5.59\% & 11.92\% \\
63.jpg & 0.79\% & 20.10\% & 0.79\% & 1.24\% & 1.27\% & 3.34\% \\
328.jpg & 2.59\% & 7.13\% & 5.83\% & 4.99\% & 4.00\% & 56.26\%* \\
1125.jpg & 13.68\% & 98.27\%* & 1.49\% & 2.46\% & 1.47\% & 2.87\% \\
1398.jpg & 1.50\% & 6.94\% & 3.35\% & 3.09\% & 2.74\% & 65.38\%* \\ \hline \hline \end{tabular}
\end{table}
Table 2: Percentage of perturbations needed, on five sample images from the dataset. Here, * beside the perturbation value signifies that the attack does not succeed.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Attack Name & Success & Confidence & Perturbation \\ \hline FGSM [13] & 0.75 & 0.97 & - \\ C\&W [3] & **1.00** & - & - \\ JSMA [23] & 0.97 & - & 4.01 \\ UAP [21] & 0.95 & - & - \\ One Pixel [27] & 0.35 & 0.60 & - \\ PoTrip [19] & 0.88 & - & - \\ Targeted DeepFool [10] & 0.77 & - & 2.28** \\ ET DeepFool (Ours) & 0.95 & 0.95* & **2.14** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Quantitative comparison among existing attacks and our proposed attack. Here, * represents it can be a variable (from 0 to 1), according to our preference. ** represents the average perturbation for MNIST dataset.
on how our approach performs better than an existing one. We are convinced that, by training and fine-tuning the classifiers on the images generated by the algorithm, they will become more robust to future attacks.
Our future plans include the extension of this attack to multiple classes, each with its own confidence level, which would make our method effectively capable of finding which classes of images are more easily mistaken for another class, leading to a measurement of the robustness of an image recognition model given a specific sample of images. To the best of our knowledge, our work is the first perturbation procedure where a post-perturbation performance metric like the confidence level can be tuned to an arbitrary precision level. This allows us to examine how little perturbation, i.e. the minimal level of change, it takes to make any model mistake a class of images for another with a strong error rate and misplaced confidence. Another area of potential future research lies in devising an approach that minimizes computational requirements. Although the proposed algorithm already demonstrates improved time complexity, further investigations can be done to optimize its computational demands and ensure broader practical applicability. Moreover, why the attack works less well for ViT and AlexNet can be further explored, so that a perturbation attack specifically tailored for them can be constructed. We hope that these findings contribute to the advancement of adversarial machine learning and provide a foundation for further exploration and refinement of targeted attack methods.
|
2304.13192 | Towards Reliable Colorectal Cancer Polyps Classification via Vision
Based Tactile Sensing and Confidence-Calibrated Neural Networks | In this study, toward addressing the over-confident outputs of existing
artificial intelligence-based colorectal cancer (CRC) polyp classification
techniques, we propose a confidence-calibrated residual neural network.
Utilizing a novel vision-based tactile sensing (VS-TS) system and unique CRC
polyp phantoms, we demonstrate that traditional metrics such as accuracy and
precision are not sufficient to encapsulate model performance for handling a
sensitive CRC polyp diagnosis. To this end, we develop a residual neural
network classifier and address its over-confident outputs for CRC polyps
classification via the post-processing method of temperature scaling. To
evaluate the proposed method, we introduce noise and blur to the obtained
textural images of the VS-TS and test the model's reliability for non-ideal
inputs through reliability diagrams and other statistical metrics. | Siddhartha Kapuria, Tarunraj G. Mohanraj, Nethra Venkatayogi, Ozdemir Can Kara, Yuki Hirata, Patrick Minot, Ariel Kapusta, Naruhiko Ikoma, Farshid Alambeigi | 2023-04-25T23:18:13Z | http://arxiv.org/abs/2304.13192v1 | Towards Reliable Colorectal Cancer Polyps Classification via Vision Based Tactile Sensing and Confidence-Calibrated Neural Networks
###### Abstract
In this study, toward addressing the over-confident outputs of existing artificial intelligence-based colorectal cancer (CRC) polyp classification techniques, we propose a confidence-calibrated residual neural network. Utilizing a novel vision-based tactile sensing (VS-TS) system and unique CRC polyp phantoms, we demonstrate that traditional metrics such as accuracy and precision are not sufficient to encapsulate model performance for handling a sensitive CRC polyp diagnosis. To this end, we develop a residual neural network classifier and address its over-confident outputs for CRC polyp classification via the post-processing method of temperature scaling. To evaluate the proposed method, we introduce noise and blur to the obtained textural images of the VS-TS and test the model's reliability for non-ideal inputs through reliability diagrams and other statistical metrics.
## I Introduction
Colorectal cancer (CRC) is the third most diagnosed cancer in the United States [1]. Early detection of precancerous polyp lesions can potentially increase the survival rate of patients to almost 90% [2]. It has been shown that the morphological characteristics of CRC polyps observed during colonoscopy screening can be used as an indicator of the neoplasticity of a polyp (i.e. its cancerous potential) [3, 4]. However, the task of early CRC polyp detection and classification using colonoscopy images is highly complex and clinician-dependent [5], increasing the early detection miss rate (EDMR) and, with it, mortality.
To address the critical EDMR issue, computer-aided diagnostics using artificial intelligence (AI) has increasingly been employed for improving the detection and characterization of cancer polyps. Examples of the utilized AI algorithms include support vector machines (SVM), k-nearest neighbors (k-NN), ensemble methods, random forests, and convolutional neural networks (CNN) [7, 8, 9, 10]. Due to the difficulties associated with obtaining medical data and patient records to generate datasets, recently, transfer learning using neural networks pre-trained on large general-purpose datasets such as ImageNet [11] has also become a widely popular technique to aid in medical computer aided diagnostics, and in particular, the detection and classification of CRC polyps [12, 13]. To evaluate the performance of the utilized AI algorithms, statistical metrics such as accuracy, precision, sensitivity, and recall are typically used in the literature. For example, Zhang et al. [13] used precision, accuracy, and recall rate to evaluate the performance of the implemented AI algorithms, while Ribeiro et al. [12] only used accuracy as an evaluation metric.
A review of the literature demonstrates that using the aforementioned statistical metrics, researchers mainly have focused on the "_correctness_" of the predictions and not the "_reliability_" and "_confidence_" of the implemented AI algorithms. In other words, these studies solely have focused on comparing the correctness of the predicted labels with the ground truth labels. Nevertheless, in sensitive AI applications such as cancer diagnosis, it is also critical to reduce incorrect diagnoses by reporting the likelihood of correctly predicting the labels, through attaching a "confidence" metric to each
Fig. 1: A conceptual three-dimensional illustration of the image dataset collected by HySenSe on 5 out of the 10 variations of 3D printed polyp phantoms. These polyp phantoms are classified based on Kudo pit-patterns, such as Asteroid, Gyrus, Round, and Oval [6]. Moreover, each of the polyp phantoms is printed with 4 different materials (i.e., DM-400, DM-600, A-40, and A-70).
prediction. Of note, accurately providing a confidence level significantly improves the interpretability and appropriate level of trust of the model's output. For instance, for the case of CRC polyps' detection and classification, a more accurate confidence estimate can better inform clinicians basing decisions on the AI diagnosis.
In case of deep neural networks, it is often erroneously assumed that the output of the final classification layer (i.e., softmax) is a realistic measure of confidence [14]. However, as shown by Guo et al. [15] taking the example of a ResNet with 110 layers [16], deep neural networks often produce a higher softmax output than the ground truth demonstrating over-confident results. Such a network with a difference in ground truth probabilities and the predicted softmax outputs is called a "_miscalibrated_" network [15]. To address this miscalibration issue and use softmax outputs of neural networks as realistic confidence estimates, different techniques have been explored in the literature. For example, Guo et al. [15] provided insights on simple post-processing calibration methods to obtain accurate confidence estimates. Moreover, modifying the loss function using the difference between confidence and accuracy (DCA) [17] and Dynamically Weighted Balanced (DWB) [18] have also been explored by researchers. Similar efforts have been made in the medical imaging community to incorporate confidence calibration in neural network models. For instance, Carneiro et al. [19] explored the role of confidence calibration for polyps classification based on colonoscopy images and used the temperature scaling technique for network calibration. Building on this, Kusters et al. [20] employed trainable methods based on DCA and DWB for confidence calibration.
In our recent work [21], solely utilizing the typical evaluation metrics (i.e., precision, accuracy, and recall), we demonstrated the high potential of utilizing a dilated Convolutional Neural Network (CNN) to precisely and sensitively (i.e., with an average accuracy of 93%) classify CRC polyps under the Kudo classification system [6]. Unlike common images provided during colonoscopy screening, this framework utilizes unique 3D textural images (shown in Fig. 1) captured by the HySenSe sensor [22], which is a novel hyper-sensitive and high-fidelity vision-based surface tactile sensor (VS-TS). In this paper, towards developing a reliable and interpretable CRC polyp classification model and in order to address our over-confident results in [21], we further develop our best-performing ML classifier and address its confidence calibration for CRC polyp classification via the post-processing method of temperature scaling. We also focus on improving the model's generalization ability for non-ideal inputs (i.e., noisy and blurry textural images) and on calculating the likelihood that the predicted CRC polyp class matches the true class.
## II Materials and Methods
### _Vision Based Tactile Sensor (VS-TS)_
In this study, we utilized a novel VS-TS called HySenSe developed in [22] to collect high-fidelity textural images of CRC polyp phantoms for the training and evaluation of our confidence-calibrated AI model. As shown in Fig. 2, this sensor consists of: (I) a deformable silicone membrane that directly interacts with polyp phantoms, (II) an optical module (Arducam 1/4 inch 5 MP camera), which captures the tiny deformations of the gel layer when it interacts with a polyp phantom, (III) a transparent acrylic plate providing support to the gel layer, (IV) an array of Red, Green and Blue LEDs to provide internal illumination for the depth perception, and (V) a rigid frame supporting the entire structure. The working principle of the VS-TS is very
Fig. 3: Exemplary visuals for noise and blur levels that are utilized in the datasets.
Fig. 2: Experimental Setup: 1: CRC polyp phantoms, 2: Mark-10 Series 5 Digital Force Gauge, 3: M-UMR12.40 Precision Linear Stage, 4: HySenSe sensor, 5: Raspberry Pi 4 Model B, 6: HySenSe image output, 7: Dimensions of polyp phantom, 8: HySenSe side view, top view, and dimensions, \(h_{s}\) = 24 mm is the height of the 3D printed rigid frame, \(t_{s}\) = 4.5 mm is the thickness of the gel layer \(w_{s}\) = 33.8 mm is the width of the gel layer, and R = 35 mm is the radius of the dome-shaped gel layer.
simple yet highly intuitive: the deformation caused by the interaction of the deformable membrane with the CRC polyp's surface is visually captured by the camera embedded in the frame. More details about the fabrication and functionality of this sensor can be found in [22].
### _Polyp Phantoms and Experimental Procedure_
Fig. 1 shows a 3D tensor conceptually illustrating the fabricated CRC polyp phantoms designed and additively manufactured based on the realistic CRC polyps described in [6]. As shown in this figure, by varying the indices (\(i\), \(j\), \(k\)) along each side of the tensor, a unique polyp \(P_{i,j,k}\) can be characterized showing one of four Kudo pit-pattern classifications \(i\) (referred to as A (Asteroid), G (Gyrus), O (Oval/Tubular), and R (Round) throughout this paper) [6], one of ten geometric variations \(j\), and one of four materials with different hardness \(k\) (representing different stages of cancer [23]). Across the four classes, the feature dimensions range from 300 to 900 microns, with an average spacing of 600 microns between pit patterns. Following the design of the polyp phantoms CAD model in SolidWorks (SolidWorks, Dassault Systemes), each of the 160 unique phantoms that constitute the dataset was printed with the J750 Digital Anatomy Printer (Stratasys, Ltd) with the different material combinations shown in Fig. 1. More details about the fabrication of polyp phantoms can be found in [21].
### _Experimental Setup and Data Collection Procedure_
Utilizing the HySenSe sensor, CRC polyp phantoms, and the experimental setup shown in Fig. 2, a set of experiments was conducted under two contact angles of 0\({}^{\circ}\) and 45\({}^{\circ}\) between the polyp face and HySenSe. Of note, 0\({}^{\circ}\) mimics a complete interaction between the HySenSe deformable layer and the polyp, in which the whole texture of the polyp can be captured by HySenSe, whereas 45\({}^{\circ}\) simulates a case in which only a limited portion of the polyp's texture can be captured by the sensor. Each of the 160 unique polyps interacted with HySenSe until a 2 N force was exerted in the 0\({}^{\circ}\) orientation. Additionally, five out of the ten geometric variations \(j\), chosen randomly from each polyp class \(i\) across each of the four materials \(k\), were used for the experiments in the 45\({}^{\circ}\) orientation, for a total of 80 angled experiments, resulting in a total of 229 samples that constitute the dataset.
### _Datasets and Pre-Processing_
Among the 229 samples, the class counts for polyp classes A, G, O, and R were 57, 57, 55, and 60, respectively. From the 229 polyp visuals, training via stratified K-fold cross-validation was performed with 5 folds on 80% (182 samples) of the dataset, while 20% (47 samples) was reserved for model evaluation purposes. The obtained HySenSe visuals were manually cropped to only include the polyp area of interest and downsized from the native 1080 x 1280 pixels to 224 x 224 to improve model performance. Three different datasets were constructed using the same training data split: (I) with neither Gaussian Blur nor Gaussian Noise transformations on the base samples (examples shown in Fig. 1), (II) with Gaussian Blur (Fig. 3(a)), and (III) with Gaussian Noise (Fig. 3(b)). Of note, the Gaussian transforms used in datasets (II) and (III) occur with a probability of 0.5 for each sample. We also used a value of \(\sigma\) ranging from 1 to 256 for blur, and a \(\sigma\) of 1 to 50 for the noise. Notably, higher values of \(\sigma\) denote more significant blur and noise, as illustrated in Fig. 3. To improve generalization, we chose the maximum blur and noise limits to be well beyond the worst case that the model may encounter in a clinical setting. Additionally, to further improve the model's robustness, all of these datasets included standard geometric augmentations, such as random cropping, horizontal and vertical flips, and random rotations between -45\({}^{\circ}\) and 45\({}^{\circ}\), each with an independent occurrence probability of 0.5.
The original 47 samples (i.e. 20% of the dataset) that were reserved for model evaluation were used to construct an expanded test set consisting of a total of \(47\times 4=188\) samples. This was achieved by combining four independent, visually distinct groups of the same 47 samples: Group A consisting of "clean" images without any Gaussian transformations applied to the samples, Group B with each of the 47 samples incorporating varying levels of Gaussian Blur, Group C with each of the samples incorporating varying levels of Gaussian Noise, and Group D with all the images experiencing a combination of both Gaussian Blur and Gaussian Noise. The 188 images resulting from combining the samples in Group A-D were used to evaluate the calibration performance of the model trained on Datasets I-III. To simulate a more reasonable level of blur and noise in the test set that the model may encounter in a real-world setting, we limited the maximum values of \(\sigma\) to 32 and 30 for blur and noise, respectively.
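As a concrete illustration, a sketch of such a pipeline with torchvision transforms is given below. The \(\sigma\) ranges and the 0.5 probabilities are taken from the text; the kernel size, crop padding, the custom noise transform, and the assumption of float image tensors on a 0-255 pixel scale are ours.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Additive Gaussian pixel noise with sigma drawn uniformly per image
    (assuming float image tensors on a 0-255 scale)."""
    def __init__(self, sigma_min=1.0, sigma_max=50.0):
        self.sigma_min, self.sigma_max = sigma_min, sigma_max

    def __call__(self, img):
        sigma = torch.empty(1).uniform_(self.sigma_min, self.sigma_max).item()
        return (img + torch.randn_like(img) * sigma).clamp(0.0, 255.0)

train_tf = transforms.Compose([
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=23,
                                                    sigma=(1.0, 256.0))], p=0.5),
    transforms.RandomApply([AddGaussianNoise()], p=0.5),        # dataset III style
    transforms.RandomCrop(224, padding=8),                      # random cropping
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomApply([transforms.RandomRotation(45)], p=0.5),  # +/- 45 deg
])
```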
### _Model Architecture_
The residual network (ResNet) architecture is the current standard ML model for polyp classification tasks [24, 25] due to its ability to curtail exploding gradients [26, 27]. Additionally, ResNets use skip connections to lessen the degradation problem, where model performance is negatively impacted by increasing its complexity [26]. Notably, standard ResNet convolutional layers do not utilize dilated kernels. Dilations maintain the spatial resolution of feature maps encountered during convolutions while also enhancing the network's receptive field to observe more details. In [28], we demonstrated the effectiveness of using a dilated CNN--inspired by the ResNet architecture--to capture and classify the intricate textural features seen in our dataset, while outperforming state-of-the-art networks across a wide range of clinically relevant metrics. In this work, we employ the same model to examine its response to calibration and address the over-confident results reported in [28].
### _Model Calibration_
Confidence calibration is the problem of matching the output confidence level with the model's actual likelihood of being correct. It is an important step towards improving model interpretability, as most deep neural networks are typically over-confident in their predictions [15]. Of note, a model is said to be perfectly calibrated when the confidence
level of a prediction represents the true probability of the prediction being correct [15]. Mathematically speaking, if input \(X\) is considered with class labels \(Y\), the predicted class is \(\hat{Y}\) and \(\hat{P}\) is its associated confidence, then for perfect calibration, the probability \(\mathbf{P}\) satisfies:
\[\mathbf{P}(\hat{Y}=Y|\hat{P}=p)=p,\;\;\forall\;p\in[0,1]\]
where the probability is over the joint distribution.
#### Ii-B1 Temperature Scaling
Temperature scaling is the simplest extension of Platt scaling [29] and uses a single parameter \(T>0\) for all classes. Guo et al. [15] have shown that temperature scaling is an effective method for confidence calibration. Although other trainable calibration methods (e.g., DCA and DWB) also exist, we chose temperature scaling due to its simplicity and independence from model training. Given a logit vector \(z_{i}\), which is the input to the SoftMax function \(\sigma_{SM}\), the new confidence prediction is:
\[\hat{q_{i}}=\max_{k}\sigma_{SM}\left(\frac{z_{i}}{T}\right)^{(k)}\]
where the parameter \(T\) is the temperature, learned over the holdout validation set by minimizing the negative log-likelihood. Of note, this approach works by "softening" the output of the SoftMax function.
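A compact sketch of fitting \(T\) on held-out validation logits, assuming they have been collected into tensors beforehand, is shown below; optimizing \(\log T\) rather than \(T\) itself is our design choice to keep the temperature positive.

```python
import torch

def fit_temperature(logits, labels, max_iter=50):
    """Learn T > 0 on held-out validation logits/labels by minimizing
    the negative log-likelihood (cross-entropy on scaled logits)."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# T = fit_temperature(val_logits, val_labels)
# calibrated_conf = torch.softmax(test_logits / T, dim=1).max(dim=1).values
```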
#### Ii-B2 Reliability Diagrams
Reliability diagrams are an intuitive way of visually representing model calibration [15]. By grouping predictions into bins based on their confidence levels and calculating the average accuracy in each bin, we can plot the expected sample accuracy as a function of confidence. The diagram of a perfectly calibrated model plots the identity function, and gaps in calibration can be seen as the deviation from the identity function.
Taking \(M\) equally spaced confidence bins, with \(B_{m}\) the set of indices in the \(m^{th}\) confidence interval and \(n\) the number of samples, the accuracy of \(B_{m}\) can be calculated as [15, 20]:
\[acc(B_{m})=\frac{1}{|B_{m}|}\sum_{i\in B_{m}}1(\hat{y_{i}}=y_{i})\]
The average confidence within \(B_{m}\), taking \(\hat{p_{i}}\) to be the confidence of sample \(i\) [15], is:
\[conf(B_{m})=\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\hat{p_{i}}\]
#### Ii-B3 Metrics
In addition to accuracy (A), sensitivity (S), and precision (P), we use the following scalar summary statistics for calibration; a computational sketch is given after the definitions below. Similar to reliability diagram construction, the predictions are divided into \(M\) equal confidence bins. Taking \(B_{m}\) to be the set of indices in the \(m^{th}\) confidence interval, and \(n\) the number of samples, we have:
1. MCE: Maximum Calibration Error [15]: \[MCE=\max_{m\in\{1,2,...M\}}|acc(B_{m})-conf(B_{m})|\]
2. ECE: Expected Calibration Error [15]: \[ECE=\sum_{m=1}^{M}\frac{|B_{m}|}{n}|acc(B_{m})-conf(B_{m})|\]
3. ACE: Average Calibration Error [15]: \[ACE=\frac{1}{M^{+}}\sum_{m=1}^{M}|acc(B_{m})-conf(B_{m})|\]
where \(M^{+}\) is the number of non-empty bins.
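The three errors can be computed directly from per-sample confidences and correctness indicators. Below is a minimal sketch under the equal-width binning defined above; the array names and the half-open bin convention are illustrative assumptions.

```python
import numpy as np

def calibration_errors(confidences, correct, n_bins=10):
    """ECE, MCE and ACE from per-sample confidences and 0/1 correctness
    (both 1-D numpy arrays), using equally spaced confidence bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, gaps = 0.0, []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue                              # skip empty bins
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap                # weight by |B_m| / n
        gaps.append(gap)
    return ece, max(gaps), float(np.mean(gaps))   # ECE, MCE, ACE (over M+ bins)
```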
## III Results and Discussion
Accuracy, Average Confidence, MCE, ECE, and ACE were recorded for the network trained on Datasets (I)-(III) and evaluated on a dataset that incorporates clean, blurry, and noisy images, as well as a combination of noise and blur. Results have been summarized in Table I and shown in Figs. 4-6. As can be observed from these results, the calibrated models produce lower MCE, ECE, and ACE, although the gap varies between the datasets. It is of note that when trained on Dataset I (i.e., Fig. 4), which contains only clear images, the model accuracy over the test set (which contains blurry and noisy images) is only 60%, yet the average reported confidence is 80%. This considerable discrepancy between the model's true performance (i.e. accuracy) and its reported performance (i.e. confidence) highlights the need for calibration.
When trained on Dataset II with clean and blurry images, as shown in Fig. 4(a), there is a slight improvement in model performance; however, the uncalibrated model still has a tendency to over-report confidence. For this dataset, the model accuracy is 62%, while the average confidence is 82%. A similar trend is seen when training on Dataset III with clean and noisy images. In this case, the model accuracy is 68%, with the average confidence being 78%. Although the gap between model confidence and accuracy decreases with the calibration, we note that temperature scaling does not seem to perform as well on Dataset III relative to Datasets I and II. As can be observed from Fig. 5(b), there is still a gap of 4% between average confidence and accuracy in the calibrated model, whereas the gap is reduced to 1% and 2% for Datasets I and II through calibration, respectively.
The reliability diagrams show that none of the models manage to achieve perfect calibration even after temperature scaling despite the average accuracy and confidence lining up. The achieved accuracies of the bins for the calibrated
\begin{table}
\begin{tabular}{|l||c|c||c|c||c|c||c|c||c|}
\hline
\multirow{2}{*}{Dataset} & \multicolumn{2}{c||}{ECE} & \multicolumn{2}{c||}{MCE} & \multicolumn{2}{c||}{ACE} & \multicolumn{2}{c||}{Average Confidence} & \multirow{2}{*}{Accuracy} \\
\cline{2-9}
 & Uncalibrated & Calibrated & Uncalibrated & Calibrated & Uncalibrated & Calibrated & Uncalibrated & Calibrated & \\
\hline
I & 0.187 & 0.0901 & 0.346 & 0.291 & 0.211 & 0.139 & 79\% & 61\% & 60\% \\
\hline
II & 0.166 & 0.093 & 0.385 & 0.119 & 0.184 & 0.0845 & 82\% & 64\% & 62\% \\
\hline
III & 0.124 & 0.0663 & 0.271 & 0.258 & 0.169 & 0.090 & 78\% & 64\% & 68\% \\
\hline
\end{tabular}
\end{table} TABLE I: Calibration results for the models trained on Datasets I-III, uncalibrated vs. temperature-scaled.
The achieved per-bin accuracies of the calibrated models still deviate somewhat from the ideal diagonal in all cases, although they are, on average, closer to this diagonal than their uncalibrated counterparts, further supporting the advantage of employing a calibrated neural network.
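For reference, reliability diagrams of the kind discussed here can be reconstructed from the same binned quantities; the following matplotlib sketch (equal-width bins with \(M=10\) assumed, same variable names as above) plots per-bin accuracy against the identity diagonal of perfect calibration.

```python
import numpy as np
import matplotlib.pyplot as plt

def reliability_diagram(confs, preds, labels, M=10):
    edges = np.linspace(0.0, 1.0, M + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    accs = np.full(M, np.nan)  # NaN bins are simply not drawn
    for m in range(M):
        in_bin = (confs > edges[m]) & (confs <= edges[m + 1])
        if in_bin.any():
            accs[m] = np.mean(preds[in_bin] == labels[in_bin])
    plt.bar(centers, accs, width=1.0 / M, edgecolor="k", label="accuracy")
    plt.plot([0, 1], [0, 1], "r--", label="perfect calibration")
    plt.xlabel("confidence")
    plt.ylabel("accuracy")
    plt.legend()
    plt.show()
```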
In addition, to evaluate the models' ability to generalize to higher blur and noise levels, we plot the accuracy and confidence of the calibrated and uncalibrated models trained on each of Datasets I-III against increasing degradation. The blur \(\sigma\) ranges from 1 to 256 with logarithmic steps, and the noise \(\sigma\) from 1 to 50 with linear steps.
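Such a degradation sweep can be reproduced with standard tools; the sketch below is an assumption-laden illustration (Gaussian blur via `scipy.ndimage.gaussian_filter`, additive Gaussian noise on 8-bit intensity values, and step counts chosen for illustration), not the exact preprocessing pipeline used here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# sweep grids: 1 ... 256 logarithmically for blur, 1 ... 50 linearly for noise
blur_sigmas = np.logspace(0, 8, num=9, base=2)
noise_sigmas = np.linspace(1, 50, num=50)

def blur_image(img, sigma):
    # blur spatial axes only; img is (H, W, C) float in [0, 255]
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def add_noise(img, sigma, rng=None):
    # additive zero-mean Gaussian noise, clipped back to the 8-bit range
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0, 255)
```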
Model performance under increasing blur and noise is presented in Fig. 7. The model trained on Dataset I shows a sharp decrease in accuracy as soon as noise and blur are introduced, even though the reported confidences remain relatively high; only at large degradation levels do average confidence and accuracy line up, at around the 50% mark, which is still too low to be of clinical significance. The accuracy drop for the model trained on Dataset II is much smaller, although a significant gap between average confidence and accuracy persists over a large portion of the testing range; here the two values converge at a higher point (75%) than in the previous case. The model trained on Dataset III performs worst of the three cases considered: its accuracy drops sharply already at low \(\sigma\) and never converges with the confidence. At the maximum blur and noise levels, which are beyond what may be encountered in a real-world setting, a significant gap between accuracy and confidence remains, rendering the model uninterpretable.
As discussed previously, accuracy, precision, and sensitivity alone are insufficient to characterize the performance of a model. With this in mind, we define the best-performing model in our tests as the one that minimizes ACE, MCE, ECE, and the accuracy-confidence gap, while maximizing accuracy, sensitivity, and precision. Additionally, the model should remain calibrated even when exposed to noisy and/or blurry data. The model trained on Dataset I has the highest ACE, MCE, and ECE, indicating poor reliability. The model trained on Dataset III (noisy data) has the highest accuracy, yet it remains miscalibrated even after temperature scaling and, as discussed above, fails to generalize to higher levels of noise and blur. The model trained on Dataset II has lower accuracy, but after calibration its confidence estimates are close to ideal, and its ACE, MCE, and ECE are the lowest among the three models, making it the comparatively best-performing one. Considering these additional metrics thus allows us to choose a model that is not only accurate but also reliable and interpretable. In a clinical context, the best-performing model (i.e., the model trained on Dataset II) produces a confidence for the predicted polyp class that reflects its true accuracy. A clinician can interpret the model's confidence as a reliable measure for encountering a particular class of CRC polyps (a prediction confidence of 80% corresponds to an 80% accuracy of the model), which can be used to more reliably distinguish neoplastic polyp classes from their non-neoplastic counterparts. Thus, as opposed to the confidence reported by an uncalibrated network, a temperature-scaled network attaches clinically relevant meaning to the classification confidence of a given HySenSe output.
## IV Conclusions
In this paper, we addressed the reliability and interpretability of our previously developed best-performing neural network model [28] by using a post-processing temperature-scaling method for confidence calibration. Through testing on non-ideal inputs with blur and noise, we highlighted the difference in the confidence-accuracy gap between uncalibrated and calibrated models using reliability diagrams. We demonstrated that traditional metrics such as accuracy are not sufficient to encapsulate model performance, demanding additional metrics that capture model reliability and interpretability under real-world conditions. Using these additional metrics, we showed that the proposed confidence-calibration method provides a better AI algorithm for reliable CRC polyp diagnosis and classification. Such algorithms can provide trustworthy and reliable outputs that potentially reduce the EDMR by making it easier for the clinician to take decision control for low-confidence predictions. Our future work will primarily focus on the variance of this confidence estimate, which is captured by another metric called uncertainty.
Fig. 7: Results for model performance when subjected to increasing levels of blur and noise. |